Blog Archives

R script to calculate QIC for Generalized Estimating Equation (GEE) Model Selection

[UPDATE: IMPROVED CODE AND EXTENSIONS ARE NOW AVAILABLE ON https://github.com/djhocking/qicpack INCLUDING AS AN R PACKAGE]

Generalized Estimating Equations (GEE) can be used to analyze longitudinal count data; that is, repeated counts taken from the same subject or site. This is often referred to as repeated measures data, though longitudinal data typically involve more repeated observations per subject. Longitudinal data arise from studies in virtually all branches of science. In psychology or medicine, repeated measurements are taken on the same patients over time. In sociology, schools or other socially distinct groups are observed over time. In my field, ecology, we frequently record data from the same plants or animals repeatedly over time. Furthermore, the repeated measures don’t have to be separated in time. A researcher could take multiple tissue samples from the same subject at a given time. I often repeatedly visit the same field sites (e.g. same patch of forest) over time. If the data are discrete counts of things (e.g. number of red blood cells, number of acorns, number of frogs), the data will generally follow a Poisson distribution.

Longitudinal count data, following a Poisson distribution, can be analyzed with Generalized Linear Mixed Models (GLMM) or with GEE. I won’t get into the computational or philosophical differences between the conditional, subject-specific estimates associated with GLMM and the marginal, population-level estimates obtained by GEE in this post. However, if you decide that GEE is right for you (I have a paper in preparation comparing GLMM and GEE), you may also want to compare multiple GEE models. Unlike GLMM, GEE does not use full likelihood estimation but instead relies on a quasi-likelihood function. Therefore, the popular AIC approach to model selection doesn’t apply to GEE models. Luckily, Pan (2001) developed QIC, an AIC-like criterion for GEE model comparison. Like AIC, it balances model fit against model complexity to pick the most parsimonious model.
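
For reference, following Pan’s (2001) notation loosely, the criterion for a model fit with working correlation structure R is

QIC(R) = -2 * Q(beta-hat(R); I) + 2 * trace(Omega-hat(I) * V-hat(R))

where Q(beta-hat(R); I) is the quasi-likelihood evaluated at the estimates from the working structure R but computed under the independence model I, Omega-hat(I) is the inverse of the naive (model-based) variance estimator from the independence model, and V-hat(R) is the robust (sandwich) variance estimator from the fitted model. QICu replaces the trace term with 2*p, twice the number of regression parameters, and is an approximation that assumes the model is correctly specified.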

Unfortunately, there is currently no R package that calculates QIC for GEE models. geepack is a popular R package for GEE analysis, so I wrote the short R script below to calculate Pan’s QIC statistic from the output of a GEE model run in geepack. It uses the Moore-Penrose generalized matrix inverse (ginv) from the MASS package. I left my original version, which used the ordinary matrix inverse via solve(), in the code, but it is commented out so it doesn’t run. [Edit: April 10, 2012] The input for the QIC function needs to come from the geeglm function (as opposed to geese) within geepack.

I hope you find it useful. I’m still fairly new to R and this is one of my first custom functions, so let me know if you have problems using it or if there are places it can be improved. If you decide to use this for analysis in a publication, please let me know just for my own curiosity (and ego boost!).

###############################################################################
# QIC for GEE models
# Daniel J. Hocking
# 07 February 2012
# Refs:
#   Pan (2001)
#   Liang and Zeger (1986)
#   Zeger and Liang (1986)
#   Hardin and Hilbe (2003)
#   Dormann et al. (2007)
#   http://www.unc.edu/courses/2010spring/ecol/562/001/docs/lectures/lecture14.htm
###############################################################################
# Poisson QIC for geeglm{geepack} output
# Ref: Pan (2001)
QIC.pois.geeglm <- function(model.R, model.indep) {
  library(MASS)  # for ginv(), the Moore-Penrose generalized inverse

  # Fitted and observed values for the quasi-likelihood
  mu.R <- model.R$fitted.values
  # Alternative calculation from the design matrix:
  #   X <- model.matrix(model.R)
  #   names(model.R$coefficients) <- NULL
  #   beta.R <- model.R$coefficients
  #   mu.R <- exp(X %*% beta.R)
  y <- model.R$y

  # Quasi-likelihood for the Poisson family (scale and weights = 1;
  # compare poisson()$dev.resids)
  quasi.R <- sum((y * log(mu.R)) - mu.R)

  # Trace term (penalty for model complexity):
  # Omega-hat(I) is the inverse of the naive (model-based) variance
  # estimate from the independence model, obtained here with the
  # Moore-Penrose generalized inverse from the MASS package
  AIinverse <- ginv(model.indep$geese$vbeta.naiv)
  # Alt: AIinverse <- solve(model.indep$geese$vbeta.naiv) # ordinary inverse
  Vr <- model.R$geese$vbeta  # robust (sandwich) variance estimate
  trace.R <- sum(diag(AIinverse %*% Vr))
  px <- length(coef(model.R))  # number of non-redundant columns in the design matrix

  # QIC and QICu
  QIC <- (-2) * quasi.R + 2 * trace.R
  QICu <- (-2) * quasi.R + 2 * px  # approximation assuming the model is structured correctly

  output <- c(QIC, QICu, quasi.R, trace.R, px)
  names(output) <- c('QIC', 'QICu', 'Quasi Lik', 'Trace', 'px')
  output
}
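
Here is a minimal sketch of how the function might be called. The data frame (dat), response (count), predictors (temp, rain), and grouping variable (site) are all placeholders, not real data:

library(geepack)

# Fit the model of interest with an exchangeable working correlation
fit.exch <- geeglm(count ~ temp + rain, family = poisson,
                   data = dat, id = site, corstr = "exchangeable")

# Fit the same model with an independence working correlation
# (needed for the Omega-hat(I) trace term in QIC)
fit.indep <- geeglm(count ~ temp + rain, family = poisson,
                    data = dat, id = site, corstr = "independence")

QIC.pois.geeglm(fit.exch, fit.indep)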

GLMM Hell

I have been starting to analyze some data I have of repeated counts of salamanders from 5 plots over 4 years. I am trying to develop a predictive model of salamander nighttime surface activity as a function of weather variables. The repeated counting leads to the need for Generalized Linear Mixed Models (GLMM). Counts are often best described with a Poisson distribution, hence the “generalized” part. Because the counts were repeated on the same plots, plot needs to be treated as a random effect. Leaving the plot term out would amount to assuming that all the counts are independent, but in reality counts on one plot over time are likely to be correlated, and that correlation needs to be accounted for to avoid pseudoreplication. So I am stuck with a GLMM. The problem with GLMMs in a frequentist statistical framework is that they are difficult to analyze. Bolker and colleagues give the best overview of the analysis process and its challenges in Generalized Linear Mixed Models: A Practical Guide for Ecology and Evolution. They also have an online supplement to that paper that provides a worked example complete with R code using the lme4 package. I HIGHLY recommend everyone read Bolker’s paper if considering using GLMMs. Personally, I like the idea of analyzing GLMMs with Bayesian statistics rather than traditional frequentist stats. Below are a few emails that I’ve recently been exchanging with colleagues regarding GLMM. Let me know what you think.
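
For concreteness, here is a minimal sketch of the kind of model I mean, using lme4. The data frame and variable names (salamanders, surface_count, temp, rain, plot) are placeholders:

library(lme4)

# Poisson GLMM: counts as a function of weather, with a random
# intercept for plot to account for repeated counts on the same plots
fit <- glmer(surface_count ~ temp + rain + (1 | plot),
             family = poisson, data = salamanders)
summary(fit)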

Question About Selection of Correlated Predictor Variables and Model Selection:
How much correlation among independent variables is too much in a GLMM? If there is correlation among the variables, does it affect the interpretation or model selection?

Answer from a Statistician Friend:
A correlation of 0.8 and above is high; often one variable can be replaced by the other, and both are not necessary in the model. Below about 0.7, typically both variables are needed for a good model fit. I usually use stepAIC (from the MASS package in R) for model selection.

The difficulty comes in interpreting the regression coefficients: with correlation in the predictor variables, the variable that appears first in the model statement usually gets the larger absolute value, whereas the other variable has a smaller (in absolute value) coefficient. Remember the interpretation of regression coefficients: the change in the response per unit increase GIVEN ALL THE OTHER VARIABLES IN THE MODEL.

If you want coefficients that represent “additive” contributions to the variation in the response (regardless of the order in which predictors appear in the model statement), and if you have considerable multicollinearity, you might want to consider doing a principal component regression with all, or perhaps only a subgroup of, similar predictor variables.

As with most issues in statistics, there is no clear-cut, hard-fact, simple answer. Life would be simpler if there were.
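
A minimal sketch of the workflow described above, checking pairwise correlations and then letting stepAIC do the selection. The data frame (dat) and variable names (count, temp, rain, humidity) are hypothetical:

library(MASS)

# Check pairwise correlations among the candidate predictors first
round(cor(dat[, c("temp", "rain", "humidity")]), 2)

# Fit a full model, then let stepAIC add/drop terms by AIC
full <- glm(count ~ temp + rain + humidity, family = poisson, data = dat)
reduced <- stepAIC(full, direction = "both")
summary(reduced)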

Question About a Bayesian Approach to GLMM:
Hey Dan – I’m using GLMM b/c I have a repeated-measures design, count data response (negative binomial distribution), etc. I’m finding admb in R is doing the job – and I read the article you mentioned a few months back, when I started considering GLMMs…

I have never worked with Bayesian stats and wouldn’t even know where to begin. Do you have any recommendations for overview reading, and can I analyze a repeated-measures design (i.e., is there a way to cope with random factors)?

My Response:
My data sounds very similar to yours. I usually use lmer in the lme4 package. Right now I am just essentially copying the code from the online supplements of the Bolker et al. 2009 TREE paper previously mentioned. I have never seen the admb package and will have to check it out. I’ve tried glmmPQL and glmmML, but there are more examples in lmer and its S-PLUS predecessor. I am annoyed that in Zuur et al. “Mixed Effects Models and Extensions in Ecology with R” they don’t spend much time on model assumptions or model comparison. I feel like they show users how to do the analysis but not how to evaluate it. Pinheiro and Bates do a better job in “Mixed-Effects Models in S and S-Plus,” but they focus on linear and nonlinear mixed models and less on GLMM. Plus, the code is similar to R but differs enough that it can be challenging to use at times. The “SAS for Mixed Models” book is good, but SAS isn’t free and the code isn’t as transparent to me. Plus it doesn’t have good graphics, so I prefer R.

Anyway, Bayesian stats have their own can of worms, but I find the approach more intuitively appealing, and I like the transparency of the code using WinBUGS (no Mac version) called from R. There are two very good, practical books to get started. McCarthy presents a good overview and introduction to Bayesian stats in “Bayesian Methods for Ecology,” but the examples don’t get very advanced. Personally, I recommend getting that from the library and reading the first few chapters. I would then buy Marc Kery’s excellent book, “Introduction to WinBUGS for Ecologists.” It is very well written and has a wider range of examples that relate to many animal ecology studies. Clark and Gelfand have a decent modeling book with Bayesian analysis in R examples, but it’s more ecosystem/environmentally oriented than animal ecology.

Bayesian analysis treats all factors sort of like random variables drawn from population distributions, so there is no need for an explicit random vs. fixed delineation. You get estimates and credible intervals for all variables. You can essentially write the same GLMM model and then analyze it in a Bayesian framework. The big difference is in the philosophy behind frequentist vs. Bayesian statistics: Bayesians use prior information (even noninformative priors contain information about the underlying distributions). Some scientists are opposed to this, but for various reasons that I won’t go into now, I like it. Some people do want a sensitivity analysis to go along with a Bayesian analysis to determine the influence of the priors. I might go as far as to say that for GLMM-type data, Bayesian statistics are more sound (robust?) than frequentist methods, but they differ significantly from a philosophical standpoint.
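
As a rough illustration of that last point, here is a minimal sketch of the same kind of Poisson GLMM written in BUGS syntax and called from R with the R2WinBUGS package. The data, variable names (count, temp, plot.id), priors, and MCMC settings are all placeholder assumptions, not a tested analysis:

library(R2WinBUGS)

# BUGS model: Poisson counts with a random intercept for each plot
model.txt <- "
model {
  for (i in 1:n) {
    count[i] ~ dpois(lambda[i])
    log(lambda[i]) <- beta0 + beta1 * temp[i] + u[plot.id[i]]
  }
  for (j in 1:n.plot) {
    u[j] ~ dnorm(0, tau.u)   # random plot effect
  }
  beta0 ~ dnorm(0, 0.001)    # vague priors on the fixed effects
  beta1 ~ dnorm(0, 0.001)
  sigma.u ~ dunif(0, 10)
  tau.u <- pow(sigma.u, -2)
}"
writeLines(model.txt, "glmm_pois.txt")

# count, temp, and plot.id are hypothetical vectors
bugs.data <- list(count = count, temp = temp, plot.id = plot.id,
                  n = length(count), n.plot = max(plot.id))
fit <- bugs(data = bugs.data, inits = NULL,
            parameters.to.save = c("beta0", "beta1", "sigma.u"),
            model.file = "glmm_pois.txt", n.chains = 3, n.iter = 10000)
print(fit)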

Anyway, I hope that helps.

