Plotting 95% Confidence Bands in R


I am comparing estimates from subject-specific GLMMs and population-average GEE models for a publication I am working on. As part of this, I want to visualize the predictions of each type of model along with 95% confidence bands.

First I had to make a new data set for prediction. I could have compared fitted values with confidence intervals, but I am specifically interested in comparing predictions for particular variables while holding the others constant. For example, soil temperature is especially important for salamanders, so I am interested in the predicted effects of soil temperature from the different models. I used the expand.grid and model.matrix functions in R to generate a new data set in which soil temperature varied from 0 to 30 C. The other variables were held constant at their mean levels during the study. Because of how model.matrix handles contrasts for factors, I had to include more than one level of the factor "season"; I then removed all seasons except spring. In effect I am asking: what is the effect of soil temperature on salamander activity during the spring when the other conditions are held constant (e.g., windspeed = 1.0 m/s, rain in the past 24 hours at its mean level)? This code is based on code from Ben Bolker via http://glmm.wikidot.com.

# Compare effects of SoilT with 95% CIs
formula(glmm1)

# New data set for prediction: SoilT varies from 0 to 30 C, other
# covariates held at their means (assumes the original data are attached)
newdat.soil <- expand.grid(
  SoilT = seq(0, 30, 1),
  RainAmt24 = mean(RainAmt24),
  RH = mean(RH),
  windspeed = mean(windspeed),
  season = c("spring", "summer", "fall"),
  droughtdays = mean(droughtdays),
  count = 0
)
newdat.soil$SoilT2 <- newdat.soil$SoilT^2

# Model matrix for the GLMM terms (the spring subset is taken below,
# after the prediction columns have been added)
mm <- model.matrix(terms(glmm1), newdat.soil)

Next I calculated the 95% confidence intervals for both the GLMM and the GEE model. For the GLMM, plo and phi are the lower and upper confidence limits for the fixed effects assuming zero effect of the random sites, while tlo and thi also account for the uncertainty in the random effects.

# GLMM predictions on the link (log) scale
newdat.soil$count <- mm %*% fixef(glmm1)
# Variance of the fixed-effects predictions
pvar1 <- diag(mm %*% tcrossprod(vcov(glmm1), mm))
# Total variance: add the random-effect (plot) variance
tvar1 <- pvar1 + VarCorr(glmm1)$plot[1]
newdat.soil <- data.frame(
  newdat.soil
  , plo = newdat.soil$count - 2*sqrt(pvar1)
  , phi = newdat.soil$count + 2*sqrt(pvar1)
  , tlo = newdat.soil$count - 2*sqrt(tvar1)
  , thi = newdat.soil$count + 2*sqrt(tvar1)
)

# GEE predictions and intervals
mm.geeEX <- model.matrix(terms(geeEX), newdat.soil)
newdat.soil$count.gee <- mm.geeEX %*% coef(geeEX)
tvar1.gee <- diag(mm.geeEX %*% tcrossprod(geeEX$geese$vbeta, mm.geeEX))
newdat.soil <- data.frame(
  newdat.soil
  , tlo.gee = newdat.soil$count.gee - 2*sqrt(tvar1.gee)
  , thi.gee = newdat.soil$count.gee + 2*sqrt(tvar1.gee)
)

# Keep only spring, now that the prediction columns exist
newdat.soil.spring <- newdat.soil[newdat.soil$season == 'spring', ]

The standard errors of the fixed effects are larger in the GEE model than in the GLMM, but once the variation associated with the random effects is accounted for, the uncertainty (95% CI) around the estimates is greater in the GLMM. This is especially evident when the estimated values are large, because the constant additive variance on the log-link scale becomes multiplicative after back-transformation to the original scale. This can be seen in the plots below.
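To see why in numbers, here is a quick sketch with made-up values (for illustration only, not taken from the fitted models): adding a fixed random-effect variance on the log scale widens the interval by the same amount everywhere on that scale, but after exponentiation the absolute width grows with the size of the prediction.

# Hypothetical values for illustration only
eta <- c(0.5, 2.5)                 # two linear predictors on the log scale
se.fix <- 0.2                      # assumed fixed-effects SE
re.var <- 0.3                      # assumed random-effect variance
se.tot <- sqrt(se.fix^2 + re.var)  # combined SD, analogous to sqrt(tvar1)
# Interval widths on the response (count) scale: wider at larger predictions
exp(eta + 2*se.tot) - exp(eta - 2*se.tot)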

Although this plot does the job, it isn't an efficient use of space, nor is it easy to compare exactly where the different lines fall. It would be nice to plot everything on one set of axes. The only trouble is that the lines could be difficult to distinguish using solid and dashed/dotted line types alone. To help with this, I combined the plots and added color and shading using the polygon function. The code and plot are below.

# Base plot: GEE prediction, back-transformed to the count scale
plot(newdat.soil.spring$SoilT, exp(newdat.soil.spring$count.gee),
     xlab = "Soil temperature (C)",
     ylab = "Predicted salamander observations",
     type = 'l',
     ylim = c(0, 25))
# Shaded 95% confidence envelope for the GEE
polygon(c(newdat.soil.spring$SoilT, rev(newdat.soil.spring$SoilT)),
        c(exp(newdat.soil.spring$thi.gee), rev(exp(newdat.soil.spring$tlo.gee))),
        col = 'grey',
        border = NA)
# Dashed outline of the GEE envelope
lines(newdat.soil.spring$SoilT, exp(newdat.soil.spring$thi.gee), lty = 2)
lines(newdat.soil.spring$SoilT, exp(newdat.soil.spring$tlo.gee), lty = 2)
# GEE population-averaged prediction (red)
lines(newdat.soil.spring$SoilT, exp(newdat.soil.spring$count.gee), lty = 1, col = 2)
# GLMM prediction for a typical site (black) with dashed 95% CIs
lines(newdat.soil.spring$SoilT, exp(newdat.soil.spring$count), col = 1)
lines(newdat.soil.spring$SoilT, exp(newdat.soil.spring$thi), lty = 2)
lines(newdat.soil.spring$SoilT, exp(newdat.soil.spring$tlo), lty = 2)

[Figure: GLMM vs GEE predictions with 95% confidence intervals]
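For anyone who prefers ggplot2, here is a rough equivalent of the same figure using geom_ribbon for the shaded envelope (a sketch of my own, assuming the prediction columns are plain numeric vectors):

library(ggplot2)
ggplot(newdat.soil.spring, aes(x = SoilT)) +
  geom_ribbon(aes(ymin = exp(tlo.gee), ymax = exp(thi.gee)), fill = "grey") +  # GEE envelope
  geom_line(aes(y = exp(count.gee)), colour = "red") +  # GEE population average
  geom_line(aes(y = exp(count))) +                      # GLMM typical site
  geom_line(aes(y = exp(thi)), linetype = "dashed") +   # GLMM 95% CI
  geom_line(aes(y = exp(tlo)), linetype = "dashed") +
  labs(x = "Soil temperature (C)", y = "Predicted salamander observations")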

Now you can directly compare the results of the GLMM and GEE models. The predicted (population-averaged) values from the GEE are represented by the red line, while the average from the GLMM (random effects = 0, fixed effects only) is represented by the solid black line. The dashed lines represent the 95% confidence intervals for the GLMM, and the shaded area is the 95% confidence envelope for the GEE model. As you can see, the GEE has much higher confidence in its prediction of soil temperature effects on salamander surface activity than the GLMM. This would not be apparent without visualizing the predictions with confidence intervals, because the standard errors of the fixed effects were lower in the GLMM than in the GEE. The reason is that the GEE standard errors include the site-level (random effect) variation, while the GLMM standard errors of the covariates do not include this variation and are interpreted as the effect of a one-unit change in X on Y at a given site.
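As an aside, for a log link with a normally distributed random intercept, the population-averaged (marginal) mean can be recovered from the conditional GLMM prediction as exp(eta + sigma^2/2). A minimal sketch of that adjustment, reusing the plot-level variance from the model above (my addition, not part of the original analysis):

# Marginalize the GLMM prediction over the random intercept:
# E[exp(eta + b)] = exp(eta + sigma^2/2) when b ~ N(0, sigma^2)
sigma2 <- VarCorr(glmm1)$plot[1]
newdat.soil.spring$count.marginal <- exp(newdat.soil.spring$count + sigma2/2)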


About Daniel Hocking

I am a post-doctoral researcher at UMass-Amherst. I am interested in the use of statistical models in ecology and population biology.

Posted on July 25, 2012, in GEE, GLMM, Modeling, R. 6 Comments.

  1. Isn't the difference in confidence due to the fact that the GEE makes inference at the population level whilst the GLMM makes inference at the subject-specific level? So in a way it's silly to make this comparison, as you are comparing the confidence of two completely different hypotheses?

    • Justin – thanks for the thoughts. I think you're right. I'm still trying to wrap my head around the hypotheses tested by marginal vs subject-specific statistics. In ecology, it seems to me that most people use GLMMs but interpret the results as if they were marginal. The CI around the mean GEE estimate represents the uncertainty in the average effect of X on Y. The GLMM tests the effect of X on Y given the random effects (site, plot, or year, generally, in my case). So the mean estimate is very similar to the GEE because the random effects sum to zero; that is, it is the effect of X on Y for an average site. Interpreting the uncertainty in each gets a bit more difficult for me. With a GEE it seems easy: it's the uncertainty in the mean effect of X on Y across all sites. With the GLMM, there is the uncertainty in the effect of a change in X on Y for a typical site (random site effect = 0). This doesn't include the variation across sites. With a random intercept model, X always has the same effect on Y. It gets more complicated with a random slope model, where X can have a different effect on Y depending on the site (the random variable).

      What if we're interested in the uncertainty in the expected number of animals at a new site (using the GEE or GLMM models for prediction)? In that case, we plug in the values for all of the X variables to get the expected (mean) prediction. But we also want to know what our confidence is in that prediction. I'll have to think more about that, because the handling of the uncertainty in the random effects will be very different in the two models. Do you think one is more appropriate for prediction than the other?
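      One rough way to approximate that uncertainty for the GLMM is to simulate random intercepts for a hypothetical new site. A sketch (my illustration, assuming a random intercept for plot as in the model above; it ignores the fixed-effects uncertainty in pvar1):

      # Simulate predictions at a new site by drawing random intercepts
      # from the estimated random-effect distribution
      set.seed(1)
      b <- rnorm(10000, mean = 0, sd = sqrt(VarCorr(glmm1)$plot[1]))
      # response-scale predictions for the first row of the spring data
      pred.new <- exp(as.numeric(newdat.soil.spring$count[1]) + b)
      quantile(pred.new, probs = c(0.025, 0.975))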

  2. When I read through your code, it looks like you made a prediction interval for your GLMM and a confidence interval for your GEE. Is that right? Prediction intervals are pretty much always wider than confidence intervals.

    My guess is that if you compared the same type of interval for each model, the GEE model would have the wider interval, because I would expect the marginal variance of y|x to be bigger than the conditional variance of y|b, x. Interesting!

    • Hi Ariel – In some ways I’m comparing apples and oranges because (as you indicate) I’m comparing uncertainty around marginal and conditional estimates. I could marginalize the GLMM estimates but that would be difficult and I’m not sure how valid the uncertainty would be in that case. I’m not sure I understand how you’re differentiating the “prediction interval for your GLMM” and “confidence interval for your GEE.” Could you elaborate on that and how I would compare the same type of interval for each? Thanks for the comments and interest.

      • This gives a nice example of prediction vs confidence intervals and when you might want either: http://robjhyndman.com/hyndsight/intervals/.

        Confidence intervals take into account variation around the estimate of the mean. The variation changes depending on the values of the explanatory variables (see any regression textbook for the formula). With the code
        pvar1 <- diag(mm %*% tcrossprod(vcov(glmm1),mm))
        you are calculating the variance associated with the mean for each row of explanatory variables in the new dataset. You could then use the square root of these to make asymptotic confidence intervals.
        With the code
        tvar1 <- pvar1+VarCorr(glmm1)$plot[1]
        you are calculating the variance associated with a prediction for a new set of values of the explanatory variables, and so you sum the variance associated with the estimate of the mean and the variance which should represent, essentially, sampling variation. This looks like code for an LMM, actually, not a GLMM. You can use the square root of this value to build asymptotic prediction intervals (the assumption of normality is essential for prediction intervals).
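        A minimal sketch of both interval types in code, mirroring the plo/phi and tlo/thi columns built above but with 1.96 in place of the rounded 2:

        # Asymptotic 95% confidence interval (fixed-effects variance only)
        ci.lo <- newdat.soil$count - 1.96*sqrt(pvar1)
        ci.hi <- newdat.soil$count + 1.96*sqrt(pvar1)
        # Asymptotic 95% prediction interval (adds the random-effect variance)
        pi.lo <- newdat.soil$count - 1.96*sqrt(tvar1)
        pi.hi <- newdat.soil$count + 1.96*sqrt(tvar1)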

        For your GLMM, you create both variances but then appear to only use the prediction intervals you calculate (tlo, thi) rather than the confidence intervals (plo, phi) in your graphics. For your GEE, you only ever calculate the variance around the mean, and so you only build confidence intervals (tlo.gee, thi.gee). You would need to figure out what to add to the variance to represent the sampling variation if you want prediction intervals. If this is Gaussian, you might be able to use the value on the diagonal of the covariance matrix.

        If you are working with non-Gaussian data and want prediction intervals you may want to do a little more research on how to do this properly as it can be rather less straightforward: https://stat.ethz.ch/pipermail/r-help/2003-May/033165.html.
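        In the non-Gaussian case, one common workaround is simulation: draw linear predictors from their approximate sampling distribution and then draw responses from the assumed count distribution. A rough sketch, assuming a Poisson response (my example, not from the thread):

        # Simulation-based 95% prediction interval for one new observation,
        # assuming a Poisson response (illustration only)
        i <- 1  # row of newdat.soil to predict for
        eta.sim <- rnorm(10000, mean = newdat.soil$count[i], sd = sqrt(tvar1[i]))
        y.sim <- rpois(10000, lambda = exp(eta.sim))
        quantile(y.sim, probs = c(0.025, 0.975))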

  3. Ah, I sat down to use the code and see now that you are creating the marginal variances with your tvar1. In an LMM world I would be surprised that the intervals were so different because I would expect the marginal variances to be similar. But the GLMM world is so different that I agree with you that this might not be a useful comparison. Good luck with your project!
