Reef Fish

GLMM on symbiont effects on coral predation

Preliminaries

(Note that some of the functions used in this analysis are defined here. The original Sweave file from which this discussion is derived (with some ugly coding details that are suppressed here) is here.)

The basic data can be reduced, for the purposes of this exercise, to a single treatment (ttt) [which actually consists of combinations of different symbionts: crab, shrimp, both or neither, but never mind]; a binary response (predation); and a blocking factor (block). Data entry/summary:

> x <- read.csv("culcitalogreg.csv")
> x$ttt <- as.factor(x$ttt) ## treatment should be a factor
> cmat <- matrix(c(0,1,1,1,0,1,-1,0,0,1,1,-2),ncol=3)
> ## cmat <- sweep(cmat,2,sqrt(colSums(cmat^2)),"/")
> ## normalize if desired: unnecessary
> contrasts(x$ttt) <- cmat
> ## contrasts: control vs symbiont (ttt1),
> ##            crab vs shrimp (ttt2),
> ##            1 vs 2 symbionts (ttt3)
> x$block <- as.factor(x$block) ## not really necessary but logical
> x <- x[,1:4]   ## don't really need the rest
> summary(x)
     block    ttt      predation                          ttt.1   
 1      : 8   1:20   Min.   :0.000   No Symbionts            :20  
 2      : 8   2:20   1st Qu.:0.000   Pair of Crabs           :20  
 3      : 8   3:20   Median :1.000   Pair of Shrimp          :20  
 4      : 8   4:20   Mean   :0.625   Pair of Shrimp and Crabs:20  
 5      : 8          3rd Qu.:1.000                                
 6      : 8          Max.   :1.000                                
 (Other):32                                                       
> set.seed(1001) ## will be doing some simulations later

Basic fits

glm (base R)

Fit without blocking (random effects):

> mod1 <- glm(predation ~ ttt, binomial, data = x)

glmer (lme4)

Fitting with the default Laplace approximation and with adaptive Gauss-Hermite quadrature (abbreviated sometimes as AGQ and sometimes as (A)GHQ):

> library(lme4)
> ## Laplace
> mod2 <- glmer(predation~ttt+(1|block),family=binomial,data=x)
> ## AGQ 
> mod2B <- glmer(predation~ttt+(1|block),family=binomial,
+   nAGQ=8,data=x)

glmmML (glmmML)

The glmmML function is in a separate package (the glmmML package, surprisingly enough). (I chose nAGQ=8 above to match the default number of AGQ points (8) for glmmML with method="ghq").

> library(glmmML)
> ## Laplace (default)
> mod3 <- glmmML(predation~ttt,family=binomial,data=x,cluster=block)
info = 5
> ## AGQ (=GHQ) with 8 points
> mod3B <- glmmML(predation~ttt,family=binomial,data=x,cluster=block,
+   method="ghq")

Laplace gives a warning:
In glmmML.fit(X, Y, weights, cluster.weights, start.coef, 
start.sigma : Hessian non-positive definite. No variance!

This means it can fit the model but fails to get (approximate) standard errors for the parameters. We'll decide later whether we want to worry about this.
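One possible way around this (a sketch, not run here): glmmML's boot argument requests a bootstrap test of the cluster (block) effect, which doesn't rely on the Hessian-based standard errors. mod3boot is a name introduced here.

> ## bootstrap test of the block effect (1000 replicates); slow-ish
> mod3boot <- glmmML(predation ~ ttt, family = binomial, data = x,
+     cluster = block, boot = 1000)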

glmmADMB

> library(glmmADMB)
================================================================================ 

Welcome to glmmADMB

================================================================================ 
> mod4 <- glmm.admb(predation ~ ttt, random = ~1, group = "block", 
+     data = x, family = "binomial", link = "logit")

One can supposedly improve the Laplace fit with importance sampling, but I didn't really understand the meaning of the impSamp parameter from the documentation, and got errors when I tried setting it $>0$. (This is mentioned, but not discussed much, in the glmm.admb help page; probably need to see the reference therein: Skaug and Fournier (2005), "Automatic Evaluation of the Marginal Likelihood in Nonlinear Hierarchical Models", unpublished, available from http://bemata.imr.no/laplace.pdf. That paper has about a line on importance sampling, further referencing Skaug, H. J. (2002), "Automatic differentiation to facilitate maximum likelihood estimation in nonlinear random effects models", Journal of Computational and Graphical Statistics 11, 458-470.)

glmmBUGS

(Have to do this next bit carefully: both of the next two approaches load the
nlme package (used by glmmPQL). Since nlme and lme4 conflict, we have to unload/reload the packages.)

> detach("package:lme4")
> library(MASS)
> library(nlme)

The glmmBUGS package is supposed to translate a mixed-model statement into BUGS code which can then be run via R2WinBUGS. This doesn't quite work - first because the binomial-family specification doesn't get translated correctly (a dbin(x) specification should read dbin(x,1)), and then because of an unknown BUGS problem …
> library(glmmBUGS)
> gg1 <- glmmBUGS(predation ~ ttt, data = x, effects = "block", 
+     family = "binomial")
> startingValues <- gg1$startingValues
> source("getInits.R")
> library(R2WinBUGS)
> gg2 <- bugs(gg1$ragged, getInits, parameters.to.save = names(getInits()), 
+     working.directory = getwd(), model.file = "model.bug")
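A possible workaround for the first problem (a sketch, untested): patch the generated model file so the Bernoulli responses read dbin(p, 1), then re-run bugs() as above.

> ## rewrite "dbin(p)" as "dbin(p, 1)" in the generated BUGS model file
> mm <- readLines("model.bug")
> mm <- gsub("dbin\\(([^)]*)\\)", "dbin(\\1, 1)", mm)
> writeLines(mm, "model.bug")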

glmmPQL

Also worth trying with glmmPQL (which uses a less reliable algorithm, but is more flexible etc.):

> mod5 <- glmmPQL(predation ~ ttt, random = ~1 | block, family = binomial, 
+     data = x)
> pqlfix <- fixef(mod5)
> pqlran <- as.numeric(VarCorr(mod5)["(Intercept)", "StdDev"])
> detach("package:nlme")
> library(lme4)

Comparisons

> fixefmat <- cbind(rbind(glm = coef(mod1), glmmPQL = pqlfix, glmer.Laplace = fixef(mod2), 
+     glmmML.Laplace = coef(mod3), glmmADMB.Laplace = mod4$b, glmer.AGQ = fixef(mod2B), 
+     glmmML.AGQ = coef(mod3B)))
> sdvec <- c(0, pqlran, attr(VarCorr(mod2)[[1]], "stddev"), mod3$sigma, 
+     NA, attr(VarCorr(mod2B)[[1]], "stddev"), mod3B$sigma)
> (compmat <- round(cbind(fixefmat, sdran = sdvec), 3))
                 (Intercept)   ttt1   ttt2  ttt3 sdran
glm                    2.197 -2.062 -0.102 0.168 0.000
glmmPQL                3.425 -3.241 -0.203 0.329 2.154
glmer.Laplace          5.096 -4.624 -0.294 0.487 3.437
glmmML.Laplace         5.098 -4.625 -0.294 0.487 3.438
glmmADMB.Laplace       5.096 -4.624 -0.294 0.488    NA
glmer.AGQ              5.049 -4.593 -0.295 0.489 3.401
glmmML.AGQ             5.017 -4.554 -0.307 0.497 3.511

Conclusions: ignoring the random effect underestimates all of the fixed-effect sizes (this is more or less as expected); PQL underestimates the standard deviation of the random effect by roughly a third (2.15 vs. 3.44) relative to Laplace/AGQ. Laplace and AGQ give very similar answers: the Laplace results are essentially identical across all three packages tried (I couldn't figure out how to extract the random-effects $\sigma$ for glmmADMB). AGQ makes a small difference: glmer and glmmML agree that the fixed-effect sizes are very slightly reduced, but disagree on the direction of the change in the random-effects $\sigma$ (however, I wouldn't say that any of these differences are large enough to concern us in practice in this case).

Inference

Wald $Z$ tests

These are given (for fixed effects) by both glmer and glmmML as Pr(>|z|) in the standard model printout (they're missing for glmmML/Laplace because of the computational problem noted above). They suggest that there is an effect of having any symbionts at all (ttt1), but that crabs are not different from shrimp (ttt2), and that having two symbionts isn't any better than having one (ttt3). Replicating the results for the glmmML fit here (the same table appears in summary(mod3B), but that function call doesn't return anything useful - it just prints the object in the usual way):

> z.table <- with(mod3B, cbind(coefficients, coef.sd, coefficients/coef.sd, 
+     pnorm(abs(coefficients/coef.sd), lower.tail = FALSE) * 2))
> colnames(z.table) <- c("coef", "se(coef)", "z", "Pr(>|z|)")
> round(z.table, 3)
              coef se(coef)      z Pr(>|z|)
(Intercept)  5.017    1.814  2.765    0.006
ttt1        -4.554    1.432 -3.180    0.001
ttt2        -0.307    0.562 -0.547    0.585
ttt3         0.497    0.346  1.438    0.150

Or from the glmer fit:
> round(summary(mod2B)@coefs, 4)
            Estimate Std. Error z value Pr(>|z|)
(Intercept)   5.0494     1.5858  3.1842   0.0015
ttt1         -4.5930     1.3019 -3.5279   0.0004
ttt2         -0.2949     0.5642 -0.5227   0.6012
ttt3          0.4888     0.3269  1.4954   0.1348

The main problems with this analysis are that (1) it doesn't take account of the residual degrees of freedom/uncertainty in the standard error estimates (it assumes the df are large); (2) it's fairly crude - it's not clear whether the null distribution of the parameter estimates divided by their standard errors is really normal $Z$ (or $t$) in this case; and (3) it tests parameters (contrasts) one at a time. glmmML and glmer agree qualitatively (looking at the AGQ estimates, since the Laplace fit via glmmML is missing its standard error estimates), although the standard error estimates differ slightly.

Wald $t$ tests

One of the really hard parts of GLMMs (in the traditional Wald- or $F$-testing approach) is guessing (?) the appropriate degrees of freedom. In this case, being extremely conservative, we could suppose that there are a total of 10 df (one per block), minus 4 for the fixed-effect coefficients, leaving 6 df. None of the tests cross the magic $p<0.05$ line in either direction under these circumstances, i.e. the qualitative conclusions are unchanged (results are similar for glmer; see the sketch after the table):

> t.table <- with(mod3B, cbind(coefficients, coef.sd, coefficients/coef.sd, 
+     pt(abs(coefficients/coef.sd), lower.tail = FALSE, df = 6) * 
+         2))
> colnames(t.table) <- c("coef", "se(coef)", "t", "Pr(>|t|)")
> round(t.table, 3)
              coef se(coef)      t Pr(>|t|)
(Intercept)  5.017    1.814  2.765    0.033
ttt1        -4.554    1.432 -3.180    0.019
ttt2        -0.307    0.562 -0.547    0.604
ttt3         0.497    0.346  1.438    0.200
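The same calculation from the glmer (AGQ) fit, reusing the @coefs table shown earlier (a sketch; cc and t.table2 are names introduced here):

> ## Wald t tests with df = 6 from the glmer fit
> cc <- summary(mod2B)@coefs
> t.table2 <- cbind(cc[, 1:3],
+     "Pr(>|t|)" = 2 * pt(abs(cc[, 3]), df = 6, lower.tail = FALSE))
> round(t.table2, 3)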

This is of course not using any of the fancy technology (which Doug Bates distrusts) for computing corrected degrees of freedom (Satterthwaite or Kenward-Roger).

LRT

It is possible, but strongly advised against, to use likelihood ratio tests to make inferences about fixed effects (Pinheiro and Bates 2000); it might, however, be the recommended way to make inferences about random effects. The problem with testing the random effect here is that glmer can't fit models with no random effects at all. In principle we can compare the log-likelihood from glmer (with block effects) to that from glm (without block effects). In practice we have to be very careful when comparing log-likelihoods calculated by different functions, because they can incorporate (or drop) different additive constants (this makes no difference when we compare results computed by the same function, but can screw us up when we compare results computed by different functions). Ways to check/test this: (1) look at the code for both functions (ugh); (2) simulate data with a very small or zero block effect, fit it with glmer and glm (presumably getting a very small/zero estimate of the random effect and hence similar log-likelihoods), and compare; (3) set the variance parameter in the glmer fit to zero and re-evaluate (this is what I do below).
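A minimal sketch of check (2), assuming lme4 is loaded (xnull is a name introduced here):

> ## check (2): simulate data with no block effect from the glm fit,
> ## then compare the glm and glmer log-likelihoods; if the additive
> ## constants match they should be nearly equal
> xnull <- x
> xnull$predation <- rbinom(nrow(x), size = 1, prob = fitted(mod1))
> logLik(glm(predation ~ ttt, binomial, data = xnull))
> logLik(glmer(predation ~ ttt + (1 | block), family = binomial, data = xnull))

Using the varprof function (too ugly to show here; defined in glmmfuns.R), which computes a deviance profile by varying the value of the random-effects standard deviation, we can show the profile.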

> profres <- varprof(mod2B)

In the figure below, the solid line is the critical level for the LRT based on a $\chi^2$ distribution with 1 df, the dashed line is for $\chi^2_{1/2}$ (see "boundary effects" in the GLMM paper for why $\chi^2_{1/2}$ may be more appropriate). The $p$ value based on this distribution is (very small, temporarily not calculated for boring technical reasons: more comparisons below).
[Figure: deviance profile for the random-effects standard deviation (glmm_Stier-lmerprofileplot.png)]
The deviance calculated by glmer for $\sigma=0$ is 111.05; the deviance calculated by glm is 94.97, so the computed log-likelihoods do not match - we were right to be careful. (That, or I made some other mistake here.) The 95% profile confidence limits on the random-effects sd, based on $\chi^2_{1/2}$ (see below for why I think this is the best answer), are {2.02, 5.96} (see the ugly invisible code above, taken out of one of the mle profiling functions, for how to get these numbers).

We can also get likelihood profiles for the fixed effects. The basic recipe is to leave one fixed-effect parameter (the one being profiled) out of the model; use the offset argument to glmer to set it to a particular (fixed) value; and re-fit the model. For factors this is a little trickier, because it's hard to leave a single parameter out of the model. The way I ended up doing this was a bit of a hack: treat the model as a regression on the individual columns of the model matrix (it would be easier if one could pass a model matrix directly). Right now the code is fairly ugly and fairly specific to this problem; it could certainly be cleaned up and generalized …
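A minimal sketch of the offset trick for a single coefficient (ttt1), assuming glmer's offset argument behaves as in glm; X, b1vals, and prof1 are names introduced here, and the grid limits are arbitrary (the actual analysis below uses the fixprof function):

> ## profile ttt1 by fixing its coefficient via an offset and refitting
> ## the model with the remaining model-matrix columns
> X <- model.matrix(~ttt, data = x)   ## cols: (Intercept), ttt1, ttt2, ttt3
> b1vals <- seq(-8, -2, length = 25)  ## arbitrary grid around the MLE (about -4.6)
> prof1 <- sapply(b1vals, function(b1) {
+     fit <- glmer(predation ~ X[, c(1, 3, 4)] - 1 + (1 | block),
+         offset = b1 * X[, 2], family = binomial, data = x)
+     -2 * c(logLik(fit))   ## deviance at this fixed value of ttt1
+ })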

Plots of the deviance profiles for each fixed-effect parameter. Red lines are quadratic approximations based on the local curvature, as used in the Wald $Z$ tests; horizontal lines are the LRT cutoff (for quadratic approximations, equivalent to the Wald $Z$ 95% critical value, which as suggested above may not be a reliable indicator in this case).

> vals <- fixprof(mod2B)
> op <- par(mfrow = c(2, 2), mgp = c(2, 1, 0), mar = c(3, 3, 1, 
+     1))
> mindev <- (-2 * logLik(mod2B))
> for (i in 1:4) {
+     with(vals[[i]], {
+         plot(parval, ML, xlab = names(vals)[i], ylab = "deviance")
+         curve(((x - fixef(mod2B)[i])/fixefsd(mod2B)[i])^2 + mindev, 
+             add = TRUE, col = 2)
+         abline(h = mindev + qchisq(0.95, 1))
+     })
+ }
> par(op)

[Figure: deviance profiles for the fixed-effect parameters (glmm_Stier-fixefprofplot.png)]

AIC

(Section not complete.)

> ## glmmML fits have no logLik/AIC methods, so define simple ones; note
> ## that this logLik method counts only the fixed-effect coefficients in
> ## "df" (it ignores the random-effects standard deviation)
> logLik.glmmML <- function(x) {
+     loglik <- (-x$deviance)/2  ## glmmML stores deviance = -2 log-likelihood
+     attr(loglik, "df") <- length(coef(x))
+     loglik
+ }
> AIC.glmmML <- function(x) x$aic

Both glmmML and glmer fits will give AIC values, although it is a bit tricky to get them:
> library(stats4)
> sapply(list(mod2, mod3, mod2B, mod3B), AIC)
      ML                ML          
70.70584 70.70584 74.82970 70.30988 
> sapply(list(mod2, mod3, mod2B, mod3B), logLik)
       ML                  ML           
-30.35292 -30.35292 -32.41485 -30.15494

The same issues about comparing log-likelihoods from different functions apply here too, but with an additional complication: parameter counting (as well as likelihood constants) can differ among functions, e.g. whether one counts an implicitly estimated variance term or not. So you have to be careful. Once you have the log-likelihoods, though, it is fairly easy to figure out how a given function is counting parameters - easier than working out the log-likelihood comparisons (see above).
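A quick sketch of checking the counting directly (the glmmML value uses the logLik.glmmML method defined above, which ignores $\sigma$):

> ## compare the "df" attribute that each logLik method reports
> sapply(list(glmer = mod2, glmmML = mod3), function(m) attr(logLik(m), "df"))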

Randomization/simulation approaches

The idea here is to simulate from the null model (zero block effect, or zero for some treatment effect or combination of effects), refit the full model, compute some summary statistic - the log-likelihood or deviance (or a $Z$ or $t$ statistic) - and repeat. For many (1000 or so) simulations these statistics give the null distribution, which you can then compare against the observed value. In general this is fairly simple (not necessarily even worth packing into/hiding behind a function): fit the reduced model; use simulate to generate new realizations from the reduced model; for each realization, use refit to refit the full model and (possibly) the reduced model to the simulated data; and compute some sort of summary statistic (deviance/log-likelihood difference, $Z$/$t$ statistic, ?).

Random effects

For dropping the only random effect from the model we can do almost the same thing, but there is a little problem: we need to simulate from a glm fit instead of a reduced glmer fit, because glmer can't fit models with no random effects at all. (In this case I had to write simulate.glm since it didn't exist; it's in glmmprof.R.) As promised, the code is actually pretty simple. The extra complexities are:

  • a little bit of code to load previously computed results from a file/save the results to a file (since this is computationally intensive);
  • because glm computes the likelihood differently from glmer, we use the zerodev function (defined above) to return the value of the deviance when a model is refitted with the random-effects standard deviation forced to zero, rather than what we would usually do (refitting a glm model to the new realization and computing the deviance that way)
> if (file.exists("glmersim1.RData")) {
+     load("glmersim1.RData")
+ } else {
+     svals <- simulate(mod1, nsim = 1000)
+     t0 <- system.time(sim1 <- apply(svals, 2, function(x) {
+         r <- refit(mod2, x)
+         zerodev(r) - deviance(r)
+     }))
+     save("svals", "t0", "sim1", file = "glmersim1.RData")
+ }

(this took 26 seconds on my laptop). The following figure compares the $\chi^2_1$ distribution with the actual distribution of deviance differences:
[Figure: Q-Q plot of the simulated deviance differences against $\chi^2_1$ (glmm_Stier-plotqq1.png)]
The next figure compares the observed null distribution to the $\chi^2_{1/2}$ distribution. (For an upper tail area $\alpha$, the quantiles of this distribution are equal to the $1-2\alpha$ quantiles of $\chi^2_1$; e.g., the 95% quantile of $\chi^2_{1/2}$ equals the 90% quantile of $\chi^2_1$. The qchibarsq function from the emdbook package is a convenience function that implements this distribution.) The solid line, which looks really bad, intersects the (0.25, 0.75) quantiles; this distribution is extremely skewed, so most of the points actually fall in the lower left-hand corner. The dashed line (admittedly cherry-picked) intersects the (0.25, 0.95) quantiles and seems OK (this varies by realization; it looked better for a previous set of simulation results …)
[Figure: Q-Q plot of the simulated deviance differences against $\chi^2_{1/2}$ (glmm_Stier-plotqqhalf.png)]
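As a quick sanity check of the quantile identity (a sketch; qchibarsq is from emdbook, as noted above):

> ## the 95% quantile of the 50:50 mixture should equal the 90% quantile
> ## of chi^2 with 1 df (both are about 2.71)
> library(emdbook)
> all.equal(qchibarsq(0.95), qchisq(0.90, 1))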
The critical values/quantiles from the simulated and the $\chi^2_{1/2}$ distribution:
> cbind(sim = quantile(sim1, c(0.9, 0.95, 0.99)), qhalf = qchibarsq(c(0.9, 
+     0.95, 0.99)))
         sim    qhalf
90% 1.121128 1.642374
95% 2.157363 2.705543
99% 5.454854 5.411894
> obsdev <- c(zerodev(mod2B) - (-2 * logLik(mod2B)))

Estimated $p$ values for the random effect from the simulated and the $\chi^2_{1/2}$ distributions (both very, very small):
> pchibarsq(obsdev, lower.tail = FALSE)
          ML 
5.295354e-12 
> sum(sim1 >= obsdev)/length(sim1)
[1] 0

or at any rate, less than about 1/1001 (the resolution of 1000 simulations) …
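The conventional add-one correction gives a point estimate that avoids a literal zero (a sketch):

> ## (r + 1)/(n + 1), where r is the number of simulated values at least
> ## as large as the observed statistic
> (sum(sim1 >= obsdev) + 1)/(length(sim1) + 1)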

Fixed effects

The simplest thing to do is to explore the overall effect of the treatment.

> ## fit reduced model
> ## in progress: hacking to try to figure
> ##   out what happens in funny cases (false convergence etc.);
> ##   are these detectable from the model fit object, or
> ##   do we have to promote/trap errors to find out when
> ##   it happens?
> 
> mod6B <- glmer(predation~(1|block),family=binomial,
+ nAGQ=8,data=x)
> if (file.exists("glmersim2.RData")) {
+   load("glmersim2.RData")
+ } else {
+   svals <- my.mer.sim(mod6B,nsim=1000)
+   options(warn=2)
+   options(error=recover)
+   i <- 0 
+   t1 <- system.time(sim2 <- apply(svals,2,
+                                   function(z) {
+                                     i <<- i+1
+                                     r0 <- refit(mod6B,z)
+                                     r1 <- refit(mod2B,z)
+                                     c(deviance(r0)-deviance(r1))
+                                   }))
+   save("t1","sim2",file="glmersim2.RData")
+ }

(Not finished: this works, but there are a lot of outliers (negative and large positive values). These probably correspond to convergence problems (I also get warnings about false convergence), but I'm not quite sure how to filter them. A crude first pass appears below.)
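One crude filtering idea (a sketch; sim2.ok and obsdev2 are names introduced here, and the large positive outliers would need a different heuristic): since the models are nested, the LRT statistic can't be negative if both fits converged, so negative values can at least be dropped.

> ## drop impossible (negative) deviance differences, then recompute
> ## the simulation p value for the overall treatment effect
> sim2.ok <- sim2[sim2 >= 0]
> obsdev2 <- deviance(mod6B) - deviance(mod2B)
> mean(sim2.ok >= obsdev2)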

To do

  • finish the example for simulation testing of a fixed effect? - also the possibility of saving other summary statistics, especially the $t$/$Z$ statistic. A canned simulate_null method?
  • more AIC stuff, model averaging?
  • incorporate profile confidence limits (backspline) in varprof code? profiling code could be tested and polished quite a bit: (1) better ways to manipulate formulas and model matrices; (2) test on other models, more robust, work out what to do about zero-intercept models, binomial responses with multiple columns, etc. (3) make profile objects match structure of glm/mle profile objects, so we can use the code for plotting, confidence intervals etc.
  • structured bootstrapping (by block) example? (Can easily bootstrap on blocks: however, this will be a little slower than resampling just the response vector; we will need to refit every time. Sampling design will be such that we will have unbalanced data - fewer blocks, some blocks overrepresented.) Start by splitting data by block, sample blocks with replacement, replicate as necessary, squash back together, fit …
  • ASReML, SAS results ??? (MLWin, Stata, ???)
  • make glmmBUGS work?
  • plot proportions/results from raw data? Hard to show results by block: only 2 binary points per ttt:block combination.
  • can trace of hat matrix be resurrected??
  • work with simulated data of a similar size?