Monday, August 27, 2018

Zero-Inflated Poisson and Negative Binomial Models with GLMMadaptive

Clustered/Grouped Count Data

Clustered/grouped count data often exhibit extra zeros and over-dispersion. To account for these features, Poisson and negative binomial mixed effects models with an extra zero-inflation part are used. These models entail a logistic regression model for the extra zeros, and a Poisson or negative binomial model for the remaining zeros and the positive counts. In both parts, random effects are included to account for the correlations in the repeated measurements.

Estimation under maximum likelihood is challenging due to the high dimension of the random effects vector. In this post, we will illustrate how to estimate the parameters of such models using the package GLMMadaptive that uses the adaptive Gaussian quadrature rule to approximate the integrals over the random effects. The function in the package that fits these models is mixed_model(). The user defines the type of model using the family argument. Arguments fixed and random specify the R formulas for the fixed- and random-effects parts of the model for the remaining zeros and the positive counts, and arguments zi_fixed and zi_random specify the formulas for the fixed- and random-effects parts for the extra zeros part.

Simulate Zero-Inflated Negative Binomial Data

To illustrate the use of function mixed_model() to fit these models, we start by simulating longitudinal data from a zero-inflated negative binomial distribution:
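
A minimal sketch of such a simulation (the parameter values and variable names below are illustrative):

# simulate longitudinal zero-inflated negative binomial data
set.seed(1234)
n <- 100   # number of subjects
K <- 8     # number of measurements per subject
t_max <- 5 # maximum follow-up time

DF <- data.frame(id = rep(seq_len(n), each = K),
                 time = c(replicate(n, c(0, sort(runif(K - 1, 0, t_max))))),
                 sex = rep(gl(2, n/2, labels = c("male", "female")), each = K))

X <- model.matrix(~ sex * time, data = DF) # design matrix, count part
X_zi <- model.matrix(~ sex, data = DF)     # design matrix, zero part

betas <- c(1.5, 0.05, 0.05, -0.03) # fixed effects, count part
gammas <- c(-1.5, 0.5)             # fixed effects, zero part
shape <- 2                         # negative binomial shape parameter
D11 <- 0.5                         # variance of the random intercepts

b <- rnorm(n, sd = sqrt(D11))      # random intercepts, count part
eta_y <- as.vector(X %*% betas) + b[DF$id]
eta_zi <- as.vector(X_zi %*% gammas)

DF$y <- rnbinom(n * K, size = shape, mu = exp(eta_y))
# extra zeros generated by the logistic part
extra_zeros <- as.logical(rbinom(n * K, 1, plogis(eta_zi)))
DF$y[extra_zeros] <- 0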



Zero-Inflated Poisson Mixed Effects Model

A zero-inflated Poisson mixed model with only fixed effects in the zero part is fitted with the following call to mixed_model() that specifies the zi.poisson() family object in the family argument:
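
# DF is the illustrative data frame simulated above
fm1 <- mixed_model(y ~ sex * time, random = ~ 1 | id, data = DF,
                   family = zi.poisson(), zi_fixed = ~ sex)

summary(fm1)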


 
Only the log link is currently available for the non-zero part and the logit link for the zero part. Hence, the estimated fixed effects for the two parts are interpreted accordingly. We extend fm1 by also allowing for random intercepts in the zero part. We should note that by default the random intercept of the non-zero part is correlated with the random intercept from the zero part:
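
# same illustrative formulas as in fm1, plus a random intercept in the zero part
fm2 <- mixed_model(y ~ sex * time, random = ~ 1 | id, data = DF,
                   family = zi.poisson(), zi_fixed = ~ sex,
                   zi_random = ~ 1 | id)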



We test if we need the extra random effect using a likelihood ratio test:
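
Because fm1 is nested in fm2, the anova() method gives the likelihood ratio test:

anova(fm1, fm2)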





Zero-Inflated Negative Binomial Mixed Effects Model

We continue with the same data, but we now take into account the potential over-dispersion in the data using a zero-inflated negative binomial model. To fit this mixed model we use an almost identical syntax to what we just did above - the only difference is that we now specify as family the zi.negative.binomial() object:
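
# same illustrative formulas as before, negative binomial family
gm1 <- mixed_model(y ~ sex * time, random = ~ 1 | id, data = DF,
                   family = zi.negative.binomial(), zi_fixed = ~ sex)

summary(gm1)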



Similarly to fm1, in gm1 we specified only fixed effects for the logistic regression for the zero part. We now compare this model with the zero-inflated Poisson model that allowed for a random intercept in the zero part. The comparison can be done with the anova() method; because the two models are not nested, we set test = FALSE in the call to anova(), i.e.:
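
# fm2 and gm1 as fitted in the sketches above
anova(fm2, gm1, test = FALSE)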



We observe that accounting for the over-dispersion seems to improve the fit more than including the random intercepts term in the zero part.

Thursday, June 14, 2018

Mixed Models with Adaptive Gaussian Quadrature

Overview

In this post, I would like to introduce my new R package GLMMadaptive for fitting mixed-effects models for non-Gaussian grouped/clustered outcomes using marginal maximum likelihood.

Admittedly, a number of packages are available for fitting similar models, e.g., lme4, glmmsr, glmmTMB, glmmEP, and glmmML, among others; more information on other available packages can also be found in the GLMM-FAQ. GLMMadaptive differs from these packages in that it approximates the integrals over the random effects in the definition of the marginal log-likelihood function using an adaptive Gaussian quadrature rule, while allowing for multiple correlated random effects.

An Example: Mixed Effects Logistic Regression

We illustrate the use of the package in the simple case of a mixed effects logistic regression. We start by simulating some data:
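
A minimal sketch of such a simulation (parameter values are illustrative):

# simulate longitudinal binary data from a mixed effects logistic regression
set.seed(1234)
n <- 100    # number of subjects
K <- 8      # number of measurements per subject
t_max <- 15 # maximum follow-up time

DF <- data.frame(id = rep(seq_len(n), each = K),
                 time = c(replicate(n, c(0, sort(runif(K - 1, 0, t_max))))),
                 sex = rep(gl(2, n/2, labels = c("male", "female")), each = K))

X <- model.matrix(~ sex * time, data = DF) # fixed effects design matrix
Z <- model.matrix(~ time, data = DF)       # random effects design matrix

betas <- c(-2.13, -0.25, 0.24, -0.05) # fixed effects coefficients
D11 <- 0.48 # variance of random intercepts
D22 <- 0.10 # variance of random slopes

b <- cbind(rnorm(n, sd = sqrt(D11)), rnorm(n, sd = sqrt(D22)))
eta_y <- as.vector(X %*% betas) + rowSums(Z * b[DF$id, ])
DF$y <- rbinom(n * K, 1, plogis(eta_y))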



We continue by fitting the mixed effects logistic regression for the longitudinal outcome y, assuming random intercepts for the random-effects part. The primary model-fitting function in the package is mixed_model(), which has four required arguments, namely,
  • fixed: a formula for the fixed effects,  
  • random: a formula for the random effects,  
  • family: a family object specifying the type of response variable, and  
  • data: a data frame containing the variables in the previously mentioned formulas.
 Assuming that the package has been loaded using library("GLMMadaptive"), the call to fit the random intercepts logistic regression is:
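
# DF is the illustrative data frame simulated above
fm1 <- mixed_model(fixed = y ~ sex * time, random = ~ 1 | id, data = DF,
                   family = binomial())

summary(fm1)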



By default, 11 quadrature points are used, but this can be adjusted using the nAGQ control argument. We extend model fm1 by also including a random slopes term; however, we assume that the covariance between the random intercepts and random slopes is zero. This is achieved by using the `||` symbol in the specification of the random argument. We fit the new model and compare it with fm1 using the anova() function that performs a likelihood ratio test:
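
# uncorrelated random intercepts & slopes via the || notation
fm2 <- mixed_model(fixed = y ~ sex * time, random = ~ time || id, data = DF,
                   family = binomial())

anova(fm1, fm2)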



 We further extend the model by estimating the covariance between the random intercepts and random slopes, and we use 15 quadrature points for the numerical integration:
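
For example (nAGQ is the control argument mentioned above, here passed directly to mixed_model()):

# correlated random intercepts & slopes, with 15 quadrature points
fm3 <- mixed_model(fixed = y ~ sex * time, random = ~ time | id, data = DF,
                   family = binomial(), nAGQ = 15)

summary(fm3)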



Capabilities and Further Reading

The package offers a wide range of methods for standard generic functions in R applicable to regression model objects, and to mixed-model objects in particular (e.g., fixef(), ranef(), etc.); more information can be found in the Methods for MixMod Objects vignette. In addition, some highlights of its capabilities:
  • It allows for user-defined family objects implementing not standardly available outcome distributions; more information can be found in the Custom Models vignette.
  • It can calculate fixed effects coefficients with a marginal / population-averaged interpretation using the function marginal_coefs().
  • Function effectPlotData() calculates predictions and confidence intervals for constructing effect plots; a short sketch illustrating these two functions is given below.
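
For instance, using the fm3 model from above (the covariate grid nDF is hypothetical):

# fixed effects with a marginal / population-averaged interpretation
mc <- marginal_coefs(fm3, std_errors = TRUE)

# predictions over a grid of covariate values for an effect plot
nDF <- with(DF, expand.grid(time = seq(0, t_max, length.out = 15),
                            sex = levels(sex)))
plot_data <- effectPlotData(fm3, nDF)
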
The development version of the package is available on the GitHub repository.

Monday, October 24, 2016

Multivariate Joint Models for Multiple Longitudinal Outcomes and a Time-to-Event

A new development version of package JMbayes has been rolled out on the dedicated GitHub repo. The major addition in this version is a set of new functions that can fit multivariate joint models for multiple longitudinal outcomes and a time-to-event. As with other GitHub packages, it can be easily installed using the install_github() function from package devtools. The new function that fits multivariate joint models is called mvJointModelBayes() and has a very similar syntax to the jointModelBayes() function described in a previous post. However, contrary to jointModelBayes(), which is entirely written in R, the main bulk of the computations of mvJointModelBayes() is based on C++ code building upon the excellent Rcpp and RcppArmadillo packages.

To make the connection between the two functions clearer from a practical viewpoint, let's first fit a univariate joint model using mvJointModelBayes(). As for jointModelBayes(), we first need to separately fit a mixed-effects model for the longitudinal outcome, and a Cox model for the survival outcome. However, contrary to jointModelBayes(), which requires the mixed model to be fitted with function lme() from the nlme package, for mvJointModelBayes() the mixed model needs to be fitted with the new function mvglmer(). This follows the syntax of lmer() from package lme4. For example, the code below fits a linear mixed model for the longitudinal biomarker serum bilirubin from the Mayo Clinic PBC data set:
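
For instance (assuming the development version of JMbayes has been loaded with library("JMbayes"); the random-effects structure is an illustrative choice):

MixedModelFit1 <- mvglmer(list(log(serBilir) ~ year + (year | id)),
                          data = pbc2, families = list(gaussian))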



The main arguments of this function are: formulas, a list of lme4-like formulas (one formula per outcome); data, a data.frame that contains all the variables specified in formulas (NAs allowed); and families, a list of family objects specifying the type of each outcome (currently the gaussian, binomial and poisson families are allowed). Next, we fit a Cox model using function coxph() from the survival package; argument model of coxph() needs to be set to TRUE:
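
# the baseline covariates here are an illustrative choice
CoxFit1 <- coxph(Surv(years, status2) ~ drug + age, data = pbc2.id,
                 model = TRUE)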


Then the univariate joint model is fitted by the following simple call to mvJointModelBayes():
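
JMFit1 <- mvJointModelBayes(MixedModelFit1, CoxFit1, timeVar = "year")

summary(JMFit1)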


To fit a multivariate joint model for multiple longitudinal outcomes, the only thing that needs to be adapted in the procedure described above is the call to mvglmer(). For example, to fit a bivariate joint model for serum bilirubin and presence of spiders, we use the syntax:
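
# the random-effects structures are again illustrative choices
MixedModelFit2 <- mvglmer(list(log(serBilir) ~ year + (year | id),
                               spiders ~ year + (1 | id)),
                          data = pbc2, families = list(gaussian, binomial))

JMFit2 <- mvJointModelBayes(MixedModelFit2, CoxFit1, timeVar = "year")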


In addition, similarly to jointModelBayes(), mvJointModelBayes() allows the specification of different functional forms for the longitudinal outcomes that are included in the Cox model. As an example, we extend model JMFit2 by including the value and slope term for bilirubin, and the area term for spiders (on the log-odds scale). To include these terms in the model we specify the Formulas argument. This is specified in a similar manner as the derivForms argument of jointModelBayes(). In particular, it should be a list of lists. Each component of the outer list should have as its name the name of the corresponding outcome variable. Then in the inner list we need to specify four components, namely, the fixed & random R formulas specifying the fixed- and random-effects parts of the term to be included, and indFixed & indRandom, integer indices specifying which of the original fixed and random effects are involved in the calculation of the new term. In the inner list you can also optionally specify a name for the term you want to include. A couple of notes:
  1. For terms not specified in the Formulas list, the default value functional form is used.
  2. If for a particular outcome you want to include both the value functional form and an extra term, then you need to specify that in Formulas using two entries. To include the value functional form you only need to set the corresponding entry to "value"; for the second term you specify the inner list. See the example below on how to include the value and slope for serum bilirubin (for example, if in the list below the entry "log(serBilir)" = "value" were not given, then only the slope term would be included in the survival submodel).
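
A sketch of such a Formulas specification, matching the illustrative mixed models fitted above (fixed effects (Intercept, year) for both outcomes; random intercept & slope for log(serBilir), random intercept only for spiders):

Forms <- list("log(serBilir)" = "value",
              "log(serBilir)" = list(fixed = ~ 1, random = ~ 1,
                                     indFixed = 2, indRandom = 2,
                                     name = "slope"),
              "spiders" = list(fixed = ~ 0 + year + I(year^2/2),
                               random = ~ 0 + year,
                               indFixed = 1:2, indRandom = 1,
                               name = "area"))

JMFit3 <- update(JMFit2, Formulas = Forms)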

We further extend the previous model by including the interaction terms between the terms specified in Formulas and the randomized treatment indicator drug. The names specified in the list that defines the interaction factors should match the names of the output from JMFit3. Because of the many association parameters, we place a shrinkage prior on the association coefficients 'alpha'. In particular, if we have K association parameters, we assume that alpha_k ~ N(0, tau * phi_k), k = 1, ..., K. The precision parameters tau and phi_k are given Gamma priors; tau is a global shrinkage parameter, and phi_k a shrinkage parameter specific to each alpha coefficient:
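
A sketch of such a call (shrink_alphas in priors is taken here as the option activating the shrinkage prior described above):

Ints <- list("log(serBilir)" = ~ drug,
             "slope(log(serBilir))" = ~ drug,
             "area(spiders)" = ~ drug)

JMFit4 <- update(JMFit3, Interactions = Ints,
                 priors = list(shrink_alphas = TRUE))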



Future updates of the package will include additional features, including among others
  • methods for the generic functions survfitJM() and predict() to compute dynamic predictions in the multivariate longitudinal data setting, for both the longitudinal and survival outcomes.
  • allow for competing risks.
  • make the C++ code (even) faster.

Friday, March 18, 2016

An Integrated Shiny App for a Course on Repeated Measurements Analysis (completed)

Repeated Measurements Analysis

Repeated measurements analysis, and in particular longitudinal data analysis, is one of the two most frequently used types of analysis in my field (Biostatistics) - the other being survival analysis. Starting from this year, I will be teaching in my university a new course on regression models for repeated measurements data that is primarily aimed at applied statisticians, epidemiologists and clinicians. In general, this type of audience often finds the topic quite difficult, mainly because one has to carefully consider the two levels of such data, namely, how to model longitudinal evolutions and how to model correlations. On top of that, many of the researchers following this course have been primarily exposed to SPSS, making the transition to R, which I will be using in the course, somewhat more demanding.

Shiny app

Based on the considerations mentioned above, when I was developing the course I was thinking of ways to facilitate both the understanding of the key concepts of repeated measurements analysis, and how to effectively explain the use of R to analyze such data. The answer to both questions was to utilize the great capabilities of shiny. I have created an app that replays all the analyses done in the course - a snapshot is shown below.



The students can select a chapter and a section, see the code used in that section in the 'Code' tab, and examine the output in the 'Output' tab. The slides of the course are also integrated in the app and can be seen in the 'Slides' tab. The 'Help' tab explains the basic usage of the main functions used in the selected chapter. To further enhance the understanding of some key concepts, such as how random effects capture correlations and how longitudinal evolutions are affected by the levels of baseline covariates, the app allows students to interactively change the values of some parameters that control these features. The app also includes four practicals aimed at Chapter 2, which introduces marginal models for continuous data; Chapter 3, which explains linear mixed-effects models; Chapter 4, which presents the framework of generalized estimating equations; and Chapter 5, which presents generalized linear mixed-effects models. Chapter 6 focuses on explaining the issues with incomplete data in longitudinal studies. For each practical the students may reveal the answers to specific questions they have trouble solving, or download a whole R Markdown report with a detailed explanation of the solutions.

The app is based on some popular packages for repeated measurements analysis (nlme, lme4, MCMCglmm, geepack), and some additional utilities packages (lattice, MASS, corrplot).

The app is available in my dedicated GitHub repository for this course, and can be invoked using the command (assuming that you have the aforementioned packages installed):

shiny::runGitHub("Repeated_Measurements", "drizopoulos")



Friday, March 4, 2016

Dynamic Predictions using Joint Models

What are Dynamic Predictions

In this post we will explain the concept of dynamic predictions and illustrate how these can be computed using the framework of joint models for longitudinal and survival data, and the R package JMbayes. The type of dynamic predictions we discuss here is calculated in follow-up studies in which sample units (e.g., patients) who are followed up in time provide a set of longitudinal measurements. These longitudinal measurements are expected to be associated with events that the sample units may experience during follow-up (e.g., death, onset of disease, having a child, dropout from the study, etc.). In this context, we would like to utilize the longitudinal information available up to a particular time point t to predict the risk of an event after t. For example, for a particular patient we would like to use his available blood values up to year 5 to predict the chance that he will develop a disease before year 7 (i.e., within two years from his last available measurement). The dynamic nature of these predictions stems from the fact that each time we obtain a new longitudinal measurement, we can update the prediction we previously calculated.

Joint models for longitudinal and survival data have been shown to be a valuable tool for obtaining such predictions. They allow us to investigate which features of the longitudinal profiles are most predictive, while appropriately accounting for the complex correlations in the longitudinal measurements.

Fit a Joint Model

For this illustration we will be using the Primary Biliary Cirrhosis (PBC) data set collected by the Mayo Clinic from 1974 to 1984. For our analysis we will consider 312 patients who were randomized to D-penicillamine or placebo. During follow-up, several biomarkers associated with PBC were collected for these patients. Here we focus on serum bilirubin, which is considered one of the most important markers of disease progression. In package JMbayes the PBC data are available in the data frames pbc2 and pbc2.id, containing the longitudinal and survival information, respectively (i.e., the former is in the long format while the latter contains a single row per patient).

We start by fitting a joint model to the PBC data set. For the log-transformed serum bilirubin we use a linear mixed-effects model with natural cubic splines for time in both the fixed and random effects, and we also correct in the fixed part for age and sex. For the time-to-death we use a Cox model with baseline covariates age, sex and their interaction, plus the underlying level of serum bilirubin as estimated from the mixed model. This joint model is fitted using the following piece of code:
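
A sketch of this fit (the spline degrees of freedom are an illustrative choice):

library("JMbayes")
library("splines")

# linear mixed model for log serum bilirubin
lmeFit <- lme(log(serBilir) ~ ns(year, 2) + age + sex, data = pbc2,
              random = ~ ns(year, 2) | id)

# Cox model for the time-to-death; x = TRUE stores the design matrix
coxFit <- coxph(Surv(years, status2) ~ age * sex, data = pbc2.id, x = TRUE)

# the joint model links the underlying level of serum bilirubin to the hazard
jointFit <- jointModelBayes(lmeFit, coxFit, timeVar = "year")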




Calculate Dynamic Predictions

In package JMbayes these subject-specific predictions are calculated using function survfitJM(). As an illustration, we show how this function can be utilized to derive predictions for Patient 2 from the PBC data set using our fitted joint model jointFit. We first extract the data of this patient in a separate data frame, and then we call survfitJM():
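
ND <- pbc2[pbc2$id == 2, ] # all available measurements of Patient 2

survPrbs <- survfitJM(jointFit, newdata = ND)
survPrbs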



The last available measurement of this patient was in year 8.83, and survfitJM() will by default produce estimates of event-free probabilities from this last time point until the end of follow-up. The calculation of these probabilities is based on a Monte Carlo procedure, and in the output we obtain as estimates the mean and median over the Monte Carlo samples, along with 95% pointwise credible intervals. Hence, the probability that this patient will survive up to year 11.2 is estimated at 60%. A plot of these probabilities can be obtained using the plot() method for objects returned by survfitJM():
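
# the plotting options shown here are illustrative
plot(survPrbs, estimator = "mean", include.y = TRUE, conf.int = TRUE,
     fill.area = TRUE, col.area = "lightgrey")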




Shiny app for Dynamic Predictions

To facilitate the use of dynamic predictions in practice, a web interface has been written using package shiny. This is available in the demo folder of the package and can be invoked with a call to the runDynPred() function. With this interface users may load an R workspace with the fitted joint model(s), load the data of the new subject, and subsequently obtain dynamic estimates of survival probabilities and future longitudinal measurements (i.e., an estimate after each longitudinal measurement). Several additional options are provided to calculate predictions based on different joint models (if the R workspace contains more than one model), to obtain estimates at specific horizon times, and to extract the data set with the estimated conditional survival probabilities. A detailed description of the options of this app is provided in the 'Help' tab within the app.
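
That is:

library("JMbayes")
runDynPred()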

Friday, October 16, 2015

An Integrated Shiny App for a Course on Repeated Measurements Analysis

I will be teaching a course on statistical regression models for repeated measurements data, and I thought of creating a shiny app to let students run the code used in the course and examine the output in real time.

The app is still in development mode (Chapter 5 is still missing), but you may give it a try, and let me know of any feedback/comments you may have.

More information on installation requirements and how to invoke it is available on my GitHub page.

Sunday, August 9, 2015

Two Presentations about Joint Models

Packages JM and JMbayes @ JSM2015

This year JSM features an interesting invited session about fitting joint models in different software packages -- if you're interested drop by...

Here are my slides, in which I give a short intro to packages JM and JMbayes:




Personalized Screening Intervals for Longitudinal Biomarkers using Joint Models

This presentation introduces a new methodological framework for optimizing screening intervals for longitudinal biomarkers, based on joint models for longitudinal and time-to-event data; more information can be found on arXiv.

Here are the slides: