TY - JOUR
T1 - Joint Selection in Mixed Models using Regularized PQL
AU - Hui, Francis K.C.
AU - Müller, Samuel
AU - Welsh, A. H.
N1 - Publisher Copyright:
© 2017 American Statistical Association.
PY - 2017/7/3
Y1 - 2017/7/3
N2 - The application of generalized linear mixed models presents some major challenges for both estimation, due to the intractable marginal likelihood, and model selection, as we usually want to jointly select over both fixed and random effects. We propose to overcome these challenges by combining penalized quasi-likelihood (PQL) estimation with sparsity-inducing penalties on the fixed and random coefficients. The resulting approach, referred to as regularized PQL, is a computationally efficient method for performing joint selection in mixed models. A key aspect of regularized PQL involves the use of a group-based penalty for the random effects: sparsity is induced such that all the coefficients for a random effect are shrunk to zero simultaneously, which in turn leads to the random effect being removed from the model. Despite being a quasi-likelihood approach, we show that regularized PQL is selection consistent, that is, it asymptotically selects the true set of fixed and random effects, in the setting where the cluster size grows with the number of clusters. Furthermore, we propose an information criterion for choosing the single tuning parameter and show that it facilitates selection consistency. Simulations demonstrate that regularized PQL outperforms several currently employed methods for joint selection even when the cluster size is small compared to the number of clusters, while also offering dramatic reductions in computation time. Supplementary materials for this article are available online.
AB - The application of generalized linear mixed models presents some major challenges for both estimation, due to the intractable marginal likelihood, and model selection, as we usually want to jointly select over both fixed and random effects. We propose to overcome these challenges by combining penalized quasi-likelihood (PQL) estimation with sparsity-inducing penalties on the fixed and random coefficients. The resulting approach, referred to as regularized PQL, is a computationally efficient method for performing joint selection in mixed models. A key aspect of regularized PQL involves the use of a group-based penalty for the random effects: sparsity is induced such that all the coefficients for a random effect are shrunk to zero simultaneously, which in turn leads to the random effect being removed from the model. Despite being a quasi-likelihood approach, we show that regularized PQL is selection consistent, that is, it asymptotically selects the true set of fixed and random effects, in the setting where the cluster size grows with the number of clusters. Furthermore, we propose an information criterion for choosing the single tuning parameter and show that it facilitates selection consistency. Simulations demonstrate that regularized PQL outperforms several currently employed methods for joint selection even when the cluster size is small compared to the number of clusters, while also offering dramatic reductions in computation time. Supplementary materials for this article are available online.
KW - Fixed effects
KW - Generalized linear mixed models
KW - Lasso
KW - Penalized likelihood
KW - Quasi-likelihood
KW - Variable selection
UR - http://www.scopus.com/inward/record.url?scp=85020722427&partnerID=8YFLogxK
U2 - 10.1080/01621459.2016.1215989
DO - 10.1080/01621459.2016.1215989
M3 - Article
SN - 0162-1459
VL - 112
SP - 1323
EP - 1333
JO - Journal of the American Statistical Association
JF - Journal of the American Statistical Association
IS - 519
ER -