
Lecture 7 Model Checking for Linear Mixed Models for Longitudinal Data


Presentation Transcript


  1. Lecture 7: Model Checking for Linear Mixed Models for Longitudinal Data. Ziad Taib, Biostatistics, AZ MV, CTH, May 2011

  2. Outline of lecture 7 • Model checking for the linear model • Model checking for linear mixed models for longitudinal data

  3. 1. Introduction to model checking • The process of statistical analysis might take the following form: select a model class, consider some models within it, summarize the data, draw conclusions, and stop.

  4. In the above process, however, even after a careful selection of the model class, the data themselves may indicate that a particular model is unsuitable. It therefore seems reasonable to add a model-checking step to the original process. The new process of statistical analysis is: select a model class, consider some models, summarize the data, check the model (returning to model selection if the check fails), draw conclusions, and stop.

  5. A statistical model, whether of the fixed-effects or mixed-effects variety, represents how you think your data were generated. Following model specification and estimation, it is of interest to explore the model-data agreement by raising questions such as • Does the model-data agreement support the model assumptions? • Should model components be refined, and if so, which components? For example, should regressors be added or removed, and is the covariation of the observations modeled properly? • Are the results sensitive to model and/or data? Are individual data points or groups of cases particularly influential on the analysis?

  6. In classical linear models, this examination of model-data agreement has traditionally revolved around • the informal, graphical examination of estimates of model errors to assess the quality of distributional assumptions: residual analysis • overall measures of goodness-of-fit • the quantitative assessment of the inter-relationship of model components; for example, collinearity diagnostics • the qualitative and quantitative assessment of influence of cases on the analysis: influence analysis.

  7. The inadequacy revealed by model checking can take two forms. • The detection of systematic discrepancies: the data as a whole may show some systematic departure from the fitted model. An example of this type is informal checking using residuals. • The detection of isolated discrepancies: a few data values may be discrepant from the rest. These can be identified using measures of leverage or measures of influence.

  8. 2. The linear model. Model checking for linear models uses mainly the following statistics: • the fitted values $\hat{y} = X\hat{\beta}$, • the residuals $e = y - \hat{y} = y - X\hat{\beta}$, • the mean residual sum of squares $s^2 = e^{T}e/(n-p)$, where $n$ is the number of observations and $p$ the number of regression parameters.

  9. Residual checking • Plot the residuals against the fitted mean values $\hat{y}$; a patternless scatter around zero with roughly constant spread supports the model assumptions.
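A minimal SAS sketch of this kind of residual check for an ordinary linear model; the data set name mydata and the regressors x1 and x2 are placeholders, not part of the original slides:

    ods graphics on;

    /* Fit the linear model and save fitted values and residuals */
    proc glm data=mydata plots=diagnostics;
       model y = x1 x2 / solution;
       output out=diag p=fitted r=residual student=rstud;
    run;
    quit;

    /* Plot the raw residuals against the fitted (mean) values */
    proc sgplot data=diag;
       scatter x=fitted y=residual;
       refline 0 / axis=y;
    run;

A patternless band around the zero reference line supports the linearity and constant-variance assumptions; curvature or a funnel shape suggests a misspecified mean or heteroscedastic errors.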

  10. 3. Model checking in linear mixed models

  11. Influence diagnostics Linear models for uncorrelated data have well established measures to assess the influence of one or more observations on the analysis. For such models, closed-form update expressions allow efficient computations without refitting the model. When similar notions of statistical influence are applied to mixed models, things are more complicated. Removing data points affects fixed effects and covariance parameter estimates. Update formulas for “leave-one-out” estimates typically fail to account for changes in covariance parameters. Moreover, in repeated measures or longitudinal studies, one is often interested in multivariate influence, rather than the impact of isolated points.
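In SAS, PROC MIXED offers an INFLUENCE option that addresses exactly this point: with ITER= greater than zero the covariance parameters are re-estimated after each deletion, and EFFECT= requests deletion of whole subjects rather than single observations. A sketch with placeholder data set and variable names:

    proc mixed data=longdata method=reml;
       class id group time;
       model y = group time group*time / solution
             influence(effect=id iter=5);  /* delete one subject at a time and refit the covariance parameters */
       repeated time / subject=id type=un;
    run;

The output then includes subject-level influence statistics such as Cook's distance and the restricted likelihood distance.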

  12. Checks for isolated departures from the model • Cook’s distance can be used to assess the influence of observation i by comparing the full-data estimate $\hat{\beta}$ with the estimate $\hat{\beta}_{(i)}$ obtained without the contribution of the i’th observation; one common form is $D_i = (\hat{\beta}_{(i)} - \hat{\beta})^{T}\,[\widehat{\mathrm{Var}}(\hat{\beta})]^{-1}\,(\hat{\beta}_{(i)} - \hat{\beta}) / \mathrm{rank}(X)$.

  13. 3. Model checking in linear mixed models

  14. SAS PROC MIXED reports three fit criteria: • -2 times the residual log-likelihood (-2RLL), • Akaike’s Information Criterion (AIC) (Akaike, 1974) or its corrected version for finite samples (AICC) (Hurvich & Tsai, 1989), • the Bayesian Information Criterion (BIC) (Schwarz, 1978). • These criteria are indices of relative goodness of fit and may be used to compare models with different covariance structures and the same fixed effects (Bozdogan, 1987; Keselman, Algina, Kowalchuk, & Wolfinger, 1998; Littell et al., 1996; Wolfinger, 1993, 1996, 1997).
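A minimal PROC MIXED call showing where these criteria are reported (data set and variable names are placeholders); the default Fit Statistics table lists -2 Res Log Likelihood, AIC, AICC and BIC, and the IC option adds an expanded table of information criteria:

    proc mixed data=longdata method=reml ic;
       class id time;
       model y = time / solution;
       repeated time / subject=id type=ar(1);
    run;

Refitting with different TYPE= values on the REPEATED statement, while keeping the MODEL statement fixed, gives the comparable fit criteria referred to above.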

  15. 3.1 Model selection: likelihood • When choosing between different models we want to decide which model fits our data best. If the models compared are nested within each other, a likelihood ratio test can be performed, whose test statistic has an approximate $\chi^2$ distribution. The likelihood ratio statistic is $-2\log(L_1/L_2) = 2(\log L_2 - \log L_1) \sim \chi^2_{DF}$ (approximately), • where DF is the degrees of freedom, defined as the difference in the number of parameters between the models, and $L_1$ and $L_2$ are the likelihoods of the first (smaller, nested) and second (larger) model, respectively.

  16. • If the two models compared are not nested within each other but contain the same number of parameters, they can be compared directly by looking at the log-likelihoods: the model with the larger likelihood value wins. • If the two models are not nested and contain different numbers of parameters, the likelihood cannot be used directly; such models can still be compared with some of the methods described below. • The larger the likelihood, the better the model fits the data, and we use this when comparing different models. • Since we are interested in models that are as simple as possible, we also have to consider the number of parameters in the covariance structures: a model with many parameters usually fits the data better than a model with fewer parameters.

  17. 3.2 Model selection: information criteria • It is possible to compute so-called information criteria; there are different ways to do this, and here we show two of them, Akaike’s information criterion (AIC) and the Bayesian information criterion (BIC). The idea behind both is to penalize models with many parameters. We present the information criteria the way they are computed in SAS. • The AIC value is computed as $AIC = -2\ell_R + 2q$, where $\ell_R$ is the residual log-likelihood and $q$ is the number of parameters in the covariance structure. Formulated this way, a smaller value of AIC indicates a better model. • The BIC value is computed as $BIC = -2\ell_R + q\log(n)$, where $q$ is the number of parameters in the covariance structure and $n$ is the number of effective observations, i.e. the number of individuals. As for AIC, a smaller value of BIC is better than a larger one.

  18. Estimated covariance structure (variances on the diagonal, covariances above the diagonal, correlations below the diagonal):

             AGE 1    AGE 2    AGE 3    AGE 4    AGE 5
    AGE 1   0.8938   0.8921   1.0098   1.2798   1.6207
    AGE 2   0.7632   1.5286   1.4229   1.7209   2.1504
    AGE 3   0.7163   0.7718   2.2236   2.8210   3.3660
    AGE 4   0.6600   0.6790   0.9229   4.2017   5.2374
    AGE 5   0.6106   0.6195   0.8040   0.9100   7.8834

  19. Fit statistics for three covariance structures:

    Structure          CS        AR(1)     UN
    Cov Par            2         2         15
    -2RLL              2438.60   2150.40   1857.50
    AIC                2442.60   2154.40   1887.50
    BIC                2448.50   2160.30   1931.60
    Chi-Square         387.94    676.17    969.12
    Pr > Chi-Square    <.0001    <.0001    <.0001
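Two checks on this table, worked out here as an illustration (not part of the original slides): the AIC column follows from AIC = -2RLL + 2q with q the number of covariance parameters, and CS, being nested in UN, can be tested against it with a likelihood ratio test:

    AIC:   CS:    2438.60 + 2(2)  = 2442.60
           AR(1): 2150.40 + 2(2)  = 2154.40
           UN:    1857.50 + 2(15) = 1887.50

    LRT of CS versus UN: 2438.60 - 1857.50 = 581.10 on 15 - 2 = 13 degrees of freedom,

far beyond any usual chi-square critical value, so the unstructured covariance is strongly preferred over compound symmetry here, in agreement with its smaller AIC and BIC.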

  20. Model fit • It is possible to define a goodness-of-fit measure similar to $R^2$, the coefficient of determination often used for linear regression. It is called the concordance correlation coefficient (CCC). Unlike the AIC or the BIC, the CCC does not compare the model at hand to other models, so it does not require that other models be fitted. • For simple linear regression the corresponding formula was shown on the slide (not reproduced in this transcript).
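For reference, and as an assumption rather than a reproduction of the missing slide formula, Lin’s concordance correlation coefficient between the observed values $y$ and the fitted values $\hat{y}$ is commonly written as

    $$ CCC = \frac{2\,\mathrm{Cov}(y,\hat{y})}{\mathrm{Var}(y) + \mathrm{Var}(\hat{y}) + (\bar{y} - \bar{\hat{y}})^{2}}, $$

which equals 1 only when the fitted values reproduce the observations exactly, and decreases as the agreement between observed and fitted values deteriorates.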

  21. 3.3 Residuals for linear mixed models • In model selection we accept the model with the best likelihood value in relation to the number of parameters, but we still do not know whether the chosen model is a good model, or even whether the normality assumption we have made is realistic. To check this we can look at two types of plots of our data, • normal plots and • residual plots, to check • normality of the residuals and of the random effects, • whether the residuals seem to have a constant variance, and • outliers.

  22. The predicted values and residuals can be computed in many different ways; some of these are described in what follows. • Recall that the general linear mixed model is of the form $y_i = X_i\beta + Z_i b_i + \epsilon_i$, with $b_i \sim N(0, G)$ and $\epsilon_i \sim N(0, R_i)$ independent. • Assume we have ML (or REML) estimates $\hat{\beta}$ of the fixed parameters and empirical Bayes (EB) predictions $\hat{b}_i$ of the random effects.

  23. We can “estimate” the residuals according to the following three methods: • the marginal residuals $\hat{\xi}_i = y_i - X_i\hat{\beta}$, which predict the marginal errors $y_i - X_i\beta$, • the conditional residuals $\hat{e}_i = y_i - X_i\hat{\beta} - Z_i\hat{b}_i$, which predict the conditional errors $\epsilon_i$, and • the best linear unbiased predictor (BLUP) $Z_i\hat{b}_i$, which predicts the random-effect contribution $Z_i b_i$. • Each type of residual is useful for evaluating some of the assumptions of the model.
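In PROC MIXED these quantities can be obtained directly: OUTP= writes conditional predictions and residuals, OUTPM= writes marginal ones, and the SOLUTION option on the RANDOM statement yields the EB predictions. A sketch with placeholder data set and variable names:

    proc mixed data=longdata method=reml;
       class id time;
       model y = time x / solution
             outp=cond    /* conditional predictions X*beta_hat + Z*b_hat and residuals */
             outpm=marg;  /* marginal predictions X*beta_hat and residuals */
       random intercept / subject=id solution;
       ods output SolutionR=eb;   /* empirical Bayes predictions of the random effects */
    run;

    /* Conditional residuals against the conditional predicted values */
    proc sgplot data=cond;
       scatter x=Pred y=Resid;
       refline 0 / axis=y;
    run;

Pred and Resid are the names PROC MIXED gives to the predicted values and residuals in the OUTP= and OUTPM= data sets.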

  24. Plots of the marginal residuals against the explanatory variables can be used to assess linearity of the response with respect to those variables. Random behaviour around zero is a sign of linearity.

  25. Plots of the standardized residuals e/s against the response Y can be used to assess homogeneity of the variances as well as normality.

  26. Plots of the EB predictions $\hat{b}_i$ against the subject indices can be used to find outlying subjects. Normal plots of the elements of $\hat{b}_i$ can be used to assess normality of the random effects and to check for outliers.
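Continuing the sketch above, the EB predictions collected in the hypothetical data set eb (one row per subject, with the prediction in the Estimate column of the SolutionR table) can be inspected for outliers and normality:

    /* Random-effect predictions by subject: look for outlying subjects */
    proc sgplot data=eb;
       scatter x=id y=Estimate;
       refline 0 / axis=y;
    run;

    /* Normal quantile plot of the EB predictions */
    proc univariate data=eb noprint;
       var Estimate;
       qqplot Estimate / normal(mu=est sigma=est);
    run;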

  27. 3.4 An example. To illustrate the above procedures, we analyze data from a study conducted at the School of Dentistry of the University of São Paulo, Brazil, designed to compare a low-cost toothbrush (monoblock) with a conventional toothbrush with respect to the maintenance of the capacity to remove bacterial plaque under daily use. The data correspond to bacterial plaque indices obtained from 32 children aged 4 to 6 before and after tooth brushing in four evaluation sessions.

  28. Following Singer et al. (2004), who analyze a different data set from the same study, we considered fitting models of the form given on the next slide.

  29. Three candidate models, labelled (3.1), (3.2) and (3.3) (the equations are not reproduced in this transcript), with subscripts i for subject, d for evaluation session and j for type of toothbrush, and with separate pre- and post-brushing plaque indices.

  30. The model reduction procedure can be based on likelihood ratio tests (LRT) and on AIC and BIC. The LRT p-values corresponding to the reduction of (3.1) to (3.2) and of (3.2) to (3.3) were, respectively, 0.3420 and 0.1623. The AIC and BIC values for the three models are:

    Model   AIC     BIC
    (3.1)   95.0    68.6
    (3.2)   102.8   86.7
    (3.3)   105.6   92.1

  Based on these results, we adopt (3.3) to illustrate the use of the proposed diagnostic procedures.

  31. To check the linearity of effects, we plot the marginal residuals versus the logarithm of the pretreatment bacterial plaque index (Figure 2). The figure supports the regression model for the transformed response (log of the bacterial plaque index).

  32. The figure suggests something is wrong with observations #12.2 and #29.4

  33. (Figure only; no caption in the transcript.)

  34. References • Atkinson, A. C. (1985). Plots, Transformations, and Regression: An Introduction to Graphical Methods of Diagnostic Regression Analysis. Oxford: Oxford University Press. • Cook, R. D. and Weisberg, S. (1982). Residuals and Influence in Regression. New York: Chapman & Hall. • Cox, D. R. and Snell, E. J. (1968). A general definition of residuals (with discussion). Journal of the Royal Statistical Society, Series B 30, 248–275. • Fei, Y. and Pan, J. (2003). Influence assessments for longitudinal data in linear mixed models. In: 18th International Workshop on Statistical Modelling, G. Verbeke, G. Molenberghs, M. Aerts and S. Fieuws (eds.). Leuven, Belgium, 143–148. • Grady, J. J. and Helms, R. W. (1995). Model selection techniques for the covariance matrix for incomplete longitudinal data. Statistics in Medicine 14, 1397–1416.

  35. References 6. Jiang, J. (2001). Goodness-of-fit tests for mixed model diagnostics. The Annals of Statistics 29, 1137–1164. 7. Lange, N. and Ryan, L. (1989). Assessing normality in random effects models. The Annals of Statistics 17, 624–642. 8. Longford, N. T. (2001). Simulation-based diagnostics in random-coefficient models. Journal of the Royal Statistical Society, Series A 164, 259–273. 9. Nobre, J. S. and Singer, J. M. (2006). Fixed and random effects leverage for influence analysis in linear mixed models. (Submitted; http://www.ime.usp.br/jmsinger). 10. Oman, S. D. (1995). Checking the assumptions in mixed-model analysis of variance: a residual analysis approach. Computational Statistics and Data Analysis 20, 309–330.

  36. References 11. Verbeke, G. and Lesaffre, E. (1997). The effect of misspecifying the random-effects distribution in linear mixed models for longitudinal data. Computational Statistics and Data Analysis 23, 541–556. 12. Waternaux, C., Laird, N. M., and Ware, J. H. (1989). Methods for analysis of longitudinal data: blood-lead concentrations and cognitive development. Journal of the American Statistical Association 84, 33–41. 13. Weiss, R. E. and Lazaro, C. G. (1992). Residual plots for repeated measures. Statistics in Medicine 11, 115–124. 14. Wolfinger, R. (1993). Covariance structure selection in general mixed models. Communications in Statistics – Simulation and Computation 22, 1079–1106.

  37. Any Questions?
