
Lecture 5 Model Evaluation



  1. Lecture 5: Model Evaluation

  2. Elements of Model Evaluation • Goodness of fit • Prediction error • Bias • Outliers and patterns in residuals

  3. Assessing Goodness of Fit for Continuous Data • Visual methods • Don’t underestimate the power of your eyes, but eyes can deceive, too... • Quantification • A variety of traditional measures, all with some limitations... A good review: C. D. Schunn and D. Wallach. Evaluating Goodness-of-Fit in Comparison of Models to Data. Source: http://www.lrdc.pitt.edu/schunn/gof/GOF.doc

  4. Visual evaluation for continuous data • Graphing observed vs. predicted...

  5. Examples: Goodness of fit of neighborhood models of canopy tree growth for 2 species at Date Creek, BC. [Figure: observed vs. predicted growth for each species] Source: Canham, C. D., P. T. LePage, and K. D. Coates. 2004. A neighborhood analysis of canopy tree competition: effects of shading versus crowding. Canadian Journal of Forest Research.

  6. Goodness of Fit vs. Bias [Figure: observed vs. predicted plots compared against the 1:1 line]

  7. R2 as a measure of goodness of fit • R2 = proportion of variance* explained by the model (relative to that explained by the simple mean of the data):

$$R^2 = 1 - \frac{\sum_i (obs_i - exp_i)^2}{\sum_i (obs_i - \overline{obs})^2}$$

where $exp_i$ is the expected value of observation i given the model, and $\overline{obs}$ is the overall mean of the observations. (Note: R2 is NOT bounded between 0 and 1.) * this interpretation of R2 is technically only valid for data where SSE is an appropriate estimate of variance (e.g. normal data)
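
As a minimal sketch of this formula (NumPy assumed; obs and exp are illustrative names for arrays of observations and model expectations):

```python
import numpy as np

def r_squared(obs, exp):
    """R^2 = 1 - SSE/SST: variance explained relative to the simple mean.
    Negative values mean the model fits worse than the mean of the data."""
    obs = np.asarray(obs, dtype=float)
    exp = np.asarray(exp, dtype=float)
    sse = np.sum((obs - exp) ** 2)           # error around the model predictions
    sst = np.sum((obs - obs.mean()) ** 2)    # error around the overall mean
    return 1.0 - sse / sst
```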

  8. R2 – when is the mean the mean? • Clark et al. (1998) Ecological Monographs 68:220: for i = 1..N observations in j = 1..S sites, uses the SITE means, rather than the overall mean, to calculate R2:

$$R^2 = 1 - \frac{\sum_j \sum_i (obs_{ij} - exp_{ij})^2}{\sum_j \sum_i (obs_{ij} - \overline{obs}_j)^2}$$
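
A minimal sketch of this site-mean variant (hypothetical function and argument names; site is a label for each observation's site):

```python
import numpy as np

def r_squared_by_site(obs, exp, site):
    """Clark et al. (1998)-style R^2: the baseline in the denominator is each
    observation's SITE mean rather than the overall mean of the data."""
    obs, exp, site = np.asarray(obs, float), np.asarray(exp, float), np.asarray(site)
    baseline = np.empty_like(obs)
    for s in np.unique(site):
        baseline[site == s] = obs[site == s].mean()   # site mean as baseline
    return 1.0 - np.sum((obs - exp) ** 2) / np.sum((obs - baseline) ** 2)
```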

  9. r2 as a measure of goodness of fit r2 = squared correlation (r) between observed (x) and predicted (y) NOTE: r is bounded between -1 and 1, so r2 is bounded between 0 and 1

  10. R2 vs r2 Is this a good fit (r2=0.81) or a really lousy fit (R2=-0.39)? (it’s undoubtedly biased...)
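
A synthetic illustration of how the two measures diverge (made-up data, not the slide's example): biased predictions can track the observations closely, giving a high r2, while explaining less variance than the simple mean, giving a negative R2:

```python
import numpy as np

rng = np.random.default_rng(0)
obs = rng.normal(10.0, 2.0, size=200)
exp = 0.5 * obs + 12.0 + rng.normal(0.0, 0.5, size=200)   # strongly biased predictions

r2 = np.corrcoef(obs, exp)[0, 1] ** 2                      # squared correlation
R2 = 1.0 - np.sum((obs - exp) ** 2) / np.sum((obs - obs.mean()) ** 2)
print(f"r2 = {r2:.2f}, R2 = {R2:.2f}")   # r2 stays high; R2 goes strongly negative
```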

  11. A note about notation... Check the documentation when a package reports “R2” or “r2”. Don’t assume they will be used as I have used them... [Figure: sample Excel output using the “trendline” option for a chart] The “R2” value of 0.89 reported by Excel is actually r2 (while R2 is actually 0.21). (If you specify no intercept, Excel reports true R2...)

  12. R2 vs. r2 for goodness of fit • When there is no bias, the two measures will be almost identical (but I prefer R2, in principle). • When there is bias, R2 will be low to negative, but r2 will indicate how good the fit could be after taking the bias into account...

  13. Sensitivity of R2 and r2 to data range

  14. The Tyranny of R2 (and r2) • Limitations of R2 (and r2) as a measure of goodness of fit... • Not an absolute measure (as frequently assumed), • particularly when the variance of the appropriate PDF is NOT independent of the mean (expected) value • e.g. lognormal, gamma, Poisson...

  15. Gamma Distributed Data... For a fixed shape parameter, the variance of the gamma increases as the square of the mean: with shape k and scale θ, mean = kθ and variance = kθ² = mean²/k.
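
A quick numerical check of that relationship (NumPy assumed; the shape and scale values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
shape = 2.0
for scale in (1.0, 2.0, 4.0):                  # the mean doubles at each step
    x = rng.gamma(shape, scale, size=200_000)
    # variance tracks mean^2 / shape, so it quadruples as the mean doubles
    print(f"mean={x.mean():6.2f}  var={x.var():7.2f}  mean^2/shape={x.mean()**2 / shape:7.2f}")
```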

  16. So, how good is good? • Our assessment is ALWAYS subjective, because of • Complexity of the process being studied • Sources of noise in the data • From an information theory perspective, should you ever expect R2 = 1?

  17. Other Goodness of Fit Issues... • In complex models, a good fit may be due to the overwhelming effect of one variable... • The best-fitting model may not be the most “general” • i.e. the fit can be improved by adding terms that account for unique variability in a specific dataset, but that limit applicability to other datasets. (The curse of ad hoc multiple regression models...)

  18. How good is good: deviance • Comparison of your model to a “full” model, given the probability model. For i = 1..n observations, a vector X of observed data (x_i), and a vector θ of j = 1..m parameters (θ_j): define a “full” model with n parameters θ_i = x_i (θ_full). Then:

$$D = -2\left[\ln L(\theta; X) - \ln L(\theta_{full}; X)\right]$$

Nelder and Wedderburn (1972)
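
A minimal sketch for normally distributed errors (SciPy assumed; sigma is an illustrative stand-in for your estimate of the residual standard deviation):

```python
import numpy as np
from scipy.stats import norm

def deviance(obs, exp, sigma):
    """D = -2 * (lnL(model) - lnL(full)). The full model sets each expected
    value equal to the observation itself, so its likelihood is the maximum
    attainable given sigma. For normal errors this reduces to SSE / sigma^2."""
    obs, exp = np.asarray(obs, float), np.asarray(exp, float)
    ll_model = norm.logpdf(obs, loc=exp, scale=sigma).sum()
    ll_full = norm.logpdf(obs, loc=obs, scale=sigma).sum()   # theta_i = x_i
    return -2.0 * (ll_model - ll_full)
```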

  19. Deviance for normally-distributed data Log-likelihood of the full model is a function of both sample size (n) and variance (σ2) Therefore – deviance is NOT an absolute measure of goodness of fit... But, it does establish a standard of comparison (the full model), given your sample size and your estimate of the underlying variance...

  20. Forms of Bias • Proportional bias (slope ≠ 1) • Systematic bias (intercept ≠ 0)

  21. “Learn from your mistakes” (Examine your residuals...) • Residual = observed – predicted • Basic questions to ask of your residuals: • Do they fit the PDF? • Are they correlated with factors that aren’t in the model (but maybe should be)? • Do some subsets of your data fit better than others?
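
A minimal diagnostic sketch along these lines (matplotlib and SciPy assumed; covariate is a hypothetical factor not currently in the model):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

def residual_checks(obs, exp, covariate):
    """Three quick checks: do the residuals fit a normal PDF, do they show
    patterns against the predictions, and do they track a left-out factor?"""
    resid = np.asarray(obs, float) - np.asarray(exp, float)
    fig, axes = plt.subplots(1, 3, figsize=(12, 4))
    stats.probplot(resid, dist="norm", plot=axes[0])         # Q-Q plot vs. normal
    axes[1].scatter(exp, resid); axes[1].axhline(0.0)        # pattern vs. predicted?
    axes[2].scatter(covariate, resid); axes[2].axhline(0.0)  # missing covariate?
    plt.show()
```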

  22. Using Residuals to Calculate Prediction Error • RMSE (root mean squared error), i.e. the standard deviation of the residuals:

$$RMSE = \sqrt{\frac{\sum_i (obs_i - exp_i)^2}{n}}$$
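
A one-line sketch of this quantity (NumPy assumed):

```python
import numpy as np

def rmse(obs, exp):
    """Root mean squared error: the standard deviation of the residuals
    (exactly so when the mean residual is zero, i.e. no systematic bias)."""
    resid = np.asarray(obs, float) - np.asarray(exp, float)
    return np.sqrt(np.mean(resid ** 2))
```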

  23. Predicting lake chemistry from spatially-explicit watershed data • At steady state:

$$C = \frac{I}{V(\rho + k)}$$

where concentration (C), lake volume (V) and flushing rate (ρ) are observed, and input (I) and in-lake decay (k) are estimated
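
A sketch of that steady-state solution (function and variable names are illustrative, not Maranger et al.'s code, and the first-order mass-balance form is the reconstruction above):

```python
def steady_state_concentration(inputs, volume, flushing, decay):
    """C = I / (V * (rho + k)) from the steady-state mass balance above.
    inputs (I): mass/yr; volume (V): m^3; flushing (rho) and decay (k): 1/yr.
    Illustrative only -- estimation of I and k is not shown."""
    return inputs / (volume * (flushing + decay))
```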

  24. Predicting iron concentrations in Adirondack lakes Results from a spatially-explicit, mass-balance model of the effects of watershed composition on lake chemistry Source: Maranger et al. (2006)

  25. Should we incorporate lake depth? • Shallow lakes are more unpredictable than deeper lakes • The model consistently underestimates Fe concentrations in deeper lakes

  26. Adding lake depth improves the model... R2 went from 56% to 65%. It is just as important that it made ecological sense to add depth...

  27. But shallow lakes are still a problem...

  28. Summary – Model Evaluation • There are no silver bullets... • The issues are even muddier for categorical data... • An increase in goodness of fit does not necessarily result in an increase in knowledge… • Increasing goodness of fit reduces uncertainty in the predictions of the models, but this costs money (more and better data). How much are you willing to spend? • The “signal to noise” issue: if you can see the signal through the noise, how far are you willing to go to reduce the noise?
