
Psychology 290 Special Topics Study Course: Advanced Meta-analysis



Presentation Transcript


  1. Psychology 290 Special Topics Study Course: Advanced Meta-analysis February 5, 2014

  2. Overview • Likelihood equations. • Maximum likelihood and fixed-effects meta-analysis. • Likelihood-ratio tests. • Q-between and maximum likelihood.

  3. Review of properties of maximum likelihood estimates • Maximum likelihood estimates are asymptotically efficient (“minimum variance”) estimators. • That means that, in large samples, no estimator of the parameter has a more compact sampling distribution than that of the MLE. • That is a good thing.

  4. Review of properties of maximum likelihood estimates • Maximum likelihood estimates are consistent. • Consistent estimators are ones that approach the true value of the parameter as the sample size becomes large. • That’s a good thing.
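The slides demonstrate these properties in R; as a sketch of consistency, the small Python simulation below (with a made-up true effect of 0.25 and made-up variance range) shows the inverse-variance-weighted estimate, derived later in the deck, settling near the true value as the number of studies grows.

```python
import random

random.seed(1)
theta_true = 0.25  # assumed (hypothetical) true population effect


def fe_estimate(k):
    """Simulate k studies and return the inverse-variance-weighted mean."""
    v = [random.uniform(0.01, 0.10) for _ in range(k)]     # conditional variances
    T = [random.gauss(theta_true, vi ** 0.5) for vi in v]  # observed effect sizes
    w = [1.0 / vi for vi in v]
    return sum(wi * Ti for wi, Ti in zip(w, T)) / sum(w)


# The estimate tends to land closer to theta_true as k grows.
for k in (10, 100, 10000):
    print(k, fe_estimate(k))
```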

  5. Review of properties of maximum likelihood estimates • Maximum likelihood estimates are often (but not always) biased. • A biased estimator is one that, on average, misses the true value of the parameter. • Although they are consistent, MLEs are often biased in smaller samples. • That’s not a good thing.

  6. Biased estimators • Even though MLEs are sometimes biased, they are still often the best estimators. • A biased estimator with a compact sampling distribution can still land closer to the truth, on average, than an unbiased estimator with a large variance. • In the plot shown on the slide, X marks the true value of the parameter being estimated.

  7. Likelihood equations • A basic concept in calculus is the idea of a line that is tangent to a curve. • The slope of such a tangent line is called the derivative of the function that defines the curve.

  8. Likelihood equations (cont.) • When the curve reaches its maximum, the derivative has the value zero. • Note that this would also happen if the function reached a minimum. • However, many likelihood functions (including the ones we will deal with) have only a maximum.

  9. Likelihood equations (cont.) • If we set the derivative of a likelihood function to zero and solve for the parameter, we can often find a closed-form expression for the MLE. • Derivative = 0 is called the “likelihood equation.” • Let’s consider the example of fixed-effects meta-analysis.

  10. Maximum likelihood and fixed-effect meta-analysis • In meta-analysis, we have a vector T of effect sizes, and a vector v of conditional variances. • In fixed-effect meta-analysis, each Ti is assumed to be normally distributed about a common mean θ with variance vi.

  11. The likelihood • This leads to the following likelihood: L(θ) = ∏i (2π vi)^(−1/2) exp[ −(Ti − θ)² / (2 vi) ].

  12. The log-likelihood • If we take the log, we get: ln L(θ) = −(1/2) Σi [ ln(2π vi) + (Ti − θ)² / vi ].

  13. The likelihood equation • The derivative of the log-likelihood is d ln L(θ) / dθ = Σi (Ti − θ) / vi , • which leads to the following likelihood equation: Σi (Ti − θ) / vi = 0.

  14. Solving the likelihood equation • Apply algebra to solve for θ: Σi Ti / vi = θ Σi (1 / vi), so θ̂ = ( Σi Ti / vi ) / ( Σi 1 / vi ).

  15. The MLE of θ • We have just shown that the conventional inverse-variance-weighted mean is the maximum likelihood estimate of the population effect. • (Demonstration in R.)
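The R demonstration is not included in the transcript; a minimal Python sketch of the same check, using made-up effect sizes and variances, compares the closed-form weighted mean with a grid search over the log-likelihood.

```python
import math

# Hypothetical effect sizes T and conditional variances v for five studies.
T = [0.35, 0.10, 0.47, 0.28, 0.19]
v = [0.040, 0.025, 0.060, 0.030, 0.050]

# Closed-form MLE: the inverse-variance-weighted mean.
w = [1.0 / vi for vi in v]
theta_hat = sum(wi * Ti for wi, Ti in zip(w, T)) / sum(w)


def loglik(theta):
    """Fixed-effect log-likelihood as a function of theta."""
    return -0.5 * sum(math.log(2 * math.pi * vi) + (Ti - theta) ** 2 / vi
                      for Ti, vi in zip(T, v))


# A crude grid search over theta should peak at (approximately) theta_hat.
grid = [i / 10000 for i in range(-10000, 10000)]
theta_grid = max(grid, key=loglik)

print(theta_hat)
print(theta_grid)  # agrees with theta_hat to the grid resolution
```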

  16. Likelihood-ratio tests • One very handy property of maximum likelihood is that there is an easy way to test whether anything is lost when a model is simplified. • A model is said to be nested within another model if one can produce the nested model by fixing parameters of the more complex model.

  17. Likelihood-ratio tests (cont.) • Most often, a nested model is reached by fixing some parameters to zero. • Under such circumstances, if the population value is zero, then twice the difference between the log-likelihoods asymptotically has a chi-square distribution. • The degrees of freedom for the likelihood-ratio chi-square equal the number of parameters that were constrained.

  18. An example of likelihood-ratio testing • A study of gender differences in conformity included effect-size estimates from studies with all male authors, and effect-size estimates from studies that had some female authors. • We are interested in testing whether the population effect differs for those two groups of studies. • (Digression in R)
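The R digression is not reproduced in the transcript; the Python sketch below, with hypothetical effect sizes standing in for the conformity data, fits the one-mean (reduced) and two-mean (full) fixed-effect models and forms the likelihood-ratio chi-square on 1 df.

```python
import math

# Hypothetical effect sizes (T) and variances (v), split by authorship
# group (0 = all-male authors, 1 = some female authors).
T = [0.30, 0.44, 0.25, 0.05, -0.02, 0.10]
v = [0.02, 0.03, 0.04, 0.02, 0.05, 0.03]
g = [0, 0, 0, 1, 1, 1]


def loglik(T, v, theta):
    """Fixed-effect log-likelihood at a given theta."""
    return -0.5 * sum(math.log(2 * math.pi * vi) + (Ti - theta) ** 2 / vi
                      for Ti, vi in zip(T, v))


def wmean(T, v):
    """Inverse-variance-weighted mean (the MLE of theta)."""
    w = [1.0 / vi for vi in v]
    return sum(wi * Ti for wi, Ti in zip(w, T)) / sum(w)


# Reduced model: one common mean for all studies.
ll_reduced = loglik(T, v, wmean(T, v))

# Full model: a separate mean in each group, maximized group by group.
ll_full = 0.0
for grp in (0, 1):
    Tg = [Ti for Ti, gi in zip(T, g) if gi == grp]
    vg = [vi for vi, gi in zip(v, g) if gi == grp]
    ll_full += loglik(Tg, vg, wmean(Tg, vg))

# Twice the log-likelihood difference; chi-square on 1 df under H0.
lr_chisq = 2 * (ll_full - ll_reduced)
print(lr_chisq)
```

Compare lr_chisq with the 1-df chi-square critical value (3.84 at the .05 level) to decide whether the two groups of studies share a common population effect.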

  19. Q-between and maximum likelihood • A common way to perform meta-analysis is to use weighted regression. • Group membership may be indicated by using a dummy variable (0 indicates the first group, 1 the second). • The sum of squares that forms the numerator of the F statistic in the regression output is Q-between. • (Example in R.)

  20. Q-between and maximum likelihood (cont.) • We have just demonstrated empirically that the Q-between statistic is a likelihood-ratio chi-square (in the context of fixed-effect meta-analysis).
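The empirical demonstration in R is not in the transcript; the identity can be checked directly in Python with hypothetical data. Because the log terms in the two log-likelihoods cancel, twice the log-likelihood difference reduces to Q-total minus Q-within, which equals Q-between.

```python
# Hypothetical data: effect sizes T, variances v, dummy group indicator g.
T = [0.30, 0.44, 0.25, 0.05, -0.02, 0.10]
v = [0.02, 0.03, 0.04, 0.02, 0.05, 0.03]
g = [0, 0, 0, 1, 1, 1]
w = [1.0 / vi for vi in v]

all_idx = range(len(T))


def wmean(idx):
    """Inverse-variance-weighted mean over the studies in idx."""
    return sum(w[i] * T[i] for i in idx) / sum(w[i] for i in idx)


theta_hat = wmean(all_idx)

# Q-between: weighted squared deviations of group means from the grand mean.
q_between = 0.0
for grp in (0, 1):
    idx = [i for i in all_idx if g[i] == grp]
    q_between += sum(w[i] for i in idx) * (wmean(idx) - theta_hat) ** 2

# Likelihood-ratio chi-square: Q-total minus Q-within.
q_total = sum(w[i] * (T[i] - theta_hat) ** 2 for i in all_idx)
q_within = 0.0
for grp in (0, 1):
    idx = [i for i in all_idx if g[i] == grp]
    q_within += sum(w[i] * (T[i] - wmean(idx)) ** 2 for i in idx)

print(q_between, q_total - q_within)  # the two quantities agree
```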

  21. Next time • Random-effects meta-analysis using maximum likelihood.
