
Concentrated Likelihood Functions, and Properties of Maximum Likelihood


Presentation Transcript


  1. Concentrated Likelihood Functions, and Properties of Maximum Likelihood Lecture XX

  2. Concentrated Likelihood Functions • The more general form of the normal likelihood function can be written as:
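The equation on this slide is not reproduced in the transcript; for an i.i.d. sample X1,…,Xn from a normal distribution with mean μ and variance σ², the standard form being referenced is:

```latex
L(\mu,\sigma^2) = \left(2\pi\sigma^2\right)^{-n/2}
  \exp\!\left[-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(X_i-\mu)^2\right],
\qquad
\ln L = -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln\sigma^2
  - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(X_i-\mu)^2 .
```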

  3. This expression can be solved for the optimal choice of σ² by differentiating with respect to σ²:
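The first-order condition on this slide is missing from the transcript; differentiating the log-likelihood above with respect to σ² and setting the result to zero gives the standard solution:

```latex
\frac{\partial \ln L}{\partial \sigma^2}
  = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{i=1}^{n}(X_i-\mu)^2 = 0
\quad\Longrightarrow\quad
\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(X_i-\mu)^2 .
```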

  4. Substituting this result into the original logarithmic likelihood yields
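The resulting expression is not shown in the transcript; substituting σ̂² back into the log-likelihood yields the concentrated (profile) log-likelihood in μ alone:

```latex
\ln L_c(\mu) = -\frac{n}{2}\ln(2\pi)
  - \frac{n}{2}\ln\!\left[\frac{1}{n}\sum_{i=1}^{n}(X_i-\mu)^2\right]
  - \frac{n}{2} .
```

Since μ enters only through the sum of squared deviations, maximizing ln Lc(μ) is equivalent to minimizing Σ(Xi − μ)².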

  5. Intuitively, the maximum likelihood estimate of μ is the value that minimizes the mean squared error of the estimator. Thus, the least squares estimate of the mean of a normal distribution is the same as the maximum likelihood estimate under the assumption that the sample is independently and identically distributed.
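As a quick numeric sketch of this equivalence (the function names here are illustrative, not from the lecture): a grid search over the concentrated normal log-likelihood picks out the sample mean, i.e. the least-squares estimate.

```python
import math
import random

def concentrated_loglik(mu, xs):
    """Concentrated normal log-likelihood ln Lc(mu), with sigma^2 profiled out."""
    n = len(xs)
    s2 = sum((x - mu) ** 2 for x in xs) / n  # sigma^2-hat at this mu
    return -0.5 * n * (math.log(2 * math.pi) + math.log(s2) + 1)

random.seed(0)
xs = [random.gauss(5.0, 2.0) for _ in range(500)]
xbar = sum(xs) / len(xs)

# Grid of candidate means centered on the sample mean, spacing 0.01.
grid = [xbar + d / 100 for d in range(-200, 201)]
mu_hat = max(grid, key=lambda m: concentrated_loglik(m, xs))

# The maximizer of the concentrated likelihood coincides with the
# least-squares estimate of the mean, which is the sample mean.
print(mu_hat, xbar)
```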

  6. The Normal Equations • If we extend the above discussion to multiple regression, we can derive the normal equations.

  7. Taking the derivative with respect to α0 yields
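The derivative itself is missing from the transcript; the standard first-order condition for the intercept is:

```latex
\frac{\partial S}{\partial \alpha_0}
  = -2\sum_{i=1}^{n}\left(Y_i - \alpha_0 - \alpha_1 X_i\right) = 0
\quad\Longrightarrow\quad
\alpha_0 = \bar{Y} - \alpha_1 \bar{X} .
```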

  8. Taking the derivative with respect to α1 yields
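Again the equation is not reproduced; the standard first-order condition for the slope is:

```latex
\frac{\partial S}{\partial \alpha_1}
  = -2\sum_{i=1}^{n} X_i\left(Y_i - \alpha_0 - \alpha_1 X_i\right) = 0 .
```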

  9. Substituting for α0 yields
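The result of the substitution is missing from the transcript; substituting α0 = Ȳ − α1 X̄ into the slope condition and solving gives the familiar least-squares (and maximum likelihood) estimators:

```latex
\hat{\alpha}_1
  = \frac{\sum_{i=1}^{n}\left(X_i-\bar{X}\right)\left(Y_i-\bar{Y}\right)}
         {\sum_{i=1}^{n}\left(X_i-\bar{X}\right)^2},
\qquad
\hat{\alpha}_0 = \bar{Y} - \hat{\alpha}_1 \bar{X} .
```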

  10. Properties of Maximum Likelihood Estimators • Theorem 7.4.1: Let L(X1,X2,…,Xn|θ) be the likelihood function and let θ̂(X1,X2,…,Xn) be an unbiased estimator of θ. Then, under general conditions, the variance of θ̂ is bounded below by a quantity known as the Cramér-Rao lower bound (CRLB).
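The inequality on this slide is not reproduced in the transcript; the standard statement of the Cramér-Rao bound for an unbiased estimator is:

```latex
V\!\left(\hat{\theta}\right) \;\ge\;
\frac{1}{\operatorname{E}\!\left[\left(\dfrac{\partial \ln L}{\partial \theta}\right)^{2}\right]}
\;=\;
\frac{-1}{\operatorname{E}\!\left[\dfrac{\partial^{2} \ln L}{\partial \theta^{2}}\right]} .
```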

  11. The consistency of maximum likelihood can be shown by applying Khinchine’s Law of Large Numbers to the sample average of the log-likelihood contributions, which converges as long as the expectation of the log-density exists.
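The two expressions referenced on this slide are missing from the transcript; the standard argument applies Khinchine’s Law of Large Numbers to the averaged log-likelihood:

```latex
\frac{1}{n}\ln L = \frac{1}{n}\sum_{i=1}^{n}\ln f\!\left(X_i \,\middle|\, \theta\right)
\;\xrightarrow{\;p\;}\;
\operatorname{E}\!\left[\ln f\!\left(X \,\middle|\, \theta\right)\right],
\qquad
\text{provided }\;
\operatorname{E}\!\left|\ln f\!\left(X \,\middle|\, \theta\right)\right| < \infty .
```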

  12. Asymptotic Normality • Theorem 7.4.3: Let the likelihood function be L(X1,X2,…,Xn|θ). Then, under general conditions, the maximum likelihood estimator of θ is asymptotically normally distributed.
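The limiting distribution on this slide is not shown in the transcript; the standard statement, with I(θ0) the Fisher information per observation, is:

```latex
\sqrt{n}\left(\hat{\theta} - \theta_0\right)
\;\xrightarrow{\;d\;}\;
N\!\left(0,\; I(\theta_0)^{-1}\right),
\qquad
I(\theta_0) = -\operatorname{E}\!\left[\frac{\partial^{2}\ln f\!\left(X \,\middle|\, \theta\right)}{\partial \theta^{2}}\right]_{\theta=\theta_0}.
```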
