Lecture 4

  1. Lecture 4: The Fisherian Way, or Likelihoodists

  2. Pawitan 2001, page 15
  • Inference is possible directly from the likelihood function.
  • For frequentists this is OK for large samples.
  • For Bayesians it is OK given a prior.
  • If probability statements can be constructed, then use them; otherwise, use likelihood-based inference.
  • Important concepts (illustrated in the sketch below): maximum likelihood, Fisher information, likelihood-based confidence interval, likelihood ratio.
  • Likelihood principle: two data sets that produce the same (proportional) likelihood contain the same evidence (Birnbaum 1962).
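The following is a minimal sketch (not part of the original slides) of what "maximum likelihood", "Fisher information" and "likelihood ratio" look like numerically for a normal mean with known σ; the data values and σ are made up for illustration.

```python
# Sketch: MLE, Fisher information and a likelihood ratio for a normal mean
# with known sigma. The sample and sigma below are hypothetical.
import numpy as np

x = np.array([4.2, 5.1, 3.8, 4.9, 5.4, 4.4])   # hypothetical sample
sigma = 1.0                                     # assumed known

def log_lik(theta):
    # log L(theta), up to an additive constant
    return -0.5 * np.sum((x - theta) ** 2) / sigma ** 2

theta_hat = x.mean()                  # MLE of the normal mean
fisher_info = len(x) / sigma ** 2     # Fisher information I(theta) = n / sigma^2

theta0 = 4.0                          # an arbitrary value to compare against
lik_ratio = np.exp(log_lik(theta0) - log_lik(theta_hat))
print(theta_hat, fisher_info, lik_ratio)
```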

  3. Likelihood-based intervals
  • Fisher (1973) proposed the use of the observed likelihood function directly to communicate the uncertainty of a parameter.
  • When exact probability-based inference is not available.
  • Also when the sample size is too small to allow large-sample results.
  Definition (likelihood interval): the set of parameter values {θ : L(θ)/L(θ̂) > c}, where c is a "cut-off point" and L(θ)/L(θ̂) is the normalized likelihood.
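As a rough numerical illustration (using the same hypothetical normal-mean data as in the sketch above), the likelihood interval can be found by scanning a grid of parameter values and keeping those whose normalized likelihood exceeds the cut-off c.

```python
# Sketch: likelihood interval {theta : L(theta)/L(theta_hat) > c}
# for a normal mean with known sigma, found on a grid of theta values.
import numpy as np

x = np.array([4.2, 5.1, 3.8, 4.9, 5.4, 4.4])   # hypothetical sample
sigma = 1.0
theta_hat = x.mean()

def norm_log_lik(theta):
    # log of the normalized likelihood: log L(theta) - log L(theta_hat)
    return (-0.5 * np.sum((x - theta) ** 2) / sigma ** 2
            + 0.5 * np.sum((x - theta_hat) ** 2) / sigma ** 2)

c = 0.15                                        # cut-off point
grid = np.linspace(theta_hat - 3, theta_hat + 3, 2001)
mask = np.array([norm_log_lik(t) > np.log(c) for t in grid])
print(grid[mask].min(), grid[mask].max())       # endpoints of the interval
```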

  4. Likelihood-based intervals
  Does a certain cut-off point correspond to a specific P-value? Yes, but only if the likelihood is regular. Example on page 36 in Pawitan 2001.
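A short check of this correspondence (assuming SciPy is available): for a regular likelihood, a cut-off c corresponds approximately to confidence level P(χ²₁ ≤ −2 log c), so c ≈ 0.15 gives roughly 95% and c ≈ 0.036 roughly 99%.

```python
# Sketch: map a likelihood cut-off c to its approximate confidence level
# via the chi-square(1) distribution (valid for regular likelihoods).
import numpy as np
from scipy.stats import chi2

for c in (0.5, 0.15, 0.036):
    level = chi2.cdf(-2 * np.log(c), df=1)
    print(f"cut-off {c:>5}: approx {100 * level:.1f}% confidence")
```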

  5. Let x₁, …, xₙ be an iid sample from N(θ, σ²). Assume σ² known.
  MLE: θ̂ = x̄.
  Normalized log-likelihood: log L(θ)/L(θ̂) = −(n / (2σ²)) (x̄ − θ)²   (eq 1)
  => L(θ)/L(θ̂) > c  <=>  |x̄ − θ| < √(−2 log c) · σ/√n   (eq 2)
  So, if for some significance level α we choose a cut-off c = exp(−½ z²_{α/2}), then the likelihood interval is the 100(1 − α)% confidence interval for θ: x̄ ± z_{α/2} · σ/√n.
  For instance, c ≈ 0.15 gives an approximate 95% confidence interval.
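To verify eq 2 numerically (a sketch with made-up data; x̄, σ and n are hypothetical), the likelihood interval with c = exp(−½ · 1.96²) ≈ 0.147 reproduces the familiar 95% interval x̄ ± 1.96 · σ/√n:

```python
# Sketch: for the normal-mean example, the likelihood interval with
# c = exp(-0.5 * z^2) coincides with x_bar +/- z * sigma / sqrt(n).
import numpy as np

x = np.array([4.2, 5.1, 3.8, 4.9, 5.4, 4.4])    # hypothetical sample
sigma, n = 1.0, len(x)
x_bar = x.mean()

z = 1.96                                        # 95% level
c = np.exp(-0.5 * z ** 2)                       # cut-off, about 0.147

# eq 2: |x_bar - theta| < sqrt(-2 log c) * sigma / sqrt(n)
half_width = np.sqrt(-2 * np.log(c)) * sigma / np.sqrt(n)
print((x_bar - half_width, x_bar + half_width))                          # likelihood interval
print((x_bar - z * sigma / np.sqrt(n), x_bar + z * sigma / np.sqrt(n)))  # 95% CI, identical
```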

  6. Likelihood-based CI vs. Wald CI
  Definition: 95% Wald CI: θ̂ ± 1.96 · I(θ̂)^(−1/2), where I(θ̂) is the observed Fisher information.
  • For a non-regular likelihood we can try to find a transformation of the parameter so that the likelihood becomes more regular, and use the Wald CI on that scale.
  • For the likelihood-based CI we do not need to find a transformation!
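The contrast can be seen with a skewed (non-regular) likelihood. Below is a sketch using made-up binomial data (2 successes out of 20): the Wald CI is symmetric and can dip below 0, while the likelihood-based interval is asymmetric and respects the parameter range, with no transformation needed.

```python
# Sketch: 95% Wald CI vs likelihood-based CI for a binomial proportion
# with a skewed likelihood (x = 2 successes out of n = 20, made-up data).
import numpy as np

x, n = 2, 20
p_hat = x / n

def log_lik(p):
    return x * np.log(p) + (n - x) * np.log(1 - p)

# Wald CI: p_hat +/- 1.96 * se, with se from the observed Fisher information
se = np.sqrt(p_hat * (1 - p_hat) / n)
wald = (p_hat - 1.96 * se, p_hat + 1.96 * se)

# Likelihood-based interval with cut-off c = exp(-0.5 * 1.96^2), about 0.147
c = np.exp(-0.5 * 1.96 ** 2)
grid = np.linspace(1e-4, 1 - 1e-4, 100_000)
mask = log_lik(grid) - log_lik(p_hat) > np.log(c)
lik_ci = (grid[mask].min(), grid[mask].max())

print("Wald:      ", wald)     # symmetric, lower end below 0 here
print("Likelihood:", lik_ci)   # asymmetric, stays inside (0, 1)
```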
