Last lecture


Presentation Transcript


  1. Last lecture • Beta-binomial model • All types of posterior inference one can make: mean, mode, variance, two types of credible intervals, etc. • Prior predictive, posterior predictive • Relationship between prior mean and posterior mean, and between prior variance and posterior variance (summarized below) • Conjugate priors
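For reference, the recap result linking prior and posterior moments in the beta-binomial model (a standard identity, restated here because the slide's formulas did not survive the transcript):

\[
\theta \sim \mathrm{Beta}(\alpha,\beta),\quad y\mid\theta \sim \mathrm{Bin}(n,\theta)
\;\Longrightarrow\;
\theta\mid y \sim \mathrm{Beta}(\alpha+y,\;\beta+n-y),
\]
\[
E[\theta\mid y] \;=\; \frac{\alpha+y}{\alpha+\beta+n}
\;=\; \frac{\alpha+\beta}{\alpha+\beta+n}\cdot\frac{\alpha}{\alpha+\beta}
\;+\; \frac{n}{\alpha+\beta+n}\cdot\frac{y}{n},
\]

i.e., the posterior mean is a weighted average of the prior mean and the sample proportion, with weights proportional to the prior "sample size" α+β and the data sample size n.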

  2. Sequential analysis with a conjugate prior: start from beta(α, β); after observing data (y1 successes in n1 trials) the posterior is beta(α+y1, β+n1−y1); after further data (y2, n2) it becomes beta(α+y1+y2, β+n1+n2−y1−y2); and so on: each posterior serves as the prior for the next batch of data.
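A minimal sketch of this sequential updating in Python (the beta(1, 1) prior and the batch counts are made-up numbers for illustration; scipy is used only to summarize the resulting beta posteriors):

    from scipy import stats

    alpha, beta_ = 1.0, 1.0          # prior beta(alpha, beta); beta(1, 1) assumed for illustration
    batches = [(7, 10), (12, 20)]    # (y_i successes, n_i trials), hypothetical data

    for y, n in batches:
        alpha += y                   # conjugate update: beta(alpha + y, beta + n - y)
        beta_ += n - y
        post = stats.beta(alpha, beta_)
        print(f"posterior beta({alpha:.0f}, {beta_:.0f}), mean = {post.mean():.3f}")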

  3. Example: multinomial distribution • 1988 CBS pre-election poll: 727 respondents support the Republican candidate, 583 the Democratic candidate, 137 others. Parameters of interest: the winning chance (support probability) for each party. • Likelihood = ? • What is a conjugate prior for this likelihood?
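The conjugate prior for a multinomial likelihood is the Dirichlet distribution. A minimal sketch under an assumed uniform Dirichlet(1, 1, 1) prior (the counts are from the slide; the prior choice and the Monte Carlo summary are illustrative, not taken from the slides):

    import numpy as np

    counts = np.array([727, 583, 137])      # Republican, Democrat, other
    prior = np.array([1.0, 1.0, 1.0])       # assumed Dirichlet(1, 1, 1) prior
    rng = np.random.default_rng(0)

    # Posterior is Dirichlet(prior + counts); draw samples of (p_R, p_D, p_other)
    draws = rng.dirichlet(prior + counts, size=100_000)
    print("posterior mean shares:", (prior + counts) / (prior + counts).sum())
    print("P(Republican share > Democratic share | data):",
          (draws[:, 0] > draws[:, 1]).mean())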

  4. How to express prior ignorance? • The goal is a prior that plays a minimal role in posterior inference • Uniform distribution → the posterior is proportional to the likelihood alone (its mode is the MLE) • Jeffreys' prior

  5. Jeffreys' rule for noninformative priors • Noninformative priors should be invariant to parameter transformations • For example, the following are all different priors, so "uniform" by itself is not a parameterization-invariant expression of ignorance: • uniform distribution for a proportion θ • uniform distribution for the odds θ/(1−θ) • uniform distribution for the log-odds log(θ/(1−θ))

  6. Jeffreys' choice of prior • take the prior proportional to the square root of the Fisher information
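Written out (the standard definition; the slide's formula did not survive the transcript):

\[
p(\theta) \;\propto\; \sqrt{J(\theta)},\qquad
J(\theta) \;=\; -\,E\!\left[\frac{\partial^{2}\log p(y\mid\theta)}{\partial\theta^{2}}\;\middle|\;\theta\right].
\]

Under a one-to-one reparameterization φ = h(θ), the Fisher information transforms as J(φ) = J(θ)(dθ/dφ)², so √J(φ) = √J(θ)|dθ/dφ|, which is exactly how a density transforms under a change of variables; this is the invariance claimed on the previous slide.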

  7. Jeffreys' prior for the binomial model
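The derivation (a standard calculation, added because the slide's formulas are not in the transcript): for y | θ ~ Bin(n, θ),

\[
\log p(y\mid\theta) = y\log\theta + (n-y)\log(1-\theta) + \text{const},
\qquad
J(\theta) = E\!\left[\frac{y}{\theta^{2}} + \frac{n-y}{(1-\theta)^{2}}\right]
= \frac{n}{\theta(1-\theta)},
\]
\[
p(\theta) \;\propto\; \sqrt{J(\theta)} \;\propto\; \theta^{-1/2}(1-\theta)^{-1/2},
\qquad\text{i.e. } \theta \sim \mathrm{Beta}(1/2,\,1/2).
\]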

  8. Proper vs. improper priors • Proper = integrable. For example, Jeffreys' prior is proper: Beta(1/2, 1/2) ∝ θ^{-1/2}(1−θ)^{-1/2}. By contrast, θ^{-1}(1−θ)^{-1} is improper. • Problem with an improper prior: the posterior can itself be improper for certain data, and inference is then problematic.
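For instance, combining the improper prior θ^{-1}(1−θ)^{-1} with a binomial likelihood gives

\[
p(\theta\mid y) \;\propto\; \theta^{\,y-1}(1-\theta)^{\,n-y-1},
\]

a Beta(y, n−y) kernel, which fails to integrate whenever y = 0 or y = n, so for those data sets the posterior is improper.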

  9. Reference prior • A type of prior chosen so that it influences the posterior as little as possible • A fully automated way of constructing such priors does not seem to exist.

  10. Normal model

  11. Variance known • y1, …, yn ~ N(θ, σ²) with σ² known, y = (y1, …, yn) • Parameter of interest: the mean θ • Likelihood = ? • What is a conjugate prior for θ?
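Written out (a standard step, added because the slide's formula is not in the transcript), the likelihood as a function of θ is

\[
p(y\mid\theta) \;=\; \prod_{i=1}^{n}\frac{1}{\sqrt{2\pi}\,\sigma}
\exp\!\left(-\frac{(y_i-\theta)^{2}}{2\sigma^{2}}\right)
\;\propto\;
\exp\!\left(-\frac{n(\bar y-\theta)^{2}}{2\sigma^{2}}\right),
\]

an exponential of a quadratic in θ, so the conjugate prior family for θ is the normal family.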

  12. Informative Prior for θ
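The standard conjugate analysis (restated here because the slide's formulas are not in the transcript): with prior θ ~ N(μ0, τ0²) and known σ²,

\[
\theta\mid y \;\sim\; N(\mu_n,\ \tau_n^{2}),\qquad
\mu_n \;=\; \frac{\mu_0/\tau_0^{2} + n\bar y/\sigma^{2}}{1/\tau_0^{2} + n/\sigma^{2}},\qquad
\frac{1}{\tau_n^{2}} \;=\; \frac{1}{\tau_0^{2}} + \frac{n}{\sigma^{2}},
\]

so the posterior precision is the sum of the prior and data precisions, and the posterior mean is a precision-weighted average of the prior mean and the sample mean.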

  13. Noninformative Prior for θ
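The usual noninformative choice here (a standard result, added for completeness) is the flat prior:

\[
p(\theta) \;\propto\; 1
\quad\Longrightarrow\quad
\theta\mid y \;\sim\; N\!\left(\bar y,\ \sigma^{2}/n\right),
\]

the limit of the conjugate result as τ0² → ∞; the prior is improper, but the posterior is proper for any data.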

  14. Variance unknown, mean known • y1, …, yn ~ N(θ, σ²) with θ known, y = (y1, …, yn) • Parameter of interest: σ² • Likelihood = ? • What is a conjugate prior for σ²?
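Written out (a standard step, added because the slide's formula is not in the transcript), with θ known the likelihood as a function of σ² is

\[
p(y\mid\sigma^{2}) \;\propto\; (\sigma^{2})^{-n/2}
\exp\!\left(-\frac{n v}{2\sigma^{2}}\right),
\qquad
v \;=\; \frac{1}{n}\sum_{i=1}^{n}(y_i-\theta)^{2}.
\]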

  15. Informative Prior for σ²
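The conjugate family here is the scaled inverse-χ² (equivalently, an inverse-gamma); the standard update, restated because the slide's formulas are not in the transcript: with prior σ² ~ Inv-χ²(ν0, σ0²),

\[
\sigma^{2}\mid y \;\sim\; \text{Inv-}\chi^{2}\!\left(\nu_0 + n,\ \frac{\nu_0\sigma_0^{2} + n v}{\nu_0 + n}\right),
\qquad v = \frac{1}{n}\sum_{i=1}^{n}(y_i-\theta)^{2},
\]

so the prior acts like ν0 extra observations with average squared deviation σ0².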

  16. Noninformative prior for σ²
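The usual noninformative choice (a standard result, added for completeness) is Jeffreys' prior for a scale parameter, i.e. uniform on log σ²:

\[
p(\sigma^{2}) \;\propto\; \frac{1}{\sigma^{2}}
\quad\Longrightarrow\quad
\sigma^{2}\mid y \;\sim\; \text{Inv-}\chi^{2}(n,\ v),
\]

which is proper as long as n ≥ 1 and v > 0.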
