
Topic 10: Miscellaneous Topics




  1. Topic 10: Miscellaneous Topics

  2. Outline • Joint estimation of β0 and β1 • Multiplicity • Regression through the origin • Measurement error • Inverse predictions

  3. Joint Estimation of β0 and β1 • Confidence intervals are used for a single parameter • Confidence regions are used for two or more parameters • The region for (β0, β1) defines a set of lines that form a band about the estimated regression line (Topic 5)

  4. Joint Estimation of β0 and β1 • Since β0 and β1 are (jointly) Normal, the natural (i.e., smallest) confidence region is an ellipse (STAT 524) • The text considers rectangles (KNNL 4.1), i.e., a region formed from the union of two separate intervals • Need to adjust the confidence level of each CI so the region has the proper α level

  5. Bonferroni Correction • We want the probability that both intervals are correct to be ≥ 0.95 • Basic idea is an error budget • Spend half on β0 and half on β1 • Since α = 0.05, we use α* = 0.025 for each CI (i.e., consider 97.5% CIs)

  6. Bonferroni Correction • For the joint region for (β0, β1), use b1 ± tc s(b1) and b0 ± tc s(b0), where tc = t(0.9875; n−2) • Note: 0.9875 = 1 − 0.05/(2·2)

  7. Expanding on the Note • We start with a 5% error budget • We have two intervals, so we give 0.05/2 = 2.5% to each • Each interval is two-sided, so we again divide by 2 • Thus 0.9875 = 1 − 0.05/(2·2)
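The error-budget arithmetic above is easy to check numerically; a minimal sketch using scipy, with n = 25 as an assumed sample size:

```python
from scipy import stats

n = 25         # assumed sample size (e.g., the Toluca data)
alpha = 0.05   # family error budget

# Split the budget over 2 parameters, then over 2 tails per interval
level = 1 - alpha / (2 * 2)            # 0.9875
tc = stats.t.ppf(level, n - 2)         # Bonferroni critical value

# Compare with the usual single-interval critical value t(0.975; n-2)
t_single = stats.t.ppf(1 - alpha / 2, n - 2)
print(f"tc = {tc:.3f} (joint)  vs  {t_single:.3f} (single CI)")
```

The joint critical value is necessarily larger than the single-interval one: the price of simultaneous coverage is a wider interval for each parameter.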

  8. Bonferroni Concept • Theory behind this correction • Let the two intervals be I1 and I2 • We will write c if the interval contains the true parameter value, and nc if the interval does not contain the true parameter

  9. Bonferroni Inequality • P(both c) = 1 − P(at least one nc) • P(at least one nc) = P(I1 nc) + P(I2 nc) − P(both nc) ≤ P(I1 nc) + P(I2 nc) • Thus, P(both c) ≥ 1 − (P(I1 nc) + P(I2 nc))

  10. [Figure: the green area on the left (.025 + .025) is greater than the green area on the right (< .025 + .025)]

  11. Bonferroni Inequality • P(both c) ≥ 1 − (P(I1 nc) + P(I2 nc)) • So if we use 0.05/2 for each interval, 1 − (P(I1 nc) + P(I2 nc)) = 1 − 0.05 = 0.95 • So P(both c) is at least 0.95 • We will use this same idea when we do multiple comparisons in ANOVA
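The "at least 0.95" guarantee can be illustrated by simulation; a sketch with assumed true parameters (β0 = 10, β1 = 2, σ = 3) and Bonferroni-corrected intervals for both coefficients:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, beta0, beta1, sigma = 25, 10.0, 2.0, 3.0
x = np.linspace(1, 10, n)
tc = stats.t.ppf(1 - 0.05 / (2 * 2), n - 2)  # t(0.9875; n-2)

reps, hits = 2000, 0
for _ in range(reps):
    y = beta0 + beta1 * x + rng.normal(0, sigma, n)
    xbar = x.mean()
    sxx = ((x - xbar) ** 2).sum()
    b1 = ((x - xbar) * (y - y.mean())).sum() / sxx
    b0 = y.mean() - b1 * xbar
    mse = ((y - b0 - b1 * x) ** 2).sum() / (n - 2)
    s_b1 = np.sqrt(mse / sxx)
    s_b0 = np.sqrt(mse * (1 / n + xbar ** 2 / sxx))
    # Do BOTH intervals cover their true parameters?
    hits += (abs(b0 - beta0) <= tc * s_b0) and (abs(b1 - beta1) <= tc * s_b1)

print(f"joint coverage ≈ {hits / reps:.3f}")  # should be at least ~0.95
```

Because the Bonferroni inequality drops the P(both nc) term, the observed joint coverage is typically a bit above the nominal 0.95 (the correction is conservative).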

  12. Joint Estimation of β0 and β1 • For the Toluca example, the rectangular region is 8.20 ≤ β0 ≤ 116.5 and 2.85 ≤ β1 ≤ 4.29 • Region shown on the next page: for positive X, all lines lying between 8.2 + 2.85X and 116.5 + 4.29X
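These bounds can be reproduced from the fitted coefficients; a sketch assuming the Toluca estimates as reported in KNNL (b0 ≈ 62.37, s(b0) ≈ 26.18, b1 ≈ 3.5702, s(b1) ≈ 0.3470, n = 25) and noting that the book's 8.20/116.5 and 2.85/4.29 bounds correspond to a 90% family confidence level:

```python
from scipy import stats

# Toluca estimates (assumed here, not recomputed from the raw data)
b0, s_b0 = 62.37, 26.18
b1, s_b1 = 3.5702, 0.3470
n = 25

alpha = 0.10                                  # 90% family level
tc = stats.t.ppf(1 - alpha / (2 * 2), n - 2)  # t(0.975; 23)

ci_b0 = (b0 - tc * s_b0, b0 + tc * s_b0)
ci_b1 = (b1 - tc * s_b1, b1 + tc * s_b1)
print(f"beta0: {ci_b0[0]:.2f} to {ci_b0[1]:.1f}")
print(f"beta1: {ci_b1[0]:.2f} to {ci_b1[1]:.2f}")
```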

  13. The rectangular region is definitely not as small, nor as symmetric about the mean of X, as the confidence band

  14. Mean Response CIs • For simultaneous estimation at all Xh, use Working-Hotelling (KNNL 2.6): Ŷh ± W s(Ŷh), where W² = 2F(1−α; 2, n−2) • For simultaneous estimation at a few Xh, use Bonferroni. Let g = number of Xh. Then Ŷh ± B s(Ŷh), where B = t(1−α/(2g); n−2) • Use Bonferroni when B < W → narrower CIs

  15. Simultaneous PIs • For simultaneous prediction at a few Xh, use • Bonferroni: Ŷh ± B s(pred), where B = t(1−α/(2g); n−2) • Scheffé: Ŷh ± S s(pred), where S² = gF(1−α; g, n−2) • Again, choose the one with narrower intervals
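The "choose the narrower" rule on the last two slides comes down to comparing the multipliers; a sketch with assumed values α = 0.10, n = 25, g = 3:

```python
from math import sqrt
from scipy import stats

alpha, n, g = 0.10, 25, 3   # assumed values for illustration

# Working-Hotelling (all Xh, mean response): W^2 = 2 F(1-alpha; 2, n-2)
W = sqrt(2 * stats.f.ppf(1 - alpha, 2, n - 2))

# Bonferroni (g levels of Xh): B = t(1 - alpha/(2g); n-2)
B = stats.t.ppf(1 - alpha / (2 * g), n - 2)

# Scheffe (g new predictions): S^2 = g F(1-alpha; g, n-2)
S = sqrt(g * stats.f.ppf(1 - alpha, g, n - 2))

print(f"W = {W:.3f}  B = {B:.3f}  S = {S:.3f}")
# For mean-response CIs compare B with W; for PIs compare B with S,
# and use whichever multiplier is smaller
```

Note that B grows with g while W does not, so Bonferroni wins only when g is small; for prediction, Scheffé's S also grows with g, so the comparison has to be done case by case.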

  16. Regression through the Origin • Yi = β1Xi + εi • NOINT option in PROC REG • Generally not a good idea • Might be forcing the model to behave a certain way in a region with no data • Problems with R² and other statistics • See cautions, KNNL p 164
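With no intercept, the least-squares slope has the closed form b1 = ΣXiYi / ΣXi² (this is what PROC REG's NOINT option fits); a minimal sketch with made-up data:

```python
# Least-squares slope for Yi = b1*Xi + ei (no intercept):
# b1 = sum(x*y) / sum(x^2)
def slope_through_origin(x, y):
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

# Made-up data lying exactly on y = 2x
x = [1.0, 2.0, 3.0]
y = [2.0, 4.0, 6.0]
print(slope_through_origin(x, y))  # → 2.0
```

The slide's caution still applies: forcing the line through (0, 0) is only defensible when theory demands it, since the residuals no longer sum to zero and the usual R² loses its interpretation.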

  17. Measurement Error • For Y, this is usually not a problem: it just adds to the error variance σ² • For X, we can get biased estimators of our regression parameters • See KNNL 4.5, pp 165-168 • Berkson model: special case where measurement error in X is not a problem

  18. Inverse Predictions • Sometimes called calibration • Given Yh, predict the corresponding value of Xh • Solve the fitted equation for Xh: X̂h = (Yh − b0)/b1, b1 ≠ 0 • An approximate CI can be obtained, see KNNL, p 169
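The point estimate is just the fitted line solved for X; a sketch using the (assumed) Toluca fit Ŷ = 62.37 + 3.570X and a hypothetical target of Yh = 300 work hours:

```python
def inverse_predict(y_h, b0, b1):
    """Calibration point estimate: X̂h = (Yh - b0) / b1; requires b1 != 0."""
    if b1 == 0:
        raise ValueError("b1 must be nonzero")
    return (y_h - b0) / b1

# Assumed Toluca estimates; which lot size yields about 300 work hours?
x_hat = inverse_predict(300.0, 62.37, 3.570)
print(f"predicted lot size: {x_hat:.1f}")
```

This is only the point estimate; the approximate interval on KNNL p 169 additionally accounts for the sampling variability of b0 and b1.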

  19. Background Reading • Next class we will do simple regression with vectors and matrices so that we can generalize to multiple regression • Look at KNNL 5.1 to 5.7 if this is unfamiliar to you
