
Presentation Transcript


  1. Chapter Seven: The Two-Variable Model: Hypothesis Testing

  2. The Classical Linear Regression Model • Assumptions • The regression model is linear in the parameters • The explanatory variables, X, are uncorrelated with the error term • This holds automatically if the X's are nonstochastic (fixed numbers, as in conditional regression analysis) • Stochastic X's that are correlated with the error term require simultaneous-equations models • Given the value of Xi, the expected value of the disturbance term is zero: E(u|Xi) = 0. See Fig. 7-1. The model and this assumption are written out below.
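In the deck's notation (capital B's for population parameters, lowercase b's for their OLS estimators), the two-variable population regression function (PRF) and the zero-conditional-mean assumption are:

\[
Y_i = B_1 + B_2 X_i + u_i, \qquad i = 1, \dots, n
\]
\[
E(u_i \mid X_i) = 0 \quad\Longrightarrow\quad E(Y_i \mid X_i) = B_1 + B_2 X_i
\]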

  3. Figure 7-1 Conditional distribution of disturbances ui.

  4. More Assumptions of the CLRM • The variance of each ui is constant (homoscedastic): var(ui) = σ2 • Individual Y values are spread around their mean with the same variance. See Fig. 7-2(a) • Unequal variance is heteroscedasticity, Fig. 7-2(b) • There is no correlation across the error terms, i.e., no autocorrelation. See Fig. 7-3 • Cov(ui, uj) = 0 for i ≠ j, i.e., the ui are random • The model is correctly specified (no specification error or specification bias). A small simulation contrasting the two error structures follows.
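As an illustration (not part of the original deck), here is a minimal NumPy sketch that generates one disturbance series with constant variance and one whose spread grows with X, the heteroscedastic pattern of Fig. 7-2(b):

import numpy as np

rng = np.random.default_rng(42)
n = 200
X = np.linspace(1, 10, n)

# Homoscedastic disturbances: var(u_i) = sigma^2 is the same for every i
u_homo = rng.normal(loc=0.0, scale=2.0, size=n)

# Heteroscedastic disturbances: the standard deviation grows with X,
# violating the constant-variance assumption of the CLRM
u_hetero = rng.normal(loc=0.0, scale=0.5 * X)

# Compare the sample spread in the low-X and high-X halves of each series
for name, u in (("homoscedastic", u_homo), ("heteroscedastic", u_hetero)):
    print(name,
          "sd in low-X half:", round(float(u[: n // 2].std()), 2),
          "sd in high-X half:", round(float(u[n // 2 :].std()), 2))

For the homoscedastic series the two halves show roughly equal spread; for the heteroscedastic series the high-X half is visibly noisier.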

  5. Figure 7-2 (a) Homoscedasticity (equal variance); (b) Heteroscedasticity (unequal variance).

  6. Figure 7-3 Patterns of autocorrelation: (a) no autocorrelation; (b) positive autocorrelation; (c) negative autocorrelation.

  7. Variances and Standard Errors • The CLRM assumptions allow us to estimate the variances and standard errors of the OLS estimators (written out below). • Note • n - 2 is the degrees of freedom (in general n - k, where k is the number of estimated parameters) • The standard error of the regression is the square root of the estimated error variance
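These formulas appeared as images in the source deck; the standard two-variable results they refer to, with xi = Xi − X̄ denoting deviations from the mean and ei the residuals, are:

\[
\operatorname{var}(b_2) = \frac{\sigma^2}{\sum x_i^2},
\qquad
\operatorname{var}(b_1) = \frac{\sum X_i^2}{n \sum x_i^2}\,\sigma^2,
\qquad
\hat{\sigma}^2 = \frac{\sum e_i^2}{n-2}
\]

Replacing σ2 with its unbiased estimator σ̂2 and taking square roots gives the standard errors; σ̂ itself is the standard error of the regression.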

  8. Table 7-1 Computations for the Lotto example.

  9. Gauss-Markov Theorem • Given the assumptions of the CLRM, the OLS estimators are BLUE (best linear unbiased estimators). • b1 and b2 are linear estimators (linear functions of the Yi). • E(b1) = B1 and E(b2) = B2: across repeated samples the estimators equal the true values on average (unbiased). • The estimator of σ2 is also unbiased. • b1 and b2 are efficient estimators (minimum variance among all linear unbiased estimators).

  10. Sampling Distributions of OLS Estimators • The OLS estimators are normally distributed under the assumption that the error term ui of the PRF is normally distributed • b1 ~ N(B1, σb12), b2 ~ N(B2, σb22) • ui ~ N(0, σ2) • This follows from the property that any linear function of normally distributed variables is itself normally distributed; by the Central Limit Theorem it also holds approximately in large samples even without normal errors. A small Monte Carlo illustration follows.
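As an illustration (not in the original deck), a minimal NumPy Monte Carlo sketch: it repeatedly draws normal disturbances for a fixed X, re-estimates the slope by OLS, and confirms that the b2 estimates center on the true B2 with the theoretical standard deviation:

import numpy as np

rng = np.random.default_rng(0)
B1, B2, sigma = 2.0, 0.5, 1.0        # true PRF parameters (chosen arbitrarily)
X = np.linspace(1, 20, 50)           # fixed (nonstochastic) regressor
x = X - X.mean()                     # deviations from the mean

b2_draws = np.empty(10_000)
for r in range(b2_draws.size):
    u = rng.normal(0.0, sigma, size=X.size)                 # u_i ~ N(0, sigma^2)
    Y = B1 + B2 * X + u
    b2_draws[r] = np.sum(x * (Y - Y.mean())) / np.sum(x**2) # OLS slope formula

print("mean of b2 estimates:", b2_draws.mean())    # close to B2 (unbiased)
print("sd of b2 estimates:  ", b2_draws.std())     # close to the theoretical value
print("theoretical sd:      ", sigma / np.sqrt(np.sum(x**2)))

A histogram of b2_draws would reproduce the bell shape of Fig. 7-4.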

  11. Figure 7-4 (Normal) sampling distributions of b1 and b2.

  12. Hypothesis Testing • Suppose we want to test the hypothesis H0: B2 = 0 • Since b2 is normally distributed, we could use the standard normal distribution to test hypotheses about its mean, except that its true variance is unknown • With the estimated variance we instead use the t distribution with n - 2 degrees of freedom • t = (estimator − hypothesized value)/standard error, as written out below
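In the deck's notation, the test statistic is the usual t ratio:

\[
t = \frac{b_2 - B_2^{*}}{\operatorname{se}(b_2)} \sim t_{n-2}
\qquad \text{under } H_0\!: B_2 = B_2^{*} \text{ (here } B_2^{*} = 0\text{)}
\]

H0 is rejected when |t| exceeds the critical value of the t distribution with n − 2 degrees of freedom at the chosen significance level (or, for a one-tailed test, when t falls in the single rejection region of Fig. 7-7).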

  13. Figure 7-7 One-tailed t test: (a) Right-tailed; (b) left-tailed.

  14. Coefficient of Determination or r2 • How good is the fitted regression line? • Write the regression relationship in terms of deviations from mean values, then square it and sum over the sample • Each part of the resulting sum can be interpreted individually (see the decomposition below)

  15. Coefficient of Determination or r2 • The Total Sum of Squares (TSS) is composed of the Explained Sum of Squares (ESS) and the Residual Sum of Squares (RSS) • The r2 indicates the proportion of the total variation in Y explained by the sample regression function (SRF)
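Written out, the decomposition and the resulting r2 are:

\[
\underbrace{\sum (Y_i - \bar{Y})^2}_{\text{TSS}}
= \underbrace{\sum (\hat{Y}_i - \bar{Y})^2}_{\text{ESS}}
+ \underbrace{\sum e_i^2}_{\text{RSS}},
\qquad
r^2 = \frac{\text{ESS}}{\text{TSS}} = 1 - \frac{\text{RSS}}{\text{TSS}}
\]

Since ESS and RSS are both nonnegative, r2 lies between 0 (the SRF explains none of the variation in Y) and 1 (a perfect fit).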

  16. Figure 7-8 Breakdown of the total variation in Yi.

  17. Reporting Results of Regression Analysis • For simple regression, the convention is to report the fitted equation with standard errors, t ratios, r2, and degrees of freedom, as sketched below for the Lotto example's format • For multiple equations and/or explanatory variables see Table II in schooltrans.doc.
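The Lotto regression's numbers appeared only as an image in the source and are not reproduced here; the conventional reporting format the slide refers to is:

\[
\hat{Y}_i = b_1 + b_2 X_i
\]

with the standard errors se(b1) and se(b2) in parentheses beneath the respective coefficients, the t ratios beneath the standard errors, and r2 and the degrees of freedom (n − 2) reported alongside the equation.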

  18. Caution: Forecasting • We can calculate an estimate of Y for any given value of X using the regression results • But as the chosen X value departs from the mean of X, the variance of the Y estimate increases (see the formula below) • Consider the Lotto example, Fig. 7-14 • Forecasts of Y for X's far from their mean and/or outside the range of the sample are unreliable and should be avoided.
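The widening band in Fig. 7-14 follows from the textbook formula for the variance of the predicted mean value of Y at a chosen point X0 (a standard result, not shown explicitly on the slide):

\[
\operatorname{var}(\hat{Y}_0) = \sigma^2 \left[ \frac{1}{n} + \frac{(X_0 - \bar{X})^2}{\sum x_i^2} \right],
\qquad x_i = X_i - \bar{X}
\]

The second term grows with the squared distance of X0 from X̄, which is exactly why the 95% confidence band fans out toward the edges of the sample.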

  19. Figure 7-14 95% confidence band for the true Lotto expenditure function.
