
Week 13 November 24-28





Presentation Transcript


  1. Week 13 November 24-28 Three Mini-Lectures QMM 510 Fall 2014

  2. Multicollinearity ML 13.1 Chapter 13 What Is Multicollinearity? • Multicollinearity occurs when the independent variables X1, X2, …, Xm are intercorrelated instead of being independent. • Collinearity occurs if only two predictors are correlated. • The degree of multicollinearity is the real concern.

  3. Multicollinearity Chapter 13 Variance Inflation • Multicollinearity induces variance inflation in the estimation of the regression model coefficients. • This results in wider confidence intervals for the true coefficients b1, b2, …, bk and makes the t statistic less reliable. • The separate contribution of each predictor in “explaining” the response variable is difficult to identify.

  4. Multicollinearity Chapter 13 Correlation Matrix To check whether two predictors are correlated (collinearity), inspect the correlation matrix using Excel, MegaStat, or MINITAB. For example:
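
A correlation matrix is also easy to produce outside those packages. Below is a minimal Python sketch (pandas and numpy assumed available) using made-up data in which X2 is deliberately built to be nearly a copy of X1; the names df, X1, X2, X3 are illustrative only, and the same data set is reused in the later sketches.

```python
import numpy as np
import pandas as pd

# Made-up data: X2 is constructed to be nearly a copy of X1,
# so two of the predictors are collinear by design.
rng = np.random.default_rng(0)
n = 50
df = pd.DataFrame({"X1": rng.normal(size=n), "X3": rng.normal(size=n)})
df["X2"] = df["X1"] + rng.normal(scale=0.1, size=n)

# Pairwise Pearson correlations among the predictors
print(df[["X1", "X2", "X3"]].corr().round(2))
```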

  5. Multicollinearity Chapter 13 Variance Inflation Factor (VIF) • The matrix scatter plots and correlation matrix only show correlations between any two predictors. • The variance inflation factor (VIF) is a more comprehensive test for multicollinearity. • For a given predictor j, the VIF is defined as VIFj = 1 / (1 − Rj2), where Rj2 is the coefficient of determination when predictor j is regressed against all other predictors.
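
A minimal sketch of that computation, continuing with the made-up df from the correlation-matrix sketch above: each predictor is regressed on the others and Rj2 is converted to a VIF. The helper name vif is illustrative; statsmodels also ships a variance_inflation_factor function that does the same job.

```python
import numpy as np
import statsmodels.api as sm

def vif(X):
    """VIFj = 1 / (1 - Rj^2), where Rj^2 comes from regressing
    column j of X on all of the other columns."""
    X = np.asarray(X, dtype=float)
    vifs = []
    for j in range(X.shape[1]):
        others = sm.add_constant(np.delete(X, j, axis=1))
        r2 = sm.OLS(X[:, j], others).fit().rsquared
        vifs.append(1.0 / (1.0 - r2))
    return vifs

print([round(v, 1) for v in vif(df[["X1", "X2", "X3"]])])
```

Because X2 was built as a near-copy of X1, their VIFs come out large, while X3's stays near 1.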

  6. Multicollinearity Chapter 13 Variance Inflation Factor (VIF) Some possible situations are: Rj2 = 0 gives VIFj = 1 (no multicollinearity); Rj2 = .50 gives VIFj = 2; Rj2 = .90 gives VIFj = 10; Rj2 = .99 gives VIFj = 100 (severe multicollinearity).

  7. Multicollinearity Chapter 13 Rules of Thumb • There is no limit on the magnitude of the VIF. • A VIF of 10 says that the other predictors “explain” 90% of the variation in predictor j. • A high VIF indicates that predictor j is strongly related to the other predictors. • However, a high VIF is not necessarily indicative of instability in the least squares estimates. • A large VIF is a warning to consider whether predictor j really belongs to the model.

  8. Multicollinearity Chapter 13 Are Coefficients Stable? • Evidence of instability: X1 and X2 have a high pairwise correlation with Y, yet one or both predictors have insignificant t statistics in the fitted multiple regression; and/or X1 and X2 are both positively correlated with Y, yet one has a negative slope in the multiple regression. • As a test, try dropping a collinear predictor from the regression and see what happens to the fitted coefficients in the re-estimated model (see the sketch below). • If they don’t change much, then multicollinearity is not a concern. • If there are sharp changes in one or more of the remaining coefficients in the model, then multicollinearity may be causing instability.
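
A minimal sketch of that check, continuing with the collinear df built earlier: add a made-up response, fit the full model, then re-fit without the near-duplicate predictor X2 and compare the coefficients. The names full and reduced are illustrative only.

```python
import statsmodels.formula.api as smf

# Made-up response that actually depends on X1 and X3 only
df["Y"] = 2 + 1.5 * df["X1"] + 0.5 * df["X3"] + rng.normal(size=n)

full    = smf.ols("Y ~ X1 + X2 + X3", data=df).fit()
reduced = smf.ols("Y ~ X1 + X3", data=df).fit()   # drop the collinear X2

# Sharp changes in the surviving coefficients would point to instability.
print(full.params.round(3))
print(reduced.params.round(3))
```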

  9. Tests of Assumptions ML 13.2 Chapter 13 Three Important Assumptions • The errors are normally distributed. • The errors have constant variance (i.e., they are homoscedastic). • The errors are independent (i.e., they are nonautocorrelated). Note: Everything you learned about residual tests in Chapter 12 is still true, except that the residuals are now based on multiple predictors (k ≥ 2). This may affect the degrees of freedom in some tests, but the concepts are basically the same.

  10. Residual Tests Chapter 13 Non-Normal Errors • Non-normality of errors is a mild violation since the parameter estimates and their variances remain unbiased and consistent. • Confidence intervals for the parameters may be untrustworthy because the normality assumption is used to justify using Student’s t. Tests for Non-Normal Errors • Quick test: Make a histogram of residuals. Is it bell-shaped? Outliers? • For a more precise test, use the normal probability plot. If H0 is true, the residual plot should be linear. • H0: Errors are normally distributed; H1: Errors are not normally distributed.
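
Both quick checks are easy to run on the residuals of the full model fitted in the earlier sketch (scipy and matplotlib assumed available):

```python
import matplotlib.pyplot as plt
from scipy import stats

resid = full.resid                               # residuals from the earlier fit

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
ax1.hist(resid, bins=10, edgecolor="black")      # quick test: bell-shaped? outliers?
ax1.set_title("Histogram of residuals")
stats.probplot(resid, dist="norm", plot=ax2)     # should be roughly linear under H0
ax2.set_title("Normal probability plot")
plt.tight_layout()
plt.show()
```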

  11. Residual Tests Chapter 13 Example: Normality Test (n = 50, k = 7) MegaStat’s normal probability plot is somewhat linear, but has some weird values at either end. Possible non-normal residuals? But a histogram of residuals from Minitab’s Stat > Graphical Summary is arguably normal in shape. From its p-value (.24) we would not reject normality. Note that the mean of the residuals is zero, as it must be.

  12. Residual Tests Chapter 13 Heteroscedastic Errors (Nonconstant Variance) • Ideally, the error magnitude is constant (i.e., homoscedastic errors). Heteroscedastic errors increase or decrease with X. • In multiple regression, we have several X’s, so for simplicity we often just plot the n residuals against the fitted Y values. • In the most common form of heteroscedasticity, the variances of the estimators may be understated, the t statistics overstated, and confidence intervals artificially narrow. Tests for Heteroscedasticity • Plot the residuals against the fitted Y values or against each of the k predictors. Ideally, there is no pattern in the residuals moving from left to right.
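
A minimal residuals-versus-fitted plot for the model fitted in the earlier sketch; under homoscedasticity the vertical spread should look about the same from left to right.

```python
import matplotlib.pyplot as plt

plt.scatter(full.fittedvalues, full.resid)
plt.axhline(0, color="gray", linewidth=1)   # zero centerline
plt.xlabel("Fitted Y")
plt.ylabel("Residual")
plt.title("Residuals vs. fitted values")
plt.show()
```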

  13. Residual Tests Chapter 13 Example: Regression with 7 Predictors • Plots of residuals against the fitted Y values (shown in blue on the original slide) or against each of the k predictors (shown in pink) show evidence of heteroscedasticity. Note that one predictor was binary (X = 0 or 1).

  14. Residual Tests Chapter 13 Autocorrelated Errors • Autocorrelation is a pattern of non-independent errors. • It is of more concern in time series data because the natural order of the data is meaningful (whereas in cross-sectional data the order of observations is often alphabetical or randomized). • In a first-order autocorrelation, et is correlated with et−1. • The estimated variances of the OLS estimators are biased, resulting in confidence intervals that are too narrow, overstating the model’s fit.

  15. Residual Tests Chapter 13 Runs Test for Autocorrelation • Look at a plot of residuals over time (or by observation). Count the number of sign reversals (i.e., how often does the residual cross the zero centerline?). • If the pattern is random, the number of sign changes should be near n/2. • Fewer than n/2 would suggest positive autocorrelation. • More than n/2 would suggest negative autocorrelation. Durbin-Watson (DW) Test • Tests for autocorrelation under the hypotheses H0: Errors are non-autocorrelated; H1: Errors are autocorrelated. • The DW statistic will range from 0 to 4. DW < 2 suggests positive autocorrelation; DW = 2 suggests no autocorrelation (ideal); DW > 2 suggests negative autocorrelation.
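
Both checks are simple to compute by hand. The sketch below counts sign changes in the residual sequence and computes DW = Σ(et − et−1)² / Σ et² for the model fitted in the earlier sketch; statsmodels also provides a durbin_watson helper that gives the same statistic.

```python
import numpy as np

e = np.asarray(full.resid)

# Runs check: how often does the residual cross the zero centerline?
sign_changes = int(np.sum(np.sign(e[1:]) != np.sign(e[:-1])))

# Durbin-Watson statistic; roughly DW = 2(1 - r1), where r1 is the
# lag-1 autocorrelation of the residuals.
dw = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

print(sign_changes, "sign changes (compare with n/2);  DW =", round(dw, 2))
```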

  16. Residual Tests Chapter 13 Example: Autocorrelation Test (n = 50, k = 7) • Here is MegaStat’s plot of residuals by observation (n = 50). Count the number of sign reversals (i.e., how often does the residual cross the zero centerline). If the pattern is random, the number of sign changes should be near n/2 and the Durbin-Watson statistic should be near 2. Here there are 28 sign changes (close to n/2 = 50/2 = 25) and DW is near 2, so there is not much evidence of autocorrelation.

  17. Residual Tests Chapter 13 Example: Excel’s Tests of Assumptions Excel’s Data Analysis > Regression does residual plots (test for heteroscedasticity) and gives the DW test statistic. Excel’s standardized residuals are done in a strange way, but usually they are not misleading. Warning: Excel offers normal probability plots for residuals, but they are done incorrectly.

  18. Residual Tests Chapter 13 Example: MegaStat’s Tests of Assumptions MegaStat will do all three tests (if you check the boxes). Its runs plot (residuals by observation) is a visual test for autocorrelation, which Excel does not offer.

  19. Other Regression Topics ML 13.3 Chapter 13 • Outliers? (omit only if clearly errors) • Missing Predictors? (usually you can’t tell) • Ill-Conditioned Data (adjust decimals or take logs) • Significance in Large Samples? (if n is huge, any regression will be significant) • Model Specification Errors? (may show up in residual patterns) • Missing Data? (we may have to live without it) • Binary Response? (if Y = 0,1 we use logistic regression) • Stepwise and Best Subsets Regression (MegaStat does these)

  20. Tests for Nonlinearity and Interaction Chapter 13 Tests for Nonlinearity • Sometimes the effect of a predictor is nonlinear. • A simple example would be estimating the volume of lumber to be obtained from a tree. • To test for suspected nonlinearity of any predictor variable, we can include its square in the regression. Tests for Interaction • We can test for interaction between two predictors by including their product in the regression (a model with an x1x2 interaction term); see the sketch below.
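
A minimal sketch of both tests using statsmodels formulas, reusing the made-up df from the earlier sketches (X3 plays the role of the slide's second predictor, since X2 was built to be collinear with X1): the I(X1**2) term tests nonlinearity in X1, and the X1:X3 term tests for interaction.

```python
import statsmodels.formula.api as smf

quad  = smf.ols("Y ~ X1 + I(X1**2) + X3", data=df).fit()   # squared term for nonlinearity
inter = smf.ols("Y ~ X1 + X3 + X1:X3", data=df).fit()      # product term for interaction

print(quad.summary().tables[1])    # is the I(X1**2) coefficient significant?
print(inter.summary().tables[1])   # is the X1:X3 coefficient significant?
```

If the squared or product term has an insignificant coefficient, there is little evidence of nonlinearity or interaction, respectively.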

  21. Tests for Nonlinearity and Interaction Chapter 13 Tests for Nonlinearity

  22. Stepwise Regression Chapter 13 Example: MegaStat Caution: This is basically a data-mining tool that looks only at fit (not at causal logic). Use it only as a check.
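
For readers without MegaStat, here is a minimal forward-selection sketch of the same idea (this is not MegaStat's algorithm; it simply adds whichever candidate predictor lowers AIC the most and stops when nothing helps). The function name forward_stepwise is illustrative, and df is the made-up data set from the earlier sketches.

```python
import statsmodels.formula.api as smf

def forward_stepwise(data, response, candidates):
    """Greedy forward selection by AIC -- a fit-chasing sketch, not a
    substitute for causal reasoning about which predictors belong."""
    chosen, remaining = [], list(candidates)
    best_aic = smf.ols(f"{response} ~ 1", data=data).fit().aic
    while remaining:
        aics = {x: smf.ols(f"{response} ~ {' + '.join(chosen + [x])}",
                           data=data).fit().aic for x in remaining}
        best_x = min(aics, key=aics.get)
        if aics[best_x] >= best_aic:
            break                        # no candidate improves the fit
        best_aic = aics[best_x]
        chosen.append(best_x)
        remaining.remove(best_x)
    return chosen

print(forward_stepwise(df, "Y", ["X1", "X2", "X3"]))
```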

  23. Chapter Summary Chapter 13

  24. Chapter Summary Chapter 13
