Presentation Transcript


  1. Dr. Ka-fu Wong ECON1003 Analysis of Economic Data

  2. Chapter Fourteen Multiple Regression and Correlation Analysis GOALS
  • Describe the relationship between two or more independent variables and the dependent variable using a multiple regression equation.
  • Compute and interpret the multiple standard error of estimate and the coefficient of determination.
  • Interpret a correlation matrix.
  • Set up and interpret an ANOVA table.
  • Conduct a test of hypothesis to determine if any of the set of regression coefficients differ from zero.
  • Conduct a test of hypothesis on each of the regression coefficients.

  3. Multiple Regression Analysis
  • For two independent variables, the general form of the multiple regression equation is: $Y_i = b_0 + b_1 X_{1i} + b_2 X_{2i} + e_i$
  • $X_{1i}$ and $X_{2i}$ are the i-th observations of the independent variables.
  • $b_0$ is the Y-intercept.
  • $b_1$ is the net change in Y for each unit change in $X_1$, holding $X_2$ constant. It is called a partial regression coefficient, a net regression coefficient, or just a regression coefficient.

  4. Visualize the multiple linear regression in a plot
  The simple linear regression model allows for one independent variable, x: $y = b_0 + b_1 x + e$. [Plot: the fitted line $y = b_0 + b_1 x$ in the $(x, y)$ plane.]

  5. Visualize the multiple linear regression in a plot
  The multiple linear regression model allows for more than one independent variable: $Y = b_0 + b_1 x_1 + b_2 x_2 + e$. Note how the straight line $y = b_0 + b_1 x$ becomes a plane. [Plot: the fitted plane $y = b_0 + b_1 x_1 + b_2 x_2$ over the $(x_1, x_2)$ axes.]

  6. Visualize the multiple non-linear regression in a plot
  [Plot: the parabola $y = b_0 + b_1 x_1^2$, with intercept $b_0$, drawn against $x_1$.]

  7. Visualize the multiple non-linear regression in a plot
  With a second independent variable, $y = b_0 + b_1 x_1^2 + b_2 x_2$: the parabola becomes a parabolic surface. [Plot: the parabolic surface over the $(x_1, x_2)$ axes.]

  8. Multiple Regression Analysis
  • The general multiple regression with k independent variables is given by: $Y_i = b_0 + b_1 X_{1i} + b_2 X_{2i} + \cdots + b_k X_{ki} + e_i$
  • When k > 2, it is impossible to visualize the regression equation in a plot.
  • The least squares criterion is used to estimate this equation.
  • Because determining $b_1$, $b_2$, etc. by hand is very tedious, a software package such as Excel or other statistical software is recommended.

  9. Choosing the line that fits best: Ordinary Least Squares (OLS) Principle
  • Fitted lines (and planes) can be described generally by $Y_i = b_0 + b_1 X_{1i} + b_2 X_{2i} + \cdots + b_k X_{ki}$.
  • Finding the best line means finding the coefficients with the smallest sum of squared differences between the observed and fitted values, as written out below.
  • Let $b_0^*, b_1^*, b_2^*, \ldots, b_k^*$ be the solution of this problem.
  • $Y^* = b_0^* + b_1^* X_1 + b_2^* X_2 + \cdots + b_k^* X_k$ is known as the "average predicted value" (or simply "predicted value") of Y for any vector of $(X_1, X_2, \ldots, X_k)$.
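Written out, the least squares problem referenced above is:

$$
\min_{b_0, b_1, \ldots, b_k} \; \sum_{i=1}^{n} \bigl(Y_i - b_0 - b_1 X_{1i} - b_2 X_{2i} - \cdots - b_k X_{ki}\bigr)^2
$$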

  10. Coefficient estimates from the ordinary least squares (OLS) principle • Solving the minimization problem implies the first order conditions:
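Differentiating the sum of squared differences with respect to each of the k+1 coefficients and setting the derivatives to zero gives the standard normal equations:

$$
\frac{\partial}{\partial b_0} \sum_{i=1}^{n} e_i^2 = -2 \sum_{i=1}^{n} \bigl(Y_i - b_0 - b_1 X_{1i} - \cdots - b_k X_{ki}\bigr) = 0
$$

$$
\frac{\partial}{\partial b_j} \sum_{i=1}^{n} e_i^2 = -2 \sum_{i=1}^{n} X_{ji} \bigl(Y_i - b_0 - b_1 X_{1i} - \cdots - b_k X_{ki}\bigr) = 0, \qquad j = 1, \ldots, k
$$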

  11. Coefficient estimates from the ordinary least squares (OLS) principle
  • Solving the first order conditions yields the solution $b_0^*, b_1^*, b_2^*, \ldots, b_k^*$ (see the matrix form below).
  • $Y^* = b_0^* + b_1^* X_1 + b_2^* X_2 + \cdots + b_k^* X_k$ is known as the "average predicted value" (or simply "predicted value") of Y for any vector of $(X_1, X_2, \ldots, X_k)$.
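In matrix form, stacking the n observations into a vector $y$ and a design matrix $X$ whose first column is all ones, the normal equations have the standard closed-form solution:

$$
b^* = (X'X)^{-1} X' y
$$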

  12. Multiple Linear Regression Equations
  Estimating the coefficients by hand is too complicated, so we use a computational software package such as Excel or other statistical software; a minimal code sketch follows.
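As an illustration of what such a package does under the hood, here is a short Python sketch (not part of the original slides) that solves the least squares problem directly; the simulated model and variable names are assumptions made only for this example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data for illustration only: Y = 2 + 0.5*X1 - 1.0*X2 + noise
n = 100
X1 = rng.uniform(0, 10, n)
X2 = rng.uniform(0, 10, n)
Y = 2 + 0.5 * X1 - 1.0 * X2 + rng.normal(0, 1, n)

# Design matrix with a leading column of ones for the intercept b0
X = np.column_stack([np.ones(n), X1, X2])

# Least squares: minimize the sum of squared differences, b* = (X'X)^(-1) X'Y
b, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(b)  # roughly [2.0, 0.5, -1.0], recovering the simulated coefficients
```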

  13. Interpretation of Estimated Coefficients
  • Slope ($b_k$): estimated Y changes by $b_k$ for each 1-unit increase in $X_k$, holding all other variables constant.
  • Example: if $b_1 = 2$, then Sales (Y) is expected to increase by 2 for each 1-unit increase in Advertising ($X_1$), given the number of sales reps ($X_2$).
  • Y-intercept ($b_0$): average value of Y when all $X_k = 0$.

  14. Parameter Estimation Example
  You work in advertising for the New York Times. You want to find the effect of ad size (sq. in.) and newspaper circulation (000) on the number of ad responses (00). You've collected the following data:

  Resp  Size  Circ
     1     1     2
     4     8     8
     1     3     1
     3     5     7
     2     6     4
     4    10     6

  15. Parameter Estimation Computer Output

  Variable   DF   Parameter Estimate   Standard Error   T for H0: Param=0   Prob>|T|
  INTERCEP    1               0.0640           0.2599               0.246     0.8214
  ADSIZE      1               0.2049           0.0588               3.656     0.0399
  CIRC        1               0.2805           0.0686               4.089     0.0264

  (INTERCEP is $b_0$, ADSIZE is $b_1$, CIRC is $b_2$.)
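The same estimates can be reproduced with any standard regression package; as a sketch, here is one way to do it in Python with statsmodels (an illustration, not part of the original slides):

```python
import numpy as np
import statsmodels.api as sm

# Data from slide 14: responses (00s), ad size (sq. in.), circulation (000s)
resp = np.array([1, 4, 1, 3, 2, 4])
size = np.array([1, 8, 3, 5, 6, 10])
circ = np.array([2, 8, 1, 7, 4, 6])

# Regress responses on ad size and circulation, with an intercept
X = sm.add_constant(np.column_stack([size, circ]))
fit = sm.OLS(resp, X).fit()

print(fit.params)   # approx [0.0640, 0.2049, 0.2805]  (b0, b1, b2)
print(fit.tvalues)  # approx [0.246, 3.656, 4.089]
print(fit.pvalues)  # approx [0.8214, 0.0399, 0.0264]
```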

  16. Interpretation of Coefficients Solution
  • Slope ($b_1$): the number of responses to the ad is expected to increase by .2049 (that is, 20.49 responses) for each 1 sq. in. increase in ad size, holding circulation constant.
  • Slope ($b_2$): the number of responses to the ad is expected to increase by .2805 (that is, 28.05 responses) for each 1-unit (1,000) increase in circulation, holding ad size constant.

  17. Multiple Standard Error of Estimate
  • The multiple standard error of estimate is a measure of the effectiveness of the regression equation.
  • It is measured in the same units as the dependent variable.
  • It is difficult to determine what is a large value and what is a small value of the standard error.

  18. Multiple Standard Error of Estimate
  • The formula is: $s_{Y \cdot 12 \ldots k} = \sqrt{\dfrac{\sum (Y_i - Y_i^*)^2}{n - (k+1)}}$
  • Interpretation is similar to that in simple linear regression.

  19. Multiple Regression and Correlation Assumptions
  • The independent variables and the dependent variable have a linear relationship.
  • The dependent variable must be continuous and at least interval-scale.
  • The variation in the residuals (Y - Y*) must be the same for all values of Y. When this is the case, we say the residuals exhibit homoscedasticity.
  • The residuals should be normally distributed with mean 0.
  • Successive values of the dependent variable must be uncorrelated.

  20. The ANOVA Table
  • The ANOVA table reports the variation in the dependent variable. The variation is divided into two components.
  • The Explained Variation is that accounted for by the set of independent variables.
  • The Unexplained or Random Variation is not accounted for by the independent variables.

  21. Correlation Matrix
  • A correlation matrix is used to show all possible simple correlation coefficients among the variables.
  • See which $x_j$ are most correlated with y, and which $x_j$ are strongly correlated with each other.

  22. Multicollinearity
  • Multicollinearity is high correlation between the X variables.
  • It makes it difficult to separate the effect of $x_1$ on y from the effect of $x_2$ on y, and leads to unstable coefficients that depend on which X variables are in the model.
  • It always exists; it is a matter of degree.
  • Example: using both age and height as explanatory variables in the same model.

  23. Detecting Multicollinearity
  • Examine the correlation matrix: suspect multicollinearity when correlations between pairs of X variables are higher than their correlations with the Y variable.
  • There are few remedies:
  • Obtain new sample data.
  • Eliminate one of the correlated X variables.

  24. Correlation Analysis: Correlation Matrix Computer Output
  Pearson Corr Coeff / Prob>|R| under H0: Rho=0 / N=6

             RESPONSE   ADSIZE     CIRC
  RESPONSE    1.00000   0.90932   0.93117
              0.0       0.0120    0.0069
  ADSIZE      0.90932   1.00000   0.74118
              0.0120    0.0       0.0918
  CIRC        0.93117   0.74118   1.00000
              0.0069    0.0918    0.0

  The diagonal entries are all 1. $r_{Y1}$ = 0.90932 is the correlation between RESPONSE and ADSIZE; $r_{Y2}$ = 0.93117 is the correlation between RESPONSE and CIRC; $r_{12}$ = 0.74118 is the correlation between ADSIZE and CIRC.
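As a quick check (not part of the original slides), the same correlation matrix can be computed in Python with numpy from the slide-14 data:

```python
import numpy as np

# Slide 14 data: responses, ad size, circulation
resp = np.array([1, 4, 1, 3, 2, 4])
size = np.array([1, 8, 3, 5, 6, 10])
circ = np.array([2, 8, 1, 7, 4, 6])

# Rows of the stacked array are the variables: RESPONSE, ADSIZE, CIRC
R = np.corrcoef(np.vstack([resp, size, circ]))
print(np.round(R, 5))
# [[1.      0.90932 0.93117]
#  [0.90932 1.      0.74118]
#  [0.93117 0.74118 1.     ]]
```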

  25. Global Test
  • The global test is used to investigate whether any of the independent variables have significant coefficients. The hypotheses are written out below.
  • The test statistic follows an F distribution with k (the number of independent variables) and n-(k+1) degrees of freedom, where n is the sample size.
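The standard form of the hypotheses and of the test statistic (built from the explained and unexplained variation in the ANOVA table) is:

$$
H_0:\ \beta_1 = \beta_2 = \cdots = \beta_k = 0 \qquad\text{vs.}\qquad H_1:\ \text{not all } \beta_j \text{ are } 0
$$

$$
F = \frac{SSR/k}{SSE/\bigl[n-(k+1)\bigr]}
$$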

  26. Test for Individual Variables
  • This test is used to determine which independent variables have nonzero regression coefficients.
  • The variables that have zero regression coefficients are usually dropped from the analysis.
  • The test statistic follows the t distribution with n-(k+1) degrees of freedom.
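For each coefficient, with $b_j$ the estimate and $s_{b_j}$ its standard error, the standard form of the test is:

$$
H_0:\ \beta_j = 0 \qquad\text{vs.}\qquad H_1:\ \beta_j \neq 0, \qquad t = \frac{b_j}{s_{b_j}}
$$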

  27. EXAMPLE 1
  A market researcher for Super Dollar Super Markets is studying the yearly amount families of four or more spend on food. Three independent variables are thought to be related to yearly food expenditures (Food). Those variables are: total family income (Income) in $00, size of family (Size), and whether the family has children in college (College).

  28. Example 1 continued
  Note the following regarding the regression equation.
  • The variable College is called a dummy or indicator variable. It can take only one of two possible outcomes: a child is a college student or not.
  • Other examples of dummy variables include gender, whether a part is acceptable or unacceptable, and whether a voter will or will not vote for the incumbent governor.
  • We usually code one value of the dummy variable as "1" and the other as "0."

  29. EXAMPLE 1 continued

  30. EXAMPLE 1 continued
  • Use a computer software package, such as Excel, to develop a correlation matrix.
  • From the analysis provided by Excel, write out the regression equation: $Y^* = 954 + 1.09 X_1 + 748 X_2 + 565 X_3$
  • What food expenditure would you estimate for a family of 4, with no college students, and an income of $50,000 (which is input as 500)?

  31. EXAMPLE 1 continued
  The regression equation is: Food = 954 + 1.09 Income + 748 Size + 565 Student

  Predictor   Coef     SE Coef   T      P
  Constant    954      1581      0.60   0.563
  Income      1.092    3.153     0.35   0.738
  Size        748.4    303.0     2.47   0.039
  Student     564.5    495.1     1.14   0.287

  S = 572.7   R-Sq = 80.4%   R-Sq(adj) = 73.1%

  Analysis of Variance
  Source           DF   SS         MS        F       P
  Regression        3   10762903   3587634   10.94   0.003
  Residual Error    8   2623764    327970
  Total            11   13386667

  32. EXAMPLE 1 continued
  From the regression output we note:
  • The coefficient of determination is 80.4 percent. This means that more than 80 percent of the variation in the amount spent on food is accounted for by the variables income, family size, and student.
  • Each additional $100 of income per year is estimated to increase the amount spent on food by $1.09 per year (about $109 per additional $10,000 of income).
  • An additional family member will increase the amount spent per year on food by $748.
  • A family with a college student will spend $565 more per year on food than those without a college student.

  33. EXAMPLE 1 continued
  • The correlation matrix is as follows:

             Food    Income   Size
    Income   0.587
    Size     0.876   0.609
    Student  0.773   0.491    0.743

  • The strongest correlation between the dependent variable and an independent variable is between family size and amount spent on food.
  • The correlations among the independent variables should not cause serious problems: all are within the rule-of-thumb range of -.70 to .70 except that between size and student (.743), which only slightly exceeds it.

  34. EXAMPLE 1 continued
  The estimated food expenditure for a family of 4 with an income of 500 (that is, $50,000) and no college student is $4,491:
  $Y^* = 954 + 1.09(500) + 748(4) + 565(0) = 4491$

  35. EXAMPLE 1 continued
  • Conduct a global test of hypothesis to determine if any of the regression coefficients are not zero.
  • H0 is rejected if F > 4.07, the 5% critical value with 3 and 8 degrees of freedom.
  • From the computer output, the computed value of F is 10.94.
  • Decision: H0 is rejected. Not all the regression coefficients are zero.
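The critical value and p-value can be verified in Python with scipy (an illustration, not part of the original slides):

```python
from scipy import stats

# 5% critical value of F with k = 3 and n - (k+1) = 12 - 4 = 8 degrees of freedom
print(stats.f.ppf(0.95, dfn=3, dfd=8))  # approx 4.07

# p-value of the observed statistic F = 10.94 from the ANOVA table
print(stats.f.sf(10.94, dfn=3, dfd=8))  # approx 0.003
```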

  36. EXAMPLE 1 continued
  • Conduct an individual test to determine which coefficients are not zero. For each variable (illustrated here for family size), the hypotheses are H0: the coefficient equals zero versus H1: it does not.
  • Using the 5% level of significance, reject H0 if the p-value < .05.
  • From the computer output, the only significant variable is SIZE (family size, p = .039). The other variables can be omitted from the model.

  37. EXAMPLE 1 continued
  • We rerun the analysis using only the significant independent variable, family size.
  • The new regression equation is: $Y^* = 340 + 1031 X_2$
  • The coefficient of determination is 76.8 percent. We dropped two independent variables, and the R-square term was reduced by only 3.6 percentage points.

  38. Example 1 continued
  Regression Analysis: Food versus Size
  The regression equation is: Food = 340 + 1031 Size

  Predictor   Coef     SE Coef   T      P
  Constant    339.7    940.7     0.36   0.726
  Size        1031.0   179.4     5.75   0.000

  S = 557.7   R-Sq = 76.8%   R-Sq(adj) = 74.4%

  Analysis of Variance
  Source           DF   SS         MS         F       P
  Regression        1   10275977   10275977   33.03   0.000
  Residual Error   10   3110690    311069
  Total            11   13386667

  39. Evaluating the Model
  • Most of the procedures used to evaluate the multiple regression model $y_i^* = b_0 + b_1 x_{1i} + \cdots + b_k x_{ki}$ are the same as those discussed in the chapter on simple regression models:
  • Residual analysis
  • Test for linearity
  • Global F-test (in multiple regression, a test on the coefficient of correlation is irrelevant, and a test on an individual slope alone is not enough)
  • Non-independence of the error variables: the Durbin-Watson statistic
  • Outliers

  40. Chapter Fourteen Multiple Regression and Correlation Analysis - END -
