
Chapter 4 Regression Models


Presentation Transcript


  1. Chapter 4 Regression Models Prepared by Lee Revere and John Large

  2. Learning Objectives Students will be able to: • Identify variables and use them in a regression model. • Develop simple linear regression equations from sample data and interpret the slope and intercept. • Compute the coefficient of determination and the coefficient of correlation and interpret their meanings. • Interpret the F-test in a linear regression model. • List the assumptions used in regression and use residual plots to identify problems.

  3. Learning Objectives (continued) Students will be able to: • Develop a multiple regression model and use it to predict. • Use dummy variables to model categorical data. • Determine which variables should be included in a multiple regression model. • Transform a nonlinear function into a linear one for use in regression. • Understand and avoid common mistakes made in the use of regression analysis.

  4. Chapter Outline 4.1 Introduction 4.2 Scatter Diagrams 4.3 Simple Linear Regression 4.4 Measuring the Fit of a Regression Model 4.5 Using Computer Software for Regression 4.6 Assumptions of the Regression Model

  5. Chapter Outline (continued) 4.7 Testing the Model for Significance 4.8 Multiple Regression Analysis 4.9 Binary or Dummy Variables 4.10 Model Building 4.11 Nonlinear Regression 4.12 Cautions and Pitfalls in Regression Analysis

  6. Introduction Regression analysis is a very valuable tool for today's manager. Regression is used to: • understand the relationship between variables. • predict the value of one variable based on another variable. Cost estimation models are a good example.

  7. Introduction (continued) A regression model consists of a dependent (response) variable and one or more independent (predictor) variables, with the independent variable(s) used to predict the dependent variable: Dependent Variable = f(Independent Variable(s)).

  8. Scatter Diagram A scatter diagram is used to graphically investigate the relationship between the dependent and independent variables. • Plot the dependent variable on the Y axis. • Plot the independent variable on the X axis.

  9. Triple A Construction Example Triple A Construction Company renovates old homes in Albany. The company has found that its dollar volume of renovation work depends on the Albany area payroll.

  10. Triple A Construction Example (continued) (Scatter diagram of the data: dollar volume of renovation work, the dependent variable, on the Y axis; Albany area payroll, the independent variable, on the X axis.)

  11. Simple Linear Regression Regression models are used to test whether a relationship exists between variables, that is, to use one variable to predict another. However, there is some random error that cannot be predicted. The true model is Y = β₀ + β₁X + ε, where Y = dependent variable (response), X = independent variable (predictor / explanatory), β₀ = intercept (value of Y when X = 0), β₁ = slope of the regression line, and ε = random error.

  12. Simple Linear Regression (continued) Sample data are used to estimate the true values of the intercept and slope: Ŷ = b₀ + b₁X, where Ŷ = predicted value of Y. The difference between the actual value of Y and the predicted value (using sample data) is known as the error: e = Y − Ŷ, i.e., error = (actual value) − (predicted value).

  13. Least Squares Regression Least squares regression minimizes the sum of the squared errors, Σe² = Σ(Y − Ŷ)².

  14. Least Squares Regression Equations The least squares regression equations are: Ŷ = b₀ + b₁X, with b₁ = Σ(X − X̄)(Y − Ȳ) / Σ(X − X̄)² and b₀ = Ȳ − b₁X̄.

  15. Calculating the Regression Line: Triple A Construction Summations for each column: ΣY = 42, ΣX = 24, Σ(X − X̄)² = 10, Σ(X − X̄)(Y − Ȳ) = 12.5. Means: Ȳ = 42/6 = 7, X̄ = 24/6 = 4.

  16. Calculating the Regression Line (continued) Calculating the required parameters: b₁ = Σ(X − X̄)(Y − Ȳ) / Σ(X − X̄)² = 12.5 / 10 = 1.25, and b₀ = Ȳ − b₁X̄ = 7 − (1.25)(4) = 2. So, Ŷ = 2 + 1.25X.
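
To make the arithmetic concrete, here is a minimal Python/NumPy sketch of the formulas above. The six (payroll, sales) pairs are illustrative values chosen to match the slide's summations (ΣX = 24, ΣY = 42, Σ(X − X̄)² = 10, Σ(X − X̄)(Y − Ȳ) = 12.5); the units assumed (payroll in hundreds of millions of dollars) follow from slide 17.

```python
import numpy as np

# Illustrative data consistent with the slide's summations.
X = np.array([3.0, 4.0, 6.0, 4.0, 2.0, 5.0])   # payroll; sum = 24, mean = 4
Y = np.array([6.0, 8.0, 9.0, 5.0, 4.5, 9.5])   # sales;   sum = 42, mean = 7

x_bar, y_bar = X.mean(), Y.mean()
b1 = ((X - x_bar) * (Y - y_bar)).sum() / ((X - x_bar) ** 2).sum()  # 12.5/10 = 1.25
b0 = y_bar - b1 * x_bar                                            # 7 - 1.25*4 = 2.0

print(f"Y-hat = {b0:.2f} + {b1:.2f} X")        # Y-hat = 2.00 + 1.25 X
```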

  17. Using the Regression Line If the payroll estimate for next year were $600 million, what is the predicted value of Triple A's sales? Ŷ = 2 + 1.25X, so Sales = 2 + 1.25(payroll). With payroll measured in hundreds of millions of dollars, X = 6, and next year's predicted sales = 2 + 1.25(6) = 9.5.

  18. Measuring the Fit of the Regression Model To understand how well the model predicts the response variable, we evaluate: • the variability in the Y variable: SST (total variability about the mean), SSE (variability about the regression line), and SSR (variability that is explained); • the coefficient of determination, r² (proportion of explained variation); • the correlation coefficient, r (strength of the relationship between the Y and X variables).

  19. Measuring the Fit of the Regression Model Errors (deviations) may be positive or negative, so summing the raw errors would be misleading; we therefore square the terms before summing. • Sum of Squares Total, SST = Σ(Y − Ȳ)², measures the total variability in Y. • Sum of the Squared Error, SSE = Σe² = Σ(Y − Ŷ)², is less than SST because the regression line reduces the variability. • Sum of Squares due to Regression, SSR = Σ(Ŷ − Ȳ)², indicates how much of the total variability is explained by the regression model.

  20. Measuring the Fit of the Regression Model (continued) For Triple A Construction: SST = Σ(Y − Ȳ)² = 22.5, SSE = Σe² = Σ(Y − Ŷ)² = 6.875, SSR = Σ(Ŷ − Ȳ)² = 15.625. Note: SST = SSR + SSE; the explained variability (SSR) and the unexplained variability (SSE) sum to the total.
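
Continuing the sketch from slide 16 (same X, Y, b0, b1), the three sums of squares fall out directly and reproduce the slide's values:

```python
Y_hat = b0 + b1 * X                       # predicted values

SST = ((Y - y_bar) ** 2).sum()            # 22.5   total variability
SSE = ((Y - Y_hat) ** 2).sum()            # 6.875  unexplained variability
SSR = ((Y_hat - y_bar) ** 2).sum()        # 15.625 explained variability

assert abs(SST - (SSR + SSE)) < 1e-9      # SST = SSR + SSE
```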

  21. Coefficient of Determination The coefficient of determination (r²) is the proportion of the variability in Y that is explained by the regression equation: r² = SSR/SST = 1 − SSE/SST. For Triple A Construction: r² = 15.625/22.5 = 0.6944, so about 69% of the variability in sales is explained by the regression on payroll. Note: 0 ≤ r² ≤ 1.

  22. Correlation Coefficient The correlation coefficient (r) measures the strength and direction of the linear relationship; it is the square root of r², carrying the sign of the slope. For Triple A Construction, r = √0.6944 = 0.8333. Note: −1 ≤ r ≤ 1.
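
Using the sums of squares from the running sketch, r² and r follow in two lines; the cross-check against NumPy's built-in correlation is a sanity test, not part of the slide:

```python
r_squared = SSR / SST                     # 0.6944
r = np.sign(b1) * np.sqrt(r_squared)      # 0.8333, sign taken from the slope

assert abs(r - np.corrcoef(X, Y)[0, 1]) < 1e-9   # matches NumPy's correlation
```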

  23. Correlation Coefficient (continued) (Scatter diagrams illustrating different values of r, from strong negative through zero to strong positive correlation.)

  24. Computer Software for Regression In Excel, use Tools / Data Analysis; the Data Analysis ToolPak is an 'add-in' option that must be enabled before regression is available.

  25. Computer Software for Regression (continued) After selecting the Regression option, a dialog appears in which you specify: the X and Y input ranges; whether labels are included in the ranges; the output area; scatter diagram (line-fit plot) output; and residual (error) output.

  26. Computer Software for Regression (continued) A scatter diagram is produced along with the summary output. In the output, 'Multiple R' is the correlation coefficient (r); an r² close to 1 indicates a strong fit; and the regression coefficients give the estimated intercept and slope.

  27. Assumptions of the Regression Model We make certain assumptions about the errors in a regression model that allow for statistical testing. Assumptions: • Errors are independent. • Errors are normally distributed. • Errors have a mean of zero. • Errors have a constant variance.

  28. Residual Analysis Residual analyses (plots) will highlight glaring violations of the assumptions. A healthy residual plot shows points scattered randomly above and below zero across the range of X, with no visible pattern.

  29. Residual Analysis: Nonlinear Violation A curved (nonlinear) pattern in the residual plot signals a violation of the linearity assumption.

  30. Residual Analysis: Nonconstant Error A residual plot whose spread widens (or narrows) as X increases signals nonconstant error variance, a violation of the constant-variance assumption.
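
A residual plot like the ones described above can be drawn in a few lines, continuing the running sketch (matplotlib assumed available):

```python
import matplotlib.pyplot as plt

residuals = Y - Y_hat                     # errors from the fitted line

plt.scatter(X, residuals)
plt.axhline(0, linewidth=1)               # zero reference line
plt.xlabel("X (payroll)")
plt.ylabel("Residual")
plt.title("Residual plot: look for curves or funnels")
plt.show()
```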

  31. Estimating the Variance The mean squared error (MSE) is the estimate of the error variance of the regression equation: s² = MSE = SSE/(n − k − 1), where n = number of observations in the sample and k = number of independent variables. For Triple A Construction, s² = 6.875/(6 − 1 − 1) = 1.7188.

  32. Estimating the Variance (continued) The standard deviation of the regression, s = √MSE, is used in many statistical tests about the regression model. For Triple A Construction, s = √1.7188 = 1.31.
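
In the running sketch, with n = 6 observations and k = 1 independent variable:

```python
n, k = len(X), 1
MSE = SSE / (n - k - 1)                   # 6.875 / 4 = 1.7188
s = MSE ** 0.5                            # 1.31, standard deviation of the regression
```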

  33. Testing the Model for Significance: F-test An F-test is used to statistically test the null hypothesis that there is no linear relationship between the X and Y variables (i.e., β₁ = 0). If the significance level for the F-test is low, we reject H₀ and conclude there is a linear relationship. F = MSR/MSE, where MSR = SSR/k.

  34. Testing the Model for Significance: F-test For Triple A Construction: MSR = 15.625/1 = 15.625 and F = 15.625/1.7188 = 9.0909. The significance level for F = 9.0909 is 0.0394, so we reject H₀ and conclude that a linear relationship exists between sales and payroll.
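
A sketch of the same test with SciPy (assuming SciPy is installed); the degrees of freedom are k = 1 and n − k − 1 = 4:

```python
from scipy import stats

MSR = SSR / k                             # 15.625
F = MSR / MSE                             # 9.0909
p_value = stats.f.sf(F, k, n - k - 1)     # P(F > 9.0909) = 0.0394, reject H0 at 0.05
```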

  35. Testing the Model for Significance: r² r² is the best measure of the strength of the prediction relationship between the X and Y variables. • Values closer to 1 indicate a strong prediction relationship. • Good regression models have a significant F-test and a high r² value.

  36. Testing the Model for Significance: Coefficient Hypotheses Statistical tests of significance can be performed on the coefficients. The null hypothesis is that the coefficient of X (i.e., the slope of the line) is 0. • P-values are the observed significance levels and can be used to test the null hypothesis. • For a simple linear regression, the test of the regression coefficient gives the same information as the F-test.

  37. ANOVA Tables When a regression model is developed, an ANOVA table is computed by most statistical software. The general form of the ANOVA table is helpful for understanding the interrelatedness of the error terms.

  38. Multiple Regression Multiple regression models are similar to simple linear regression models except that they include more than one X variable: Ŷ = b₀ + b₁X₁ + b₂X₂ + … + bₙXₙ, where each bᵢ is the slope for independent variable Xᵢ.
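
A minimal sketch of fitting a two-variable model with NumPy's least-squares solver; the data arrays here are hypothetical, for illustration only, not from the text:

```python
import numpy as np

# Hypothetical predictors and response.
X1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
X2 = np.array([2.0, 1.0, 4.0, 3.0, 5.0])
Yv = np.array([3.1, 3.9, 7.2, 7.8, 11.0])

A = np.column_stack([np.ones_like(X1), X1, X2])   # design matrix with intercept column
coef, *_ = np.linalg.lstsq(A, Yv, rcond=None)     # coef = [b0, b1, b2]
```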

  39. Multiple Regression: Wilson Realty Example Wilson Realty wants to develop a model to determine the suggested listing price for a house based on size and age.

  40. Wilson Realty Example (continued) The regression output shows: • 67% of the variation in sales price is explained by size and age. • H₀ (no linear relationship) is rejected. • Ŷ = 60815.45 + 21.91(size) − 1449.34(age). • H₀: β₁ = 0 is rejected; H₀: β₂ = 0 is rejected.

  41. Wilson Realty Example (continued) Wilson Realty has found a linear relationship between price and size and age. The coefficient for size indicates that each additional square foot increases the value by $21.91, while each additional year of age decreases the value by $1,449.34. Ŷ = 60815.45 + 21.91(size) − 1449.34(age). For a 1,900-square-foot house that is 10 years old: Ŷ = 60815.45 + 21.91(1900) − 1449.34(10) = $87,951.
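
Plugging the slide's coefficients into the fitted equation confirms the prediction:

```python
size, age = 1900, 10
price = 60815.45 + 21.91 * size - 1449.34 * age   # 87951.05, i.e. about $87,951
```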

  42. Binary Variables Binary (or dummy) variables are special variables that are created for qualitative data. • A dummy variable is assigned a value of 1 if a particular condition is met and a value of 0 otherwise. • The number of dummy variables must equal one less than the number of categories of the qualitative variable.

  43. Wilson Realty Example: Binary Variables Return to Wilson Realty, and let's evaluate how to use property condition in the regression model. There are three categories: Mint, Excellent, and Good. X₃ = 1 if the house is in excellent condition, 0 otherwise; X₄ = 1 if the house is in mint condition, 0 otherwise. Note: if both X₃ = 0 and X₄ = 0, the house is in good condition.

  44. Wilson Realty: Binary Variables (continued) What can you say about the new model? Ŷ = 48329.23 + 28.21(size) − 1981.41(age) + 23684.62(if mint) + 16581.32(if excellent).
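
A small sketch of how the dummy coding drives a prediction; the function wraps the slide's model, and the example house (1,900 sq ft, 10 years old, excellent condition) is hypothetical:

```python
def predict_price(size, age, condition):
    """Wilson Realty model from slide 44; condition is 'good', 'excellent', or 'mint'."""
    x3 = 1 if condition == "excellent" else 0     # dummy: excellent condition
    x4 = 1 if condition == "mint" else 0          # dummy: mint condition
    return (48329.23 + 28.21 * size - 1981.41 * age
            + 23684.62 * x4 + 16581.32 * x3)

print(predict_price(1900, 10, "excellent"))       # about 98695.45
```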

  45. Model Building • As more variables are added to the model, r² usually increases. • The adjusted r² takes into account the number of independent variables in the model. The best model is a statistically significant model with a high r² and few variables. Note: when variables are added to the model, r² can never decrease; the adjusted r², however, may decrease.
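
The adjustment is a standard formula (a general statistical fact, not taken from the slides):

```python
def adjusted_r2(r2, n, k):
    """Penalize r-squared for using k predictors with n observations."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

print(adjusted_r2(0.6944, 6, 1))                  # about 0.618 for the Triple A model
```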

  46. Model Building (continued) Collinearity (or multicollinearity) exists when an independent variable is correlated with one or more of the other independent variables. • Collinearity and multicollinearity create problems in the coefficients. • The overall model prediction may still be good; however, interpretation of the individual variable coefficients is questionable.

  47. Nonlinear Regression Nonlinear relationships may exist between variables, thereby requiring a transformation of one or more variables to achieve linearity. • Transformations may be used to turn a nonlinear model into a linear model.

  48. Automobile Example: Nonlinear Regression Engineers at Colonel Motors want to use regression analysis to improve fuel efficiency. They are studying the impact of weight on miles per gallon (MPG).

  49. Automobile Example (continued) Perhaps a nonlinear relationship exists? (Scatter plot comparing a straight linear regression line with a curved nonlinear regression line fit to the same data.)

  50. Automobile Example (continued) • Linear regression model: MPG = 47.8 − 8.2(weight), F significance = 0.0003, r² = 0.7446. • Nonlinear (transformed-variable) regression model: MPG = 79.8 − 30.2(weight) + 3.4(weight)², F significance = 0.0002, r² = 0.8478. Which model is best? What are the difficulties with interpreting the individual coefficients?
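
The transformation is just an added column in the design matrix. A minimal sketch with hypothetical (weight, MPG) data, since the slide's raw data are not shown:

```python
import numpy as np

# Hypothetical data for illustration (weight in 1,000s of pounds).
weight = np.array([2.0, 2.5, 3.0, 3.5, 4.0, 4.5])
mpg = np.array([35.0, 30.0, 24.0, 21.0, 19.0, 18.0])

# Add weight^2 as a new variable, then fit an ordinary linear regression.
A = np.column_stack([np.ones_like(weight), weight, weight ** 2])
b, *_ = np.linalg.lstsq(A, mpg, rcond=None)       # b = [b0, b1, b2]
print(f"MPG = {b[0]:.1f} + {b[1]:.1f}*weight + {b[2]:.1f}*weight^2")
```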
