
Slides 13c: Causal Models and Regression Analysis

MGS3100 Chapter 13: Forecasting


Presentation Transcript


  1. Slides 13c: Causal Models and Regression Analysis MGS3100 Chapter 13 Forecasting

  2. In a causal forecasting model, the forecast for the quantity of interest “rides piggyback” on another quantity or set of quantities. In other words, our knowledge of the value of one variable (or perhaps several variables) enables us to forecast the value of another variable. In this model, let y denote the true value of some variable of interest and ŷ denote a predicted or forecast value for that variable.

  3. Then, in a causal model, ŷ = f(x1, x2, …, xn), where f is a forecasting rule, or function, and x1, x2, …, xn is a set of variables. In this representation, the x variables are often called independent variables, whereas y is the dependent or response variable. We either know the independent variables in advance or can forecast them more easily than y. The independent variables are then used in the forecasting model to forecast the dependent variable.

  4. Companies often find by looking at past performance that their monthly sales are directly related to the monthly GDP, and thus figure that a good forecast could be made using next month’s GDP figure. The only problem is that this quantity is not known, or it may itself be only a forecast and thus not a truly independent variable. Using a causal forecasting model requires two conditions: 1. There must be a relationship between values of the independent and dependent variables such that the former provides information about the latter.

  5. 2. The values for the independent variables must be known and available to the forecaster at the time the forecast is made. Simply because there is a mathematical relationship does not guarantee that there is really cause and effect. One commonly used approach in creating a causal forecasting model is called curve fitting. CURVE FITTING: AN OIL COMPANY EXPANSION Consider an oil company that is planning to expand its network of modern self-service gasoline stations.

  6. The company plans to use traffic flow (measured in the average number of cars per hour) to forecast sales (measured in average dollar sales per hour). The firm has had five stations in operation for more than a year and has used historical data to calculate the following averages: [table of average traffic flow and sales for the five stations, shown on the slide]

  7. The averages are plotted in a scatter diagram.

  8. Now, these data will be used to construct a function that will be used to forecast sales at any proposed location by measuring the traffic flow at that location and plugging its value into the constructed function. Least Squares Fits The method of least squares is a formal procedure for curve fitting. It is a two-step process. 1. Select a specific functional form (e.g., a straight line or quadratic curve). 2. Within the set of functions specified in step 1, choose the specific function that minimizes the sum of the squared deviations between the data points and the function values.

  9. To demonstrate the process, consider the sales-traffic flow example. 1. Assume a straight line; that is, functions of the form y = a + bx. 2. Draw the line in the scatter diagram and indicate the deviations between observed points and the function as di . For example, d1 = y1 – [a +bx1] = 220 – [a + 150b] where y1 = actual sales/hr at location 1 x1 = actual traffic flow at location 1 a = y-axis intercept for the function b = slope for the function

  10. [Scatter diagram showing the line y = a + bx and the deviations d1 through d5 between the observed points and the line.] The value d1² is one measure of how close the value of the function [a + bx1] is to the observed value y1; that is, it indicates how well the function fits at this one point.

  11. One measure of how well the function fits overall is the sum of the squared deviations, Σ di² for i = 1 to 5. Consider a general model with n (as opposed to five) observations. Since each di = yi – (a + bxi), the sum of the squared deviations can be written as Σ (yi – [a + bxi])² for i = 1 to n. Using the method of least squares, select a and b so as to minimize this sum.

  12. Now, take the partial derivative of the sum with respect to a and set the resulting expression equal to zero: Σ –2(yi – [a + bxi]) = 0, summing from i = 1 to n. A second equation is derived by following the same procedure with b: Σ –2xi(yi – [a + bxi]) = 0. Recall that the values for xi and yi are the observations, and our goal is to find the values of a and b that satisfy these two equations.

  13. The solution is:

  b = [n Σ xiyi – (Σ xi)(Σ yi)] / [n Σ xi² – (Σ xi)²]

  a = (1/n) Σ yi – b (1/n) Σ xi

  with all sums running from i = 1 to n. The next step is to determine the values for Σ xi, Σ yi, Σ xiyi, and Σ xi². Note that these quantities depend only on observed data and can be found with simple arithmetic operations or automatically using Excel’s predefined functions.
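The closed-form solution above can be sketched directly in Python. The five station observations are not reproduced in the transcript, so the data below is purely illustrative (points lying exactly on y = 1 + 2x):

```python
def least_squares(xs, ys):
    """Fit y = a + b*x by minimizing the sum of squared deviations."""
    n = len(xs)
    sum_x = sum(xs)
    sum_y = sum(ys)
    sum_xy = sum(x * y for x, y in zip(xs, ys))
    sum_x2 = sum(x * x for x in xs)
    # Normal equations solved in closed form (slide 13)
    b = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)
    a = sum_y / n - b * (sum_x / n)
    return a, b

# Illustrative data, not the oil-company observations
xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]       # exactly y = 1 + 2x
a, b = least_squares(xs, ys)
print(a, b)  # → 1.0 2.0
```

Because the data lies exactly on a line, the fitted intercept and slope recover it exactly; with noisy data the same formulas give the best-fitting line in the least-squares sense.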

  14. Using Excel, click on Tools – Data Analysis … In the resulting dialog, choose Regression.

  15. In the Regression dialog, enter the Y-range and X-range. Choose to place the output in a new worksheet called Results Select Residual Plots and Normal Probability Plots to be created along with the output.

  16. Click OK to produce the following results: Note that a (Intercept) and b (X Variable 1) are reported as 57.104 and 0.92997, respectively.

  17. To add the resulting least squares line, first click on the worksheet Chart 1 which contains the original scatter plot. Next, click on the data series so that they are highlighted and then choose Add Trendline … from the Chart pull-down menu.

  18. Choose Linear Trend in the resulting dialog and click OK.

  19. A linear trend is fit to the data:

  20. One of the other summary output values given in Excel is R Square = 69.4%. This is a “goodness of fit” measure representing the R² statistic discussed in introductory statistics classes. R² ranges in value from 0 to 1 and gives an indication of how much of the total variation in Y from its mean is explained by the new trend line. In fact, there are three different sums of squares: TSS (Total Sum of Squares), ESS (Error Sum of Squares), and RSS (Regression Sum of Squares).

  21. They are defined as follows (sums from i = 1 to n):

  TSS = Σ (Yi – Ȳ)²
  ESS = Σ (Yi – Ŷi)²
  RSS = Σ (Ŷi – Ȳ)²

  The basic relationship between them is TSS = ESS + RSS. Essentially, the ESS is the amount of variation that can’t be explained by the regression. The RSS is effectively the amount of the original, total variation (TSS) that could be removed using the regression line.

  22. R² is defined as R² = RSS / TSS. If the regression line fits perfectly, then ESS = 0 and RSS = TSS, resulting in R² = 1. In this example, R² = .694, which means that approximately 70% of the variation in the Y values is explained by the one explanatory variable (X), cars per hour.
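The decomposition TSS = ESS + RSS and the definition R² = RSS/TSS can be verified numerically. The data below is illustrative (not the station data); the fitted values come from the least-squares formulas of slide 13:

```python
xs = [1, 2, 3, 4, 5]                  # illustrative data, not the station data
ys = [2.0, 4.0, 5.0, 4.0, 5.0]

# Least-squares fit (closed form from slide 13)
n = len(xs)
b = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) \
    / (n * sum(x * x for x in xs) - sum(xs) ** 2)
a = sum(ys) / n - b * sum(xs) / n
y_hats = [a + b * x for x in xs]

y_bar = sum(ys) / n
tss = sum((y - y_bar) ** 2 for y in ys)                 # total variation
ess = sum((y - yh) ** 2 for y, yh in zip(ys, y_hats))   # unexplained variation
rss = sum((yh - y_bar) ** 2 for yh in y_hats)           # explained variation
r2 = rss / tss

print(round(tss, 6), round(ess, 6), round(rss, 6), round(r2, 3))
```

Note the identity TSS = ESS + RSS holds exactly only when the fitted values come from a least-squares line with an intercept, as here.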

  23. Now, returning to the original question: should we build a station at Buffalo Grove, where traffic is 183 cars/hour? The best guess at the corresponding sales volume is found by placing this X value into the new regression equation ŷ = a + b·x: Sales/hour = 57.104 + 0.92997 × (183 cars/hour) = $227.29. However, it would be nice to be able to state a 95% confidence interval around this best guess.
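The point forecast is simple arithmetic, using the intercept and slope from the regression output:

```python
a, b = 57.104, 0.92997          # intercept and slope from the regression output
traffic = 183                   # cars/hour at the proposed Buffalo Grove site
sales_per_hour = a + b * traffic
print(round(sales_per_hour, 2))  # → 227.29 (forecast dollar sales per hour)
```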

  24. We can get the information to do this from Excel’s Summary Output. Excel reports that the standard error (Se) is 44.18. This quantity represents the amount of scatter in the actual data around the regression line. The formula for Se is:

  Se = √[ Σ (Yi – Ŷi)² / (n – k – 1) ]   (sum from i = 1 to n)

  where n is the number of data points (e.g., 5) and k is the number of independent variables (e.g., 1).
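The Se formula can be sketched with illustrative actual and fitted values (not the station data, which is not reproduced in the transcript):

```python
import math

# Illustrative actual vs. fitted values, not the station data
ys     = [2.0, 4.0, 5.0, 4.0, 5.0]   # actual observations
y_hats = [2.8, 3.4, 4.0, 4.6, 5.2]   # fitted values from a least-squares line
n, k = len(ys), 1                    # 5 observations, 1 independent variable

# Se = sqrt( sum of squared residuals / (n - k - 1) )
se = math.sqrt(sum((y - yh) ** 2 for y, yh in zip(ys, y_hats)) / (n - k - 1))
print(round(se, 3))  # → 0.894
```

The denominator n – k – 1 (the degrees of freedom) is what distinguishes Se from a plain root-mean-square of the residuals.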

  25. This equation is equivalent to Se = √[ ESS / (n – k – 1) ]. Once we know Se, and based on the normal distribution, we can state that: • We have 68% confidence that the actual value of sales/hour is within ±1 Se of the predicted value ($227.29). • We have 95% confidence that the actual value of sales/hour is within ±2 Se of the predicted value ($227.29). The 95% confidence interval is: [227.29 – 2(44.18); 227.29 + 2(44.18)] = [$138.93; $315.65].
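The interval arithmetic, using the forecast and standard error reported above:

```python
y_hat, se = 227.29, 44.18    # point forecast and standard error from the output
low, high = y_hat - 2 * se, y_hat + 2 * se   # approximate 95% interval
print(round(low, 2), round(high, 2))  # → 138.93 315.65
```

Note the ±2 Se rule is a normal-distribution approximation; with only 5 observations a t-based multiplier would give a wider interval.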

  26. Another value of interest in the Summary report is the t-statistic for the X variable and its associated values. The t-statistic is 2.61 and the P-value is 0.0798. A P-value less than 0.05 indicates at least 95% confidence that the slope parameter (b) is statistically significantly different from 0 (zero). A slope of 0 results in a flat trend line and indicates no relationship between Y and X. The 95% confidence limits for b are [-0.205; 2.064]; thus, we can’t exclude the possibility that the true value of b might be 0.

  27. Also given in the Summary report is the F-significance. Since there is only one independent variable, the F-significance is identical to the P-value for the t-statistic. In the case of more than one X variable, the F-significance tests the hypothesis that all the X variable parameters as a group are statistically significantly different from zero.

  28. Concerning multiple regression models: as you add X variables, the R² statistic will always increase, meaning the RSS has increased. In this case, the Adjusted R² statistic is a more reliable indicator of the true goodness of fit because it compensates for the reduction in the ESS caused merely by adding more independent variables. Thus, it may report a decreased Adjusted R² value even though R² has increased, unless the improvement in RSS more than compensates for the addition of the new independent variables.
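The slides do not give the Adjusted R² formula, but the standard one (which Excel reports) penalizes R² by the number of independent variables:

```python
def adjusted_r_squared(r2, n, k):
    """Adjusted R^2 = 1 - (1 - R^2)(n - 1)/(n - k - 1); penalizes extra variables k."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# With the example's R^2 = 0.694, n = 5 observations, k = 1 variable:
adj = adjusted_r_squared(0.694, 5, 1)
print(round(adj, 3))  # → 0.592
```

Adding a second variable (k = 2) would shrink the denominator n – k – 1, so Adjusted R² falls unless R² improves enough to offset the penalty.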

  29. WHICH CURVE TO FIT? If, for example, a quadratic function fits better than a linear function, why not choose a more general form, thereby getting an even better fit? In practice, functions of the form (with only a single independent variable for illustrative purposes) are often suggested: y = a0 + a1x + a2x2 + … + anxn Such a function is called a polynomial of degree n, and it represents a broad and flexible class of functions. n = 2 quadratic n = 3 cubic n = 4 quartic …

  30. One must proceed with caution when fitting data with a polynomial function. For example, it is possible to find a (k – 1)-degree polynomial that will perfectly fit k data points. To be more specific, suppose we have seven historical observations, denoted (xi , yi), i = 1, 2, …, 7 It is possible to find a sixth-degree polynomial y = a0 + a1x + a2x2 + … + a6x6 that exactly passes through each of these seven data points.
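The exact-fit claim can be demonstrated with Lagrange interpolation, which constructs the unique degree-(k – 1) polynomial through k points. The seven observations below are made up for illustration:

```python
def lagrange_interpolate(points, x):
    """Evaluate the unique degree-(k-1) polynomial through k points at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= (x - xj) / (xi - xj)   # basis polynomial factor
        total += term
    return total

# Seven made-up observations; the degree-6 polynomial hits all of them exactly,
# so the sum of squared deviations is zero...
points = [(1, 3.0), (2, 5.5), (3, 4.2), (4, 6.1), (5, 5.8), (6, 7.0), (7, 6.4)]
fit_errors = [abs(lagrange_interpolate(points, xi) - yi) for xi, yi in points]
print(max(fit_errors) < 1e-9)  # → True

# ...but that says nothing about forecasting value: evaluated outside the
# data range the polynomial swings wildly, illustrating the extrapolation hazard.
print(lagrange_interpolate(points, 9))
```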

  31. A perfect fit gives zero for the sum of squared deviations. However, this is deceptive, for it does not imply much about the predictive value of the model for use in future forecasting.

  32. Despite the perfect fit of the polynomial function, the forecast is very inaccurate. The linear fit might provide more realistic forecasts. Also, note that the polynomial fit has hazardous extrapolation properties (i.e., the polynomial “blows up” at its extremes).

  33. Reliability and Validity • Does the model make intuitive sense? Is the model easy to understand and interpret? • Are the coefficients statistically significant (p-values less than .05)? • Are the signs associated with the coefficients as expected? • Does the model predict values that are reasonably close to the actual values? • Is the model sufficiently sound (high R2, low standard error, etc.)?

  34. Correlation Coefficient and Coefficient of Determination • Correlation coefficient = r. • Coefficient of determination = r². • In the formulas (shown on the slide): Yi = dependent variable, Xi = independent variable, n = number of observations.
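The correlation-coefficient formulas on the slide survive only as an image, but the standard Pearson computation can be sketched as follows (illustrative data, not the station data). For simple regression, r² equals the R² reported by the regression output:

```python
import math

def correlation(xs, ys):
    """Pearson correlation coefficient r; r**2 is the coefficient of determination."""
    n = len(xs)
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    cov = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - x_bar) ** 2 for x in xs))
    sy = math.sqrt(sum((y - y_bar) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative data, not the station data
xs = [1, 2, 3, 4, 5]
ys = [2.0, 4.0, 5.0, 4.0, 5.0]
r = correlation(xs, ys)
print(round(r ** 2, 3))  # → 0.6
```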

  35. Correlation Coefficient and Coefficient of Determination

  36. Summary: Causal Forecasting Models • The goal of a causal forecasting model is to develop the best statistical relationship between a dependent variable and one or more independent variables. • The most common model approach used in practice is regression analysis. Only linear regression models are examined in this course. • In causal forecasting models, when one tries to predict a dependent variable using a single independent variable, it is called a simple regression model. • When one uses more than one independent variable to forecast the dependent variable, it is called a multiple regression model.
