
  1. BUAD306 Chapter 3 – Forecasting

  2. Everyday Forecasting • Weather • Time • Traffic • Other examples???

  3. What is Forecasting? • Forecast: A statement about the future • Used to help managers: • Plan the system • Plan the use of the system

  4. Use of Forecasts

  5. Forecasting Basics • Assumes causal system past ==> future • Forecasts rarely perfect because of randomness • Forecasts more accurate for groups vs. individuals • Forecast accuracy decreases as time horizon increases

  6. Elements of a Good Forecast • Timely – feasible horizon • Reliable – works consistently • Accurate – degree should be stated • Expressed in meaningful units • Written – for consistency of usage • Easy to Use - KISS

  7. Approaches to Forecasting • Judgmental – subjective inputs • Time Series – historical data • Associative – explanatory variables

  8. Judgmental Forecasts • Executive Opinions – Accuracy?? • Outside Opinions – industry experts • Sales Force Feedback – Bias??? • Consumer Surveys – Guarantee???

  9. What would you rather evaluate?

  10. Time Series Forecasts • Based on observations over a period of time • Identifies: • Trend – long-term movement in data • Seasonality – short-term variations • Cycles – wavelike variations • Irregular Variations – unusual events • Random Variations – chance/residual

  11. Forecast Variations [figure: demand over time, decomposed into trend, cycles, seasonal variations, irregular variation, and random variation]

  12. Naïve Forecasting • Simple to use • Minimal to no cost • Data analysis is almost nonexistent • Easily understandable • Cannot provide high accuracy • Can be a standard for accuracy RULE: "Whatever happened 'yesterday' is going to happen tomorrow, as long as I apply LOGIC."
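The naïve rule above reduces to one line of logic: the forecast for the next period is simply the most recent actual value. A minimal sketch (the demand numbers are hypothetical, not from the slides):

```python
def naive_forecast(history):
    """Naive rule: the forecast for the next period is
    the most recent actual observation."""
    if not history:
        raise ValueError("need at least one observation")
    return history[-1]

demand = [42, 40, 43, 41]          # hypothetical demand for periods 1-4
print(naive_forecast(demand))      # forecast for period 5 -> 41
```

Because it requires no parameters and no history beyond one period, the naïve forecast is often used as the accuracy benchmark that fancier techniques must beat.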

  13. HW Problem 1

  14. HW Problem 1 Muffins Buns Cupcakes

  15. Techniques for Averaging • Moving average • Weighted moving average • Exponential smoothing

  16. Simple Moving Average MAn = (Σ Ai) / n, summing over i = 1 to n Where: i = index that corresponds to periods n = number of periods (data points) Ai = actual value in time period i MA = moving average Ft = forecast for period t

  17. Example 1: Moving Average Four period moving average for period 7: Four period moving average for period 8: Four period moving average for period 9 if actual for 8 = 5025:
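The moving-average formula is just the mean of the n most recent actuals. A small sketch of the mechanics (the transcript does not include Example 1's data series, so the numbers below are hypothetical):

```python
def moving_average(history, n):
    """Simple n-period moving average: the mean of the
    n most recent actual values."""
    if len(history) < n:
        raise ValueError("not enough data for an %d-period average" % n)
    recent = history[-n:]
    return sum(recent) / n

# hypothetical demand for periods 1-6
demand = [4200, 4300, 4900, 5000, 4700, 5100]

# four-period moving average forecast for period 7:
# (4900 + 5000 + 4700 + 5100) / 4 = 4925.0
print(moving_average(demand, 4))
```

Note how the window "moves": once the actual for period 7 is observed, it is appended to the history and the oldest value drops out of the next four-period average.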

  18. Weighted Moving Average • Similar to a moving average, but assigns more weight to the most recent observations. • Total of weights must equal 1.

  19. Example 2: Weighted Moving Average Compute a weighted moving average forecast for period 8 using the following weights: .40, .30, .20 and .10:
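A weighted moving average applies the slide's two rules directly: the weights must sum to 1, and the largest weight goes on the newest observation. A sketch using the weights from Example 2 (.40, .30, .20, .10) with a hypothetical demand series, since the transcript does not include the example's data:

```python
def weighted_moving_average(history, weights):
    """Weighted moving average. Weights are given most-recent-first
    and must sum to 1."""
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    recent = history[-len(weights):]        # oldest ... newest
    # pair the largest (first) weight with the newest observation
    return sum(w * a for w, a in zip(weights, reversed(recent)))

demand = [4200, 4300, 4900, 5000, 4700, 5100]   # hypothetical periods 1-6
f7 = weighted_moving_average(demand, [0.40, 0.30, 0.20, 0.10])
# 0.40*5100 + 0.30*4700 + 0.20*5000 + 0.10*4900 ≈ 4940
print(f7)
```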

  20. HW #2 – Let’s Discuss

  21. Calculating Error • Mathematically: et = At - Ft Let’s discuss examples on board…

  22. Premise - Exponential Smoothing • The most recent observations might have the highest predictive value…. • And since all forecasts have error… • We should give more weight to the error in the more recent time periods when forecasting.

  23. Exponential Smoothing Ft = Ft-1 + α(At-1 - Ft-1) Next forecast = Previous forecast + α (Actual - Previous forecast) α = Smoothing Constant

  24. About α • α = smoothing constant selected by the forecaster • It is applied as a percentage of the forecast error • The closer the value is to zero, the slower the forecast will be to adjust to forecast errors (greater smoothing) • The closer the value is to 1.00, the greater the responsiveness to errors and the less smoothing

  25. Example 3: Exponential Smoothing Ft = Ft-1 + α(At-1 - Ft-1) • Assume a starting forecast of 4030 for period 3. • Given the data at left and α = .10, what would the forecast be for period 8?

  26. Example 3: Exponential Smoothing
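Rolling the smoothing equation forward one period at a time is mechanical: each new actual nudges the forecast by α times the error. A sketch with the slide's starting forecast (4030 for period 3) and α = .10, but hypothetical actuals, since the transcript does not include Example 3's data table:

```python
def exp_smooth(actuals, alpha, f_start):
    """Exponential smoothing: F_t = F_{t-1} + alpha * (A_{t-1} - F_{t-1}),
    rolled forward one period per actual observation."""
    f = f_start
    for a in actuals:
        f = f + alpha * (a - f)
    return f

# hypothetical actuals for periods 3-7; forecast for period 3 is 4030
actuals = [4100, 4000, 4200, 4150, 4300]
f8 = exp_smooth(actuals, alpha=0.10, f_start=4030)
print(round(f8, 2))   # forecast for period 8
```

With α = .10 the forecast moves only 10% of the way toward each observed error, so the series is heavily smoothed; with α near 1 it would chase the most recent actual almost exactly.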

  27. HW #2 – Let’s Discuss

  28. Techniques for Seasonality • Seasonal Variations – regularly repeating movements in series values that can be tied to recurring events Computing Seasonal Relatives: Although we will discuss how relatives are created in class, you do not have to know this for exam – just how to apply the relatives to a forecast.

  29. Using Seasonal Relatives • Allows you to incorporate seasonality or deseasonalize data • Incorporate: Adds seasonality into the trend forecast so that you can see peaks and valleys. • Deseasonalize: Remove seasonal components to get a clearer picture of non-seasonal components (underlying trend)

  30. Example 4: Using Seasonal Relatives A publisher wants to predict quarterly demand for a certain book for periods 11 and 12, which happen to be in the 3rd and 4th quarters of a particular year. The data series consists of both trend and seasonality. The trend portion of demand is projected using the equation: yt = 12,500 + 150.5t. Quarter relatives are Q1 = 1.3, Q2 = .8, Q3 = 1.4, Q4 = .9 Use this information to predict demand for periods 11 and 12. • The trend values: • Applying the relatives:
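Incorporating seasonality is a two-step recipe: project the trend value for the period, then multiply by that period's quarter relative. A sketch of Example 4 using the slide's equation and relatives:

```python
def trend(t):
    """Trend portion from the example: y_t = 12,500 + 150.5 t."""
    return 12500 + 150.5 * t

# quarter relatives from the example
relatives = {1: 1.3, 2: 0.8, 3: 1.4, 4: 0.9}

# period 11 falls in Q3, period 12 in Q4
f11 = trend(11) * relatives[3]   # 14,155.5 * 1.4 ≈ 19,817.7
f12 = trend(12) * relatives[4]   # 14,306.0 * 0.9 ≈ 12,875.4
print(round(f11, 1), round(f12, 1))
```

Deseasonalizing runs the same recipe in reverse: divide an observed value by its quarter relative to expose the underlying trend.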

  31. HW #11 – Let’s Discuss The following equation summarizes the trend portion of quarterly sales of condos over a long cycle. Prepare a forecast for each Q of next year and the first quarter of the following year. Ft = 40 – 6.5t + 2t² Ft = unit sales t = 0 at 1Q of last year

  32. Assoc. Forecasting Technique: Simple Linear Regression • Predictor variables - used to predict values of variable of interest • Regression - technique for fitting a line to a set of points • Least squares line - minimizes sum of squared deviations around the line

  33. Linear Regression Assumptions • Variations around line are random • No patterns are apparent • Deviations around the line should be normally distributed • Predictions are being made only in the range of observed values • Should use minimum of 20 observations for best results

  34. Suppose you analyze the following data...

  35. The regression line has the following equation: yc = a + bx Where: yc = predicted (dependent) variable x = predictor (independent) variable b = slope of the line a = value of yc when x = 0 b = [n(Σxy) - (Σx)(Σy)] / [n(Σx²) - (Σx)²] a = [Σy - b(Σx)] / n

  36. Example 5 - Linear Regression: Suppose that a manufacturing company made batches of a certain product. The accountant for the company wished to determine the cost of a batch of product given the following data: Size of batch: 20 30 40 50 70 80 100 120 150 Cost of batch (in $1000s): 1.4 3.4 4.1 3.8 6.7 6.6 7.8 10.4 11.7 Question… which is the dependent (y) and which is the independent (x) variable?

  37. We are now ready to determine the values of b and a: b = [n(Σxy) - (Σx)(Σy)] / [n(Σx²) - (Σx)²] = [9(5264) - (660)(55.9)] / [9(63600) - (660)²] = (47376 - 36894) / (572400 - 435600) = 10482 / 136800 = .0766 a = [Σy - b(Σx)] / n = [55.9 - .0766(660)] / 9 ≈ .59

  38. Our linear regression equation: yc = a + bx yc = .59 + .0766x What is the cost of a batch of 125 pieces? yc = .59 + .0766(125) ≈ 10.17, i.e., about $10,170
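The least-squares formulas can be checked directly in code. A sketch that recomputes b and a from the Example 5 data and then prices a 125-piece batch:

```python
def least_squares(x, y):
    """Least-squares line: b = [n*Sxy - Sx*Sy] / [n*Sxx - Sx^2],
    a = [Sy - b*Sx] / n."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    b = (n * sxy - sx * sy) / (n * sxx - sx ** 2)
    a = (sy - b * sx) / n
    return a, b

batch_size = [20, 30, 40, 50, 70, 80, 100, 120, 150]     # x (independent)
cost = [1.4, 3.4, 4.1, 3.8, 6.7, 6.6, 7.8, 10.4, 11.7]   # y, in $1000s

a, b = least_squares(batch_size, cost)
print(round(b, 4), round(a, 4))   # slope ≈ 0.0766, intercept ≈ 0.59
print(round(a + b * 125, 2))      # cost of a 125-piece batch ≈ 10.17 ($1000s)
```

Batch size is the predictor (x) because cost depends on how many pieces are made, not the other way around.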

  39. Problem #7 Freight car loadings at a busy port are as follows:

  40. Problem #7 b = [n(Σxy) - (Σx)(Σy)] / [n(Σx²) - (Σx)²] a = [Σy - b(Σx)] / n

  41. Correlation (r) • A measure of the relationship between two variables • Strength • Direction (positive or negative) • Ranges from -1.00 to +1.00 • Correlation close to 0 signifies a weak relationship – other variables may be at play • Correlation close to +1 or -1 signifies a strong relationship

  42. Calculating a Correlation Coefficient r = [n(Σxy) - (Σx)(Σy)] / [√(n(Σx²) - (Σx)²) * √(n(Σy²) - (Σy)²)]

  43. Example 6: Continued r = [n(Σxy) - (Σx)(Σy)] / [√(n(Σx²) - (Σx)²) * √(n(Σy²) - (Σy)²)] r = [9(5264) - (660)(55.9)] / [√(9(63600) - (660)²) * √(9(439.11) - (55.9)²)] = 10482 / (√136800 * √827.18) = 10482 / (369.86 * 28.76) = .985
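The same batch-cost data can verify the r ≈ .985 result. A sketch of the correlation formula above:

```python
from math import sqrt

def correlation(x, y):
    """Pearson correlation coefficient:
    r = [n*Sxy - Sx*Sy] / [sqrt(n*Sxx - Sx^2) * sqrt(n*Syy - Sy^2)]."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    syy = sum(yi * yi for yi in y)
    return (n * sxy - sx * sy) / (
        sqrt(n * sxx - sx ** 2) * sqrt(n * syy - sy ** 2))

batch_size = [20, 30, 40, 50, 70, 80, 100, 120, 150]
cost = [1.4, 3.4, 4.1, 3.8, 6.7, 6.6, 7.8, 10.4, 11.7]
print(round(correlation(batch_size, cost), 3))   # ≈ 0.985
```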

  44. Coefficient of Determination (r²) • How well a regression line “fits” the data • Ranges from 0.00 to 1.00 • The closer to 1.0, the better the fit

  45. Example 6: Continued r = .985 r² = .985² = .97

  46. Conclusion of Example • r = .985 • Positive, close to one • r² = .985² = .97 • The closer to one, the better the fit to the line

  47. Forecast Accuracy • Error - difference between actual value and predicted value • Mean absolute deviation (MAD) • Average absolute error • Mean squared error (MSE) • Average of squared error Why can’t we simply calculate error for each observed period and then select the technique with the lowest error?

  48. Error Example
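The discussion question on the previous slide has a concrete answer: simple averaging of the raw errors lets positive and negative errors cancel, which is exactly why MAD takes absolute values and MSE squares them. A sketch with hypothetical actuals and forecasts:

```python
def mad(actuals, forecasts):
    """Mean absolute deviation: average of |A_t - F_t|."""
    errors = [a - f for a, f in zip(actuals, forecasts)]
    return sum(abs(e) for e in errors) / len(errors)

def mse(actuals, forecasts):
    """Mean squared error: average of (A_t - F_t)^2."""
    errors = [a - f for a, f in zip(actuals, forecasts)]
    return sum(e * e for e in errors) / len(errors)

actuals   = [217, 213, 216, 210, 213]   # hypothetical
forecasts = [215, 216, 215, 214, 211]
# errors are 2, -3, 1, -4, 2: their plain average is -0.4,
# which hides how far off the forecasts really were
print(mad(actuals, forecasts))   # (2+3+1+4+2)/5 = 2.4
print(mse(actuals, forecasts))   # (4+9+1+16+4)/5 = 6.8
```

MSE penalizes large individual misses more heavily than MAD, so the two measures can rank competing forecasting techniques differently.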
