
Validation and Monitoring



Presentation Transcript


  1. Validation and Monitoring • Measures of Accuracy • Combining Forecasts • Managing the Forecasting Process • Monitoring & Control

  2. Technique Adequacy • Are the autocorrelation coefficients of the residuals random? • Calculate MAD, MSE, MAPE, & MPE • Are the residuals normally distributed? • Is the technique simple to use?
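
A minimal sketch of the four accuracy measures named above, in plain Python; the actual and forecast series are hypothetical, not data from the slides:

    # Hypothetical actuals and forecasts for one item over five periods.
    actual   = [112, 98, 105, 120, 95]
    forecast = [105, 102, 100, 110, 100]

    errors = [a - f for a, f in zip(actual, forecast)]
    n = len(errors)

    mad  = sum(abs(e) for e in errors) / n                            # Mean Absolute Deviation
    mse  = sum(e * e for e in errors) / n                             # Mean Squared Error
    mape = sum(abs(e) / a for e, a in zip(errors, actual)) / n * 100  # Mean Absolute Percent Error
    mpe  = sum(e / a for e, a in zip(errors, actual)) / n * 100       # Mean Percent Error (signals bias)

    print(f"MAD={mad:.2f}  MSE={mse:.2f}  MAPE={mape:.2f}%  MPE={mpe:.2f}%")

An MPE far from zero suggests the errors are not centered on zero, which ties back to the residual checks above.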

  3. Accuracy • Forecasts of groups (for example, by region) are more accurate than forecasts of individual items, because item-level errors partially cancel in the aggregate (see the sketch below):

     Item    Actual   Forecast   Error   Percent Error
     1       1000      900        100      10.0%
     2        900     1000       -100     -11.1%
     3        900     1000       -100     -11.1%
     4       1200     1000        200      16.7%
     Total   4000     3900        100       2.5%
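
A small sketch confirming the table's point: the item-level percent errors are large, but they partially cancel when the items are summed into a group forecast. The numbers are taken from the table above.

    actual   = [1000, 900, 900, 1200]
    forecast = [900, 1000, 1000, 1000]

    # Percent error for each item, and for the group total.
    item_pct  = [(a - f) / a * 100 for a, f in zip(actual, forecast)]
    group_pct = (sum(actual) - sum(forecast)) / sum(actual) * 100

    print(["%.1f%%" % p for p in item_pct])   # ['10.0%', '-11.1%', '-11.1%', '16.7%']
    print("group error: %.1f%%" % group_pct)  # group error: 2.5%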

  4. Tracking • Beware of GIGO (garbage in, garbage out) • Large outliers need to be managed, and usually removed • Tracking signals monitor accuracy and identify when intervention is required (see the sketch below) • When a forecast goes out of control, the errors are biased: they run predominantly positive or predominantly negative
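
A minimal sketch of one common tracking signal, cumulative error divided by MAD; the error series is hypothetical, and the +/-4 control limit is a conventional rule of thumb rather than a value from the slides:

    def tracking_signal(errors):
        """Return the tracking signal (cumulative error / MAD) after each period."""
        cum_err, abs_sum, signals = 0.0, 0.0, []
        for i, e in enumerate(errors, start=1):
            cum_err += e
            abs_sum += abs(e)
            mad = abs_sum / i
            signals.append(cum_err / mad if mad else 0.0)
        return signals

    errors = [12, -8, 15, 20, 25, 30, 35]   # hypothetical: errors drift positive (bias)
    for t, ts in enumerate(tracking_signal(errors), start=1):
        flag = "  <-- investigate" if abs(ts) > 4 else ""
        print(f"period {t}: TS = {ts:+.2f}{flag}")

In this made-up series the signal crosses the limit once the errors turn persistently positive, which is exactly the bias pattern described above.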

  5. Combining Forecasts • There is no single best method of forecasting • Research has shown that averages of several good forecasts are, on average, more accurate than the individual forecasts • Combine with simple or weighted averages

  6. Combining Methods • If the individual methods are unbiased, the combined forecast should also be unbiased • Do not use simple averages when there is a large difference between the variances of the errors • With weighted averages, assign higher weights to the forecasts with the lowest error variance; the weights must sum to 1 (see the sketch below) • Weights can be set as the inverse of the squared errors (accuracy and error are inversely related) • Weights can also be estimated by regression analysis, with the models as the independent variables - problematic in that the constant term may not equal 0 and the regression coefficients may not sum to 1
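
A sketch of simple versus inverse-error-variance weighted combining, per the bullets above; the two models' past errors and current forecasts are hypothetical:

    # Past one-step errors for two hypothetical models; B is noisier.
    model_a_errors = [5, -3, 4, -6, 2]
    model_b_errors = [15, -12, 10, -18, 9]

    def variance(errs):
        m = sum(errs) / len(errs)
        return sum((e - m) ** 2 for e in errs) / len(errs)

    # Weights proportional to 1/variance, normalized so they sum to 1.
    inv = [1 / variance(model_a_errors), 1 / variance(model_b_errors)]
    weights = [w / sum(inv) for w in inv]

    fa, fb = 102.0, 118.0   # hypothetical current-period forecasts
    simple   = (fa + fb) / 2
    weighted = weights[0] * fa + weights[1] * fb
    print(f"weights = {weights[0]:.2f}, {weights[1]:.2f}; "
          f"simple = {simple:.1f}, weighted = {weighted:.1f}")

The weighted combination leans heavily toward the lower-variance model, which is the behavior the bullet on error variances calls for.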

  7. Past Research on Combining • The process of combining improves accuracy even when compared to the single best model • Simple non-weighted averages work well • When one model is considerably better, do not combine - drop the inferior model • In some cases, there are no gains from combining

  8. Recent Research on Combining • Arinze et al. (1997) - use a knowledge-based system (a computer program that encodes the knowledge of a human expert) to select a model or a combination of models • Taylor & Bunn (1999) - a hybrid of empirical and theoretical methods that applies quantile regression to empirical fit errors to model forecast error • Fisher & Harvey (1999) - results show that providing information about mean absolute percentage errors, updated each period, enables judges to combine their forecasts in a way that outperforms the simple average • See the Excel output for examples of combining forecasts with averages

  9. Combining Forecasts

  10. Managing the Forecasting Process • Problem definition • Information search • Model formulation • Experimental design • Forecast • Results analysis • Implementation

  11. Managing the Forecasting Process • Use common sense - some examples: • In time series, you must decide how many time periods to include • In regression, which additional variables can be added to raise R-squared while maintaining parsimony? • Complex forecasting techniques that reduce forecast error, such as Box-Jenkins, must be user friendly - or they won't be used • Chi-square tests should be used to determine goodness of fit • Advances in PCs and software have made these methods more widely used - but at what cost? Excel's regression output is polished and makes charts easy to incorporate, but it provides no autocorrelation analysis (a sketch of that check follows below)
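
Since Excel's regression output omits it, here is a minimal sketch of the residual autocorrelation check from slide 2; the residual series is hypothetical. Coefficients near zero at every lag suggest the residuals are random:

    def autocorr(x, k):
        """Lag-k autocorrelation coefficient of the series x."""
        n = len(x)
        m = sum(x) / n
        denom = sum((xi - m) ** 2 for xi in x)
        num = sum((x[t] - m) * (x[t + k] - m) for t in range(n - k))
        return num / denom

    residuals = [2.1, -1.4, 0.6, -0.9, 1.8, -2.2, 0.3, 1.1, -0.7, 0.5]
    for k in range(1, 4):
        print(f"lag {k}: r = {autocorr(residuals, k):+.3f}")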

  12. Managing the Forecasting Process • Why is a forecast needed? • Who will use the forecast? • What level of detail or aggregation is required? • What data are available? • What are the costs of the technique and of collecting the data? • How accurate is the forecast expected to be? • How will the forecast be used in the organization? • How will the forecast be evaluated?

  13. Other Factors to Consider • Selection depends on many factors - the content and context of the forecast, the availability of historical data, the degree of accuracy desired, the time periods to be forecast, the cost/benefit to the company, and the time and resources available • How far into the future are you forecasting? How easy is the technique to understand? How does it compare to other models? • Forecasts are usually incorrect, so be prepared to adjust • Forecasts should be stated as intervals that convey an estimate of their accuracy (see the sketch below) • Forecasts are more accurate for aggregate items • Forecasts are less accurate further into the future
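
A minimal sketch of stating a forecast as an interval, assuming the errors are roughly normal: the point forecast plus or minus z times the standard deviation of past errors. The numbers are hypothetical, and z = 1.96 gives an approximate 95% interval under normality:

    past_errors = [12, -8, 15, -10, 6, -14, 9]   # hypothetical past forecast errors
    n = len(past_errors)
    mean_e = sum(past_errors) / n
    stdev = (sum((e - mean_e) ** 2 for e in past_errors) / (n - 1)) ** 0.5

    point = 500.0   # hypothetical point forecast
    z = 1.96        # ~95% coverage if errors are normal
    print(f"forecast: {point:.0f}, interval: "
          f"[{point - z * stdev:.0f}, {point + z * stdev:.0f}]")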
