Session 7: Evaluating forecasts

Demand Forecasting and Planning in Crisis

30-31 July, Shanghai

Joseph Ogrodowczyk, Ph.D.


Evaluating forecasts

  • Session agenda

    • Background

    • Measures of accuracy

    • Cost of forecast error

    • Activity: Produce forecast error calculations for the forecasts made on Day 1

Evaluating forecasts

Background

How do we measure the accuracy of our forecasts?

How do we know which forecasts were good and which need improvement?

Error can be calculated across products within a given time period or across time periods for a given product

The following examples are for one product over multiple time periods

Two topics of forecast evaluation

How accurate was the forecast?

What was the cost of being wrong?

Evaluating forecasts

Background

Definitions for evaluation:

Forecast period: The time increment in which the forecast is produced (month, week, quarter)

Forecast bucket: The time increment being forecasted (period, month, quarter)

Forecast horizon: The span covering all forecast buckets being forecasted (12 months, 8 quarters)

Forecast lag: The time between when the forecast is produced and the bucket that is forecasted

Forecast snapshot: The specific combination of period, horizon, bucket, and lag associated with a forecast

Evaluating forecasts

Background

Sources of error

Data: Missing or omitted data, mislabeled data

Assumptions: Seasonality is not constant, trend changes are unanticipated, experts have insufficient information

Model: Wrong choice of model type (judgment, statistical); correct model type but misspecified model (missing variables or too many variables); outliers not accounted for

Measures of accuracy

Point error

Average error

Trend of error

Evaluating forecasts

Measures of accuracy

Point error

Error: The difference between the forecasted quantity and the actual demand quantity

Squared error: The square of the error

Percent error: The error relative to the actual demand quantity

Denominator of actuals answers the question: How well did we predict actual demand?

Denominator of forecast answers the question: How much were we wrong relative to what we said we would do?

Absolute error: The absolute value of the error

Absolute percent error: The absolute value of the error relative to the actual demand quantity
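
To make these point measures concrete, here is a minimal Python sketch; the function name and the F/A argument names are illustrative, not from the slides:

    # Point error measures for a single forecast bucket.
    # F = forecasted quantity, A = actual demand quantity (illustrative names).
    def point_errors(F, A):
        error = F - A                            # error: forecast minus actual
        return {
            "error": error,
            "squared_error": error ** 2,
            "pct_error_vs_actual": error / A,    # how well did we predict actual demand?
            "pct_error_vs_forecast": error / F,  # how wrong relative to what we said?
            "absolute_error": abs(error),
            "ape": abs(error) / A,               # absolute percent error
        }

    # Example with the Session 4 figures used later in this session:
    print(point_errors(F=88.9, A=88.2))          # ape ~ 0.0079, i.e. 0.79%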

Evaluating forecasts

Measures of accuracy

Point error

Data from Session 4, Naïve one-step model

One product over multiple time periods

Evaluating forecasts

Measures of accuracy

Average error

Mean square error (MSE): Average of the squared errors

Root mean square error (RMSE): Square root of the MSE

Mean percent error (MPE): Average of the percent errors

Mean absolute error (MAE): Average of the absolute errors

Mean absolute percent error (MAPE): Average of the APE

Weighted mean absolute percent error (WMAPE): Weighted average of the APE
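
A minimal sketch of these averages in Python over paired lists of forecasts and actuals; the function and variable names are illustrative:

    import math

    def average_errors(forecasts, actuals):
        # Average error measures over a series of forecast buckets.
        n = len(actuals)
        errors = [f - a for f, a in zip(forecasts, actuals)]
        mse = sum(e ** 2 for e in errors) / n                        # mean square error
        return {
            "MSE": mse,
            "RMSE": math.sqrt(mse),                                  # root of the MSE
            "MPE": sum(e / a for e, a in zip(errors, actuals)) / n,  # mean percent error
            "MAE": sum(abs(e) for e in errors) / n,                  # mean absolute error
            "MAPE": sum(abs(e) / a for e, a in zip(errors, actuals)) / n,
            # WMAPE: each APE weighted by its actual quantity, which
            # algebraically reduces to sum(|error|) / sum(actuals).
            "WMAPE": sum(abs(e) for e in errors) / sum(actuals),
        }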

Evaluating forecasts

Measures of accuracy

Average error

One product over multiple time periods

Evaluating forecasts

Measures of accuracy

Average error

Weighted mean absolute percent error (WMAPE)

Introduced as a method for overcoming a weakness of the MAPE

All time periods, regardless of the quantity of sales, have equal ability to affect MAPE

A 12% APE for a period in which 10 units were sold has no more importance than a 12% APE for a period in which 100K units were sold

Weight each APE calculation by the respective quantity

WMAPE = Σ(APE × A) / ΣA = Σ|F-A| / ΣA

Evaluating forecasts

Measures of accuracy

Average error

Weighted mean absolute percent error (WMAPE)

In Session 4, we used a naïve one-step model and forecasted January 2008 using December 2007 data.

Forecast was 88.9 units and actual demand was 88.2

Absolute percent error (APE) = |F-A|/A = |88.9-88.2|/88.2 = 0.79%

Multiply 0.79% by 88.2 (actual demand) = 0.7

0.7 is the weighted error value for the January forecast (weighting by actual demand cancels the denominator, leaving the absolute error |F-A|)

WMAPE = Σ(APE × A) / ΣA = Σ|F-A| / ΣA
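
The same arithmetic in code; January uses the figures above, and the later months are made-up values purely for illustration:

    # WMAPE over a series: weight each APE by its actual demand quantity.
    forecasts = [88.9, 92.0, 95.5]   # January figure from Session 4; rest illustrative
    actuals   = [88.2, 90.0, 97.0]

    weighted = sum(abs(f - a) for f, a in zip(forecasts, actuals))  # APE * A = |F - A|
    wmape = weighted / sum(actuals)
    print(f"WMAPE = {wmape:.2%}")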

Evaluating forecasts

Measures of accuracy

Trend of error

Point error calculations and average error calculations are static

They are calculated for a set time interval

Additional information can be obtained by tracking these calculations over time

How does the error change over time?

A consistent direction of error over time indicates forecast bias

Statistical analysis can be performed on the trending data

Mean, standard deviation, coefficient of variation

Evaluating forecasts

Measures of accuracy

Trend of error

Two suggested methods

Track a statistic through time (e.g., a rolling 3-month MAPE; see the sketch below)

Compare time intervals (e.g., Q1 against Q2)

The example uses the 2008 naïve one-step forecast
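
A minimal sketch of the first method, a rolling 3-month MAPE; the window size and names are illustrative:

    def rolling_mape(forecasts, actuals, window=3):
        # MAPE over a sliding window, to show how error trends through time.
        result = []
        for i in range(window, len(actuals) + 1):
            pairs = zip(forecasts[i - window:i], actuals[i - window:i])
            result.append(sum(abs(f - a) / a for f, a in pairs) / window)
        return result

    # A drifting sequence of window MAPEs suggests bias; a flat one
    # suggests the remaining error is random.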

Evaluating forecasts

Cost of forecast error

Accuracy measures do not capture the costs associated with forecast error

Two methods for incorporating costs

Calculate costs based on percent error, differentiating between over- and under-forecasting

Calculate costs based on a loss function dependent on safety stock levels, lost sales, and service levels

Evaluating forecasts

Cost of forecast error

Incorporating costs

Error differentiation

Costs are calculated according to the mathematical sign of the percent error (+ or -)

Costs of under-forecasting can be reflected in lost sales, lost sales of related goods, increased production costs, increased shipment costs, etc.

Shipment and production costs arise from producing and expediting additional units to meet demand

Costs of over-forecasting can be reflected in excess inventory, increased obsolescence, more fire-sale items, etc.
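
A minimal sketch of this sign-based costing, assuming made-up per-unit penalty rates:

    # Different per-unit costs for under- and over-forecasting.
    # Both rates are illustrative assumptions, not workshop figures.
    COST_PER_UNIT_UNDER = 5.00   # lost margin plus expediting, per unit short
    COST_PER_UNIT_OVER  = 2.00   # holding plus obsolescence, per unit excess

    def error_cost(forecast, actual):
        error = forecast - actual
        if error < 0:                          # under-forecast: demand exceeded forecast
            return -error * COST_PER_UNIT_UNDER
        return error * COST_PER_UNIT_OVER      # over-forecast: excess units produced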

Evaluating forecasts

Cost of forecast error

Incorporating costs

Loss function

A cost of forecast error metric (CFE) can be used to quantify the loss associated with both under- and over-forecasting

Loss function based on the mean absolute error (MAE)

The first part of the CFE calculates the unit requirements needed to maintain a specified service level

This is balanced against the volume of lost sales and associated cost of stock-outs

Plotting the cost of error against a range of service levels shows which service level carries the lowest cost of forecast error
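
Catt's exact CFE formula is beyond the scope here, but a generic sketch of the idea, with safety stock sized from the MAE under an assumed normal error distribution and all cost rates invented for illustration, might look like:

    from statistics import NormalDist

    HOLDING_COST  = 2.0    # per unit of safety stock carried (assumed)
    STOCKOUT_COST = 10.0   # per unit of demand lost to a stock-out (assumed)

    def cost_of_error(mae, service_level):
        sigma = 1.25 * mae                        # sigma ~ 1.25 * MAE if errors are normal
        z = NormalDist().inv_cdf(service_level)   # safety factor for this service level
        safety_stock = z * sigma                  # units required to hold the service level
        shortage = (1 - service_level) * sigma    # crude stand-in for expected lost sales
        return safety_stock * HOLDING_COST + shortage * STOCKOUT_COST

    # Scan service levels to find the cheapest, mirroring the plot described above.
    costs = {lvl: cost_of_error(mae=4.2, service_level=lvl)
             for lvl in (0.80, 0.85, 0.90, 0.95, 0.99)}
    best_level = min(costs, key=costs.get)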

Evaluating forecasts

Cost of forecast error

Final notes

Cost of error helps to guide the forecast improvement process

These costs can be company specific and can be explored through understanding the implications of shortages and surpluses of products

The specific mathematical calculations are beyond the scope of this workshop

Applying costs to forecast errors will always require assumptions within the models

Recommendation: write the assumptions down explicitly

Changing assumptions will lead to changes in the costs of the errors and can produce a range of estimated costs

Evaluating forecasts

  • References

    • Jain, Chaman L. and Jack Malehorn. 2005. Practical Guide to Business Forecasting (2nd Ed.). Flushing, New York: Graceway Publishing Inc.

    • Catt, Peter Maurice. 2007. Assessing the cost of forecast error: A practical example. Foresight. Summer: 5-10.
