Worst practices in business forecasting

September 3, 2010

Recently I discovered a truly great online magazine called Analytics, which covers applications of mathematics, operations research, and statistics to business decision making (to register for free click here). This post is an excursion into forecasting, summarizing the article Worst practices in business forecasting by Michael Gilliland and Udo Sglavo, published in the July/August issue of Analytics.

Here is a summary of the eight worst practices described by the authors:

1. Overly complex and politicized forecasting process

Being aware of steps and participants that do not make the forecast considerably better is crucial for an efficient result. The authors recommend a method called Forecast Value Added (FVA) analysis, which identifies waste in the process.
Another common shortcoming is that contributors tend to influence the forecasting process in favor of their own special interests. Biases thus enter through human touch points, even though the results should come from an objective, scientific process.
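
The basic idea of FVA can be illustrated in a few lines: compare the error of each process step against a naive baseline and keep only the steps that actually reduce it. Here is a minimal sketch in Python, with hypothetical demand numbers of my own (the article itself contains no code):

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100 * np.mean(np.abs(actual - forecast) / actual)

actual        = [100, 120,  90, 110]   # observed demand (hypothetical)
naive         = [ 95, 100, 120,  90]   # naive forecast: last period's actual
stat_model    = [102, 115,  95, 105]   # statistical forecast
exec_override = [125, 140, 115, 135]   # after management adjustment

base = mape(actual, naive)
for name, fc in [("statistical model", stat_model),
                 ("executive override", exec_override)]:
    fva = base - mape(actual, fc)      # positive = step adds value over naive
    print(f"{name}: FVA = {fva:+.1f} percentage points")
```

With these numbers the statistical model adds value over the naive baseline, while the executive override makes the forecast worse than doing nothing at all, which is exactly the kind of process waste FVA is meant to expose.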

2. Selecting model solely on “fit to history”

The aim of forecasting is to find a model that predicts future behavior well, not one that merely fits history. A model overfitted to randomness in past behavior is seldom a suitable forecasting model.

3. Assuming model fit = forecast accuracy

In general, a model's fit to history does not indicate the accuracy of future forecasts. Indeed, forecast accuracy is typically worse than the model's fit to history would suggest.
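
Points 2 and 3 are easy to demonstrate with a holdout test. A minimal sketch, using a hypothetical 36-month demand series of my own making: a high-order polynomial fits the history almost perfectly yet forecasts the held-out months badly.

```python
import numpy as np

rng = np.random.default_rng(42)
months = np.arange(36)
x = months / 35.0                           # rescale time to [0, 1] for numerical stability
demand = 100 + 20 * x + rng.normal(0, 10, size=36)

x_train, x_test = x[:30], x[30:]            # fit on 30 months, hold out 6
y_train, y_test = demand[:30], demand[30:]

# Overfit: a degree-10 polynomial chases the noise in the history.
coeffs = np.polyfit(x_train, y_train, deg=10)
mae_fit = np.mean(np.abs(np.polyval(coeffs, x_train) - y_train))
mae_fc  = np.mean(np.abs(np.polyval(coeffs, x_test) - y_test))

print(f"in-sample MAE (fit to history): {mae_fit:.1f}")  # small
print(f"out-of-sample MAE (forecast):   {mae_fc:.1f}")   # much larger
```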

4. Inappropriate accuracy expectations

No matter how hard we try to build an appropriate model, our forecast will always be limited by the nature of the behavior we try to predict. If the behavior is unstructured or unstable, we are often better off using a naive forecasting model and treating it as the baseline.
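
The naive model referred to here is the simple random-walk forecast: the next period's forecast is just the last observed value. A minimal sketch (my own illustration; the demand numbers are hypothetical):

```python
import numpy as np

def naive_forecast(history: np.ndarray, horizon: int) -> np.ndarray:
    """Random-walk naive forecast: repeat the last observation."""
    return np.full(horizon, history[-1])

demand = np.array([120, 95, 130, 110, 105, 125])  # hypothetical history
print(naive_forecast(demand, horizon=3))          # -> [125 125 125]
```

For seasonal demand, a common variant repeats the value from the same period one season ago instead of the last observation.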

5. Inappropriate performance objectives

Think about forecasting the outcome of a fair coin toss over a large number of trials: any strategy will be correct only about 50 percent of the time, so an objective of achieving 60 percent accuracy is simply not attainable.
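
A quick simulation (my own illustration) makes the point: no matter which prediction strategy you commit to, accuracy against a fair coin converges to 50 percent.

```python
import random

random.seed(1)
trials = 100_000
# Strategy: always predict "heads". Any other fixed or random strategy
# against a fair coin converges to the same accuracy.
hits = sum(random.random() < 0.5 for _ in range(trials))
print(f"accuracy: {hits / trials:.3f}")   # ~0.500
```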

6. Perils of industry benchmarks

Industry benchmarks for forecasting performance should be ignored; consider the following questions:

1) Can you trust the benchmark data?
2) Is measurement consistent across respondents (e.g. time frame, metric)?
3) Is the comparison to the benchmark even relevant?
4) How forecastable are the benchmark companies’ demand patterns?

Instead of relying on a benchmark, employ the naive model as the baseline and steadily try to improve the process.

7. Adding variation to demand

Prediction accuracy depends heavily on demand volatility, so the objective should be to reduce that volatility. Unfortunately, in practice most companies add volatility to their products’ demand, which makes efficient forecasting even more difficult.
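
A small simulation (my own illustration) shows the effect: even the best possible constant forecast degrades in direct proportion to demand volatility.

```python
import numpy as np

rng = np.random.default_rng(0)
level, periods = 100, 10_000

for noise_sd in (5, 15, 30):                 # increasing demand volatility
    demand = level + rng.normal(0, noise_sd, periods)
    forecast = np.full(periods, level)       # the ideal constant forecast
    mae = np.mean(np.abs(demand - forecast))
    print(f"volatility sd={noise_sd:2d} -> forecast MAE {mae:5.1f}")
```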

8. New product forecasting

For new products there is no historical data to rely on for prediction. Hence, the forecast may be based on the judgment of the product manager, who is biased toward the product’s success. Another approach is forecasting by analogy (using similar products), but the analyst must be careful not to use data from successful products only. Whichever method is applied, the most crucial point is being aware of the uncertainty of the outcome. A more reliable alternative is a structured analogy approach, which assesses the range of outcomes observed for comparable past products.
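
A minimal sketch of the structured-analogy idea (my own illustration, with hypothetical first-year sales of analog products): instead of a single point estimate, report the distribution of outcomes over the analogs.

```python
import numpy as np

# Hypothetical first-year unit sales of comparable past product launches
analog_sales = np.array([1200, 450, 3100, 800, 950, 2200, 300, 1500])

low, mid, high = np.percentile(analog_sales, [10, 50, 90])
print(f"P10 {low:.0f} / P50 {mid:.0f} / P90 {high:.0f}")
```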