Worst practices in business forecasting

Recently I discovered a truly great online magazine called Analytics, which covers applied topics in mathematics, operations research, and statistics for driving business decisions (registration is free). This post is an excursion into forecasting, summarizing the article "Worst practices in business forecasting" by Michael Gilliland and Udo Sglavo, published in the July/August issue of Analytics.

Here is a summary of the eight worst practices described by the authors:

1. Overly complex and politicized forecasting process

Being aware of steps and participants that do not make the forecast considerably better is crucial to an efficient result. The authors recommend using a method called Forecast Value Added (FVA) analysis, which identifies waste in the process (a small sketch follows below).
Another common shortcoming is that contributors tend to influence the forecasting process in favor of their own special interests. Biases thus enter through the human touch points, even though the results should come from an objective and scientific process.
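FVA analysis works by comparing the error of each step in the process against a naive baseline; steps that do not beat the baseline add no value and are candidates for elimination. Below is a minimal sketch of such a comparison in Python, using made-up demand figures, hypothetical process steps, and MAPE as an assumed error metric:

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100 * np.mean(np.abs(actual - forecast) / actual)

# Hypothetical monthly demand and the forecasts produced at two process steps
actual            = [102, 98, 110, 95, 105, 99]
naive_forecast    = [100, 102, 98, 110, 95, 105]    # random walk: repeat the previous observation
statistical       = [101, 100, 104, 101, 102, 100]  # output of a statistical model
judgment_override = [115, 110, 120, 110, 115, 112]  # after management adjustments

baseline = mape(actual, naive_forecast)
for name, fc in [("statistical model", statistical),
                 ("judgment override", judgment_override)]:
    fva = baseline - mape(actual, fc)  # positive FVA = the step improves on "doing nothing"
    print(f"{name}: MAPE {mape(actual, fc):.1f}%, FVA {fva:+.1f} percentage points vs naive")
```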

2. Selecting model solely on “fit to history”

The aim of forecasting is to find an appropriate model for predicting future behavior, not to fit a model to history. Models that are overfitted to the randomness in past behavior are seldom suitable forecasting models.

3. Assuming model fit = forecast accuracy

In general, a model's fit to history does not indicate how accurate its forecasts will be. Indeed, it is quite common for forecast accuracy to be considerably worse than the model's accuracy in fitting history.
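To illustrate with a toy example (my own, not from the article): a high-order polynomial can fit a noisy history almost perfectly, yet forecast a hold-out period far worse than a trivial benchmark such as the historical mean.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(36)
series = 100 + rng.normal(0, 10, size=t.size)  # stable level plus noise, no real structure

train_t, test_t = t[:24], t[24:]
train_y, test_y = series[:24], series[24:]

# "Fit to history": a degree-6 polynomial that chases the noise
coeffs = np.polyfit(train_t, train_y, deg=6)
fit_in, fit_out = np.polyval(coeffs, train_t), np.polyval(coeffs, test_t)

# Trivial benchmark: the historical mean
mean_fc = np.full_like(test_y, train_y.mean())

mae = lambda a, f: float(np.mean(np.abs(a - f)))
print("polynomial, fit to history (in-sample MAE):", round(mae(train_y, fit_in), 1))
print("polynomial, hold-out forecast MAE:         ", round(mae(test_y, fit_out), 1))
print("historical mean, hold-out forecast MAE:    ", round(mae(test_y, mean_fc), 1))
```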

4. Inappropriate accuracy expectations

No matter how hard we try to build an appropriate model, our forecast will always be limited by the nature of the behavior we are trying to predict. If the behavior is unstructured or unstable, we are often better off using a naive forecasting model and treating it as the baseline.
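The naive models referred to here (and in the comments below), such as the random walk, seasonal random walk, and moving average, are trivial to compute. A minimal sketch, assuming a monthly demand series held in a plain Python list:

```python
import numpy as np

def naive_forecasts(history, horizon=1, season=12, window=3):
    """Three common naive baselines for the next `horizon` periods."""
    history = list(history)
    random_walk = [history[-1]] * horizon                                    # repeat the last observation
    seasonal_rw = [history[-season + (h % season)] for h in range(horizon)]  # same month one year earlier
    moving_avg  = [float(np.mean(history[-window:]))] * horizon              # mean of the last `window` points
    return {"random walk": random_walk,
            "seasonal random walk": seasonal_rw,
            "moving average": moving_avg}

# Hypothetical monthly demand: twelve months of last year plus four months of this year
demand = [120, 95, 101, 130, 150, 160, 155, 140, 125, 110, 170, 220,
          118, 99, 105, 128]
print(naive_forecasts(demand, horizon=3))
```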

5. Inappropriate performance objectives

Think about forecasting the outcome of tossing a fair coin over a large number of trials: we will be correct about 50 percent of the time, so an objective of achieving 60 percent accuracy is simply not achievable.
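A quick simulation (my own illustration, not from the article) makes the point: whatever strategy is used to forecast a fair coin, accuracy converges to roughly 50 percent, so 60 percent can never be a reasonable target.

```python
import random

random.seed(42)
n = 100_000
tosses = [random.choice("HT") for _ in range(n)]

# Two forecasting "strategies": always call heads, or call whatever came up last
always_heads = ["H"] * n
repeat_last  = ["H"] + tosses[:-1]

for name, forecast in [("always heads", always_heads), ("repeat last toss", repeat_last)]:
    hits = sum(f == a for f, a in zip(forecast, tosses))
    print(f"{name}: {100 * hits / n:.1f}% correct")
```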

6. Perils of industry benchmarks

Industry benchmarks for forecasting performance should be ignored, in light of the following questions:

1) Can you trust the benchmark data?
2) Is measurement consistent across the respondents (e.g., time frame, metric)?
3) Is the comparison to the benchmark even relevant?
4) How forecastable are the benchmark companies' demand patterns?

Instead of relying on a benchmark, employ the naive model as the baseline and steadily try to improve the process.

7. Adding variation to demand

Forecast accuracy depends heavily on demand volatility, so the objective should be to reduce this volatility. Unfortunately, in reality most companies add volatility to the demand for their products, which makes efficient forecasting even more difficult.

8. New product forecasting

For new products there is no historical data to rely on for prediction. Hence, the forecast might be based on the judgment of the product manager, who is biased towards the product's success. Another approach is forecasting by analogy (using similar products), but the analyst must be careful not to rely solely on data from successful products. Whichever method is applied, the most crucial point is to be aware of the uncertainty of the outcome. A more reliable alternative is a structured analogy approach, which helps to assess the range of historical outcomes.
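A minimal sketch of forecasting by analogy, under the assumption that we have monthly launch curves for a few comparable past products (the figures below are made up): averaging across all analogies rather than cherry-picking the best seller, and reporting the spread alongside the point forecast, keeps the uncertainty visible.

```python
import numpy as np

# Hypothetical launch curves (units sold per month since launch) of three comparable products
analogies = np.array([
    [500,  800,  950, 900, 850, 800],  # product A
    [300,  450,  500, 480, 470, 460],  # product B
    [700, 1200, 1100, 950, 900, 880],  # product C
])

# Forecast by analogy: the average curve, with the min-max spread as a rough uncertainty range
forecast  = analogies.mean(axis=0)
low, high = analogies.min(axis=0), analogies.max(axis=0)

for month, (f, lo, hi) in enumerate(zip(forecast, low, high), start=1):
    print(f"month {month}: forecast {f:.0f} units (range {lo:.0f}-{hi:.0f})")
```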


3 Responses to Worst practices in business forecasting

  1. Hi Burcu, thank you for the nice write-up on our “worst practices” article in Analytics magazine. I agree that Analytics is a very useful magazine for professionals in this field, and anyone can sign up for a free subscription at http://analyticsmagazine.com/.
    –Mike

  2. Al says:

    Can you explain No. 5?
    The probability of tossing a coin is simple to understand, but how do you justify what is “appropriate” in performance objectives?

  3. Hi Al, we used forecasting Heads or Tails in the tossing of a fair coin — and having an objective to be correct 60% of the time — to provide a simple example of an unreasonable (and unachievable) forecasting performance objective.

    In the general business situation, we suggest using a “naive” forecasting model (such as random walk, seasonal random walk, or moving average) to determine what forecast accuracy you could achieve by essentially “doing nothing” and just using the naive model to generate your forecasts.

    A reasonable performance objective is then to do better (or at least do no worse!) than the naive model. Thus, if the naive model achieves a MAPE of 35% with your data, then a reasonable performance objective for your forecasting process is to achieve a MAPE no worse than 35%.

    Note that we do not know in advance what MAPE a naive model will achieve, so I cannot give you a specific numerical objective for 2012. However, I can state the objective as “achieve MAPE no higher than what the naive model achieves” and then track performance as the 2012 actuals roll in.

    For more discussion of these kinds of topics, see my blog The Business Forecasting Deal at http://blogs.sas.com/forecasting.
