Statistical forecasting predicts the future based on analysis of historical data. It can be done in many ways: simple methods project values from recent history into the future, while more advanced methods model the demand pattern over a longer period of time. Surprisingly, advanced models do not always outperform simpler ones.
The computing power of Honeycomb
Planning Services can determine which statistical models perform best by running a best fit analysis. During a best fit analysis, we evaluate multiple statistical formulas, each with a range of candidate settings, to find the best fit. This is made possible by the computing power of Honeycomb, EyeOn’s state-of-the-art data science platform, which handles millions of records and generates future forecasts at low granularity (days, weeks) effortlessly.
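To make the idea concrete, here is a minimal, hypothetical sketch of a best fit analysis. It is not Honeycomb’s actual implementation: it simply fits a few simple forecasting methods (naive, moving average, exponential smoothing) with a handful of candidate settings, scores each on a holdout period, and returns the winner.

```python
# Hypothetical best-fit analysis sketch: try several simple forecasting
# methods with a few candidate settings each, score them on a holdout
# period, and keep the one with the lowest mean absolute error (MAE).

def naive(history, horizon):
    # Repeat the last observed value.
    return [history[-1]] * horizon

def moving_average(history, horizon, window):
    # Average of the most recent `window` observations.
    avg = sum(history[-window:]) / window
    return [avg] * horizon

def exp_smoothing(history, horizon, alpha):
    # Simple exponential smoothing; project the final level forward.
    level = history[0]
    for y in history[1:]:
        level = alpha * y + (1 - alpha) * level
    return [level] * horizon

def mae(actuals, forecasts):
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)

def best_fit(series, holdout=3):
    # Split off the last `holdout` points as a test period.
    train, test = series[:-holdout], series[-holdout:]
    candidates = [("naive", lambda h, n: naive(h, n))]
    for w in (2, 3, 4):
        candidates.append((f"ma({w})", lambda h, n, w=w: moving_average(h, n, w)))
    for a in (0.2, 0.5, 0.8):
        candidates.append((f"ses({a})", lambda h, n, a=a: exp_smoothing(h, n, a)))
    scored = [(mae(test, f(train, holdout)), name) for name, f in candidates]
    error, winner = min(scored)
    return winner, error

demand = [100, 102, 98, 105, 110, 108, 115, 112]
model, score = best_fit(demand)
```

A production system would of course use a far richer model library and settings grid; the point is only the structure: candidate models times candidate settings, scored on held-out history.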
Revising the best fit analysis?
Is carrying out a best fit analysis only at the beginning of a statistical forecasting implementation sufficient? Demand patterns change over time, and a model is only as good as its interpretation of the historical data. When the market moves fast, you need to respond immediately. Products with a short lifecycle in a highly competitive market, such as electronic products, have demand patterns that change within months.
That’s why revising the best fit analysis regularly can improve forecast accuracy. But that raises the following question: How often should you run the best fit analysis?
Stability versus optimization of the forecast
There is a trade-off between putting a lot of effort into improving forecast accuracy, which consumes all of the demand planner’s attention, and automating the process so that the demand planner can focus on where effort really adds value. The forecast settings should be automated as optimally as possible, but there is also a need for stability and consistency in the generated forecast.
This is why Planning Services can evaluate different update frequencies for revising the best fit models and determine which approach is the most suitable for each of our customers. For example, the image above shows the forecast accuracy for three best-fit update scenarios: revisions once a year, every six months, and every month.
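Such an evaluation can be sketched as a walk-forward simulation. The code below is an illustrative assumption, not our actual evaluation tooling: it re-selects the model (here, just a smoothing factor) only every `revision_interval` months, forecasts one step ahead each month, and reports the resulting accuracy so that different revision frequencies can be compared.

```python
# Hypothetical sketch of comparing best-fit revision frequencies: walk
# forward through the series month by month, re-select the model only
# every `revision_interval` months, and measure one-step-ahead accuracy.

def forecast(history, alpha):
    # Simple exponential smoothing, one step ahead.
    level = history[0]
    for y in history[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

def pick_alpha(history):
    # "Best fit" on the history seen so far: choose the smoothing
    # factor with the lowest cumulative one-step backtest error.
    def backtest(a):
        return sum(abs(history[t] - forecast(history[:t], a))
                   for t in range(2, len(history)))
    return min((0.2, 0.5, 0.8), key=backtest)

def rolling_mae(series, start, revision_interval):
    errors, alpha = [], None
    for t in range(start, len(series)):
        if alpha is None or (t - start) % revision_interval == 0:
            alpha = pick_alpha(series[:t])  # revise the best-fit model
        errors.append(abs(series[t] - forecast(series[:t], alpha)))
    return sum(errors) / len(errors)

# Compare yearly, half-yearly, and monthly revision on a demand series.
series = [100 + (t % 12) * 3 for t in range(36)]
results = {k: rolling_mae(series, start=12, revision_interval=k)
           for k in (12, 6, 1)}
```

The same structure extends naturally to a full model library: only `pick_alpha` (the best-fit step) and `forecast` would grow, while the walk-forward loop and the revision-interval comparison stay the same.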
Yearly versus monthly revision
The example indicates that monthly revisions in statistical forecasting outperform the other revision frequency scenarios. The forecast accuracy, especially in the months between December and June, is higher for the monthly revision scenario. The monthly best fit revision scores somewhat less than one percentage point above the average forecast accuracy of the other two revision scenarios. In the end, it is up to our customers to decide whether this difference is significant enough to justify more frequent forecast model revisions.