Let’s kick off with a hard-hitting statistic: 77% of businesses are investing in technology to create better supply chain visibility. But here’s the intriguing part: those who’ve already taken the plunge were reportedly twice as likely as others to avoid supply chain problems.
Supply chain visibility is therefore a top priority for supply chain leaders. But in an era where AI and machine-learning-powered tools are transforming the landscape of forecasting, there’s a crucial planning paradox we cannot afford to ignore: no forecast is immune from error.
Thankfully, there are statistical tools and techniques that can help you close the gap between projected and actual demand.
Hence, in today’s blog, we will delve into the topic of forecast accuracy and how you can use this powerful statistical analysis technique to build more robust forecasts and create better visibility throughout your entire supply chain.
Let’s start with the fundamentals.
What is forecast accuracy?
Forecast accuracy is a method you can use to judge the quality of your forecasts. In the context of supply chain planning, forecast accuracy refers to how closely the predicted demand for products or services matches the actual demand.
The result of this analysis can help ensure more effective decision-making. But make no mistake; it’s not a silver bullet you can utilise as a quick fix to cover your forecasting shortfalls.
Even establishing what margin of error to allow for can be troublesome. After all, how can you define what ‘accurate’ looks like?
100% accuracy would be a dream, but it’s often more of an idealised benchmark than a realistic outcome. Likewise, 75% sounds reasonable, but whether it’s achievable, or even useful as a target, is almost impossible to answer.
It completely depends on the goals of your company, the nature of your customers’ purchasing behaviour and the data you have at your disposal.
How to measure forecast accuracy
While forecasting your anticipated sales success is valuable, the true insight often lies in assessing the accuracy of that forecast. To achieve this, it’s essential to establish a way of assigning a performance score to it. In many cases, this score provides a more meaningful perspective than the forecast alone.
The fundamentals of measuring forecast accuracy
Before we continue, it’s important to cover the basics.
All measures are based on the forecast error, e. This error is the difference between the forecast, f, i.e. the predicted demand, and the actual demand, d, within a certain time period:

e = f - d
A forecast is good when the error measure is small.
Sometimes, however, the performance is focused on accuracy rather than the degree of error. In this case, a forecast is good when the accuracy is close to 100%.
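To make this concrete, here’s a minimal sketch of the forecast error per period. The demand figures are made up purely for illustration:

```python
# A minimal sketch of the forecast error per period, e = f - d,
# using made-up demand figures.
forecast = [100, 120, 90, 110]   # f: predicted demand per period
actual   = [ 95, 130, 90, 100]   # d: actual demand per period

errors = [f - d for f, d in zip(forecast, actual)]
print(errors)  # [5, -10, 0, 10] -> positive = over-forecast, negative = under-forecast
```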
There are several methods to assess forecast accuracy. And each one has positives and negatives attached. Below is a guide to the most common ones at your disposal.
Bias or mean error (ME)
The first forecast accuracy measure, called the bias or mean error (ME), is the average of the forecasting error over n time periods:

ME = (e₁ + e₂ + … + eₙ) / n
This measure is easy to understand. For a good forecast, the difference between the predictions and the actual demand is small, so its bias is close to zero.
A positive bias indicates that you’re predicting too much demand, whereas a negative bias means that you’re underestimating it.
A drawback of this model, however, is that positive and negative errors cancel each other out. A forecast with large errors can still have a small bias.
Therefore, you should never consider the bias alone, but also look at other measures for forecast accuracy.
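As a quick illustration of why the bias alone can mislead, here’s a sketch on made-up figures where sizeable errors largely cancel out:

```python
# A hedged sketch of the bias (mean error) on made-up figures,
# showing how positive and negative errors cancel each other out.
forecast = [100, 120, 90, 110]
actual   = [ 95, 130, 90, 100]

errors = [f - d for f, d in zip(forecast, actual)]  # [5, -10, 0, 10]
bias = sum(errors) / len(errors)
print(bias)  # 1.25 -> close to zero, even though single-period errors reach 10 pieces
```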
Mean absolute error (MAE)
A model that’s a direct indication of the magnitude of errors is the mean absolute error (MAE):

MAE = (|e₁| + |e₂| + … + |eₙ|) / n
The advantage of this model is that it uses absolute forecasting errors, so a small MAE means that all forecasting errors are close to zero.
It’s also an easy measure to interpret.
However, it doesn’t show how large this average error is, compared to the actual demand. A forecast that’s off by 5 pieces is very bad for a product that sells 10 on average, but good for a product that achieves an average of 1,000 sales.
Mean absolute percentage error (MAPE)
The mean absolute percentage error (MAPE) reflects how large the errors are compared to the actual demand.
It’s defined as the average ratio between the forecast error and the actual demand:

MAPE = (|e₁/d₁| + |e₂/d₂| + … + |eₙ/dₙ|) / n × 100%
Because of this, it’s also easy to interpret.
The MAPE indicates how far the forecast is off on average, as a percentage.
However, this forecast accuracy method also has its limitations. Overestimating demand is punished more heavily than underestimating it. Predicting 30 pieces when the actual demand was only 10 pieces gives a MAPE of 200%, but underestimations give a maximum MAPE of only 100%.
This is an issue for products with little demand, where it’s more difficult to obtain a small MAPE. But it’s an even bigger problem for time periods with no demand, as you’d be dividing by zero.
Root mean squared error (RMSE)
The final forecast accuracy model to mention is the root mean squared error (RMSE).
As the name suggests, this measure is based on the square root of the mean of the squared forecasting errors:

RMSE = √((e₁² + e₂² + … + eₙ²) / n)
This model is similar to, and therefore comparable with, the MAE, but punishes larger errors much more than smaller ones.
It’s a good measure to see if the forecasted and actual sales are always close to each other. Unfortunately though, this makes this particular model more difficult to interpret.
If your demand data contains an occasional outlying sale, which you don’t expect a forecast to capture, you should use the MAE method instead, as it is much more robust to outliers.
The mean squared error (MSE) is almost the same as the RMSE, but it doesn’t use the additional square root. Therefore, it expresses the error in squared units, making the MSE more difficult to interpret.
There are many other forecast accuracy measures available. Which one you should use will depend on the data you have at your disposal, and it may be worth checking out some of the rarer alternatives to see if there’s a better fit.
What does ‘good’ forecast accuracy look like?
The sheer number of forecast accuracy models throws up questions like, “Which forecast accuracy measure should I use?”, and, “What should my forecast accuracy target be?”
Unfortunately, neither of these are easy to answer.
Let’s say you manage the forecast for your products and find out that, on average, the MAPE is 60%.
Is that a good or a bad score?
A forecast accuracy score by itself doesn’t amount to much. To assess the quality of a forecast, you need an idea of how predictable the demand is. And that depends on many factors.
The demand for products with large volumes is usually easier to predict than slow movers. It’s also easier to predict the demand for a product in a group of stores than to accurately capture the demand in each store separately.
Furthermore, long-term predictions are much harder than forecasting short-term demand. The attainable forecast accuracy also depends on the amount of relevant data you have available.
If important information is missing, your forecasting models won’t perform well. After all, you can only make predictions based on what you know.