The Absolute Best Way to Measure Forecast Accuracy
Good demand forecasts are accurate demand forecasts. Today, I’m going to talk about the absolute best metric to use in the forecasting process. Let’s start with a sample demand forecast.
The following table represents the forecast and actual demand for customer traffic at a small-box, specialty retail store, but all the same principles would also apply to foot traffic in a department within a larger store.
Is this a good or a bad forecast? How would you go about assessing forecast accuracy in this case?
Weekly vs. Daily Forecasts
At the weekly level, the forecast is certainly good. After all, the forecast says that 582 customers would visit the store, and by the end of the week, 582 customers did visit the store. The problem is the daily demand forecast.
There are some big swings, particularly towards the end of the week, that cause labor to be misaligned with demand. Since we're trying to align labor to demand, understanding these swings (these forecast errors) is key to improving forecast accuracy.
It's easy to look at this forecast and spot the problems. At scale, however, spotting forecast errors becomes much more difficult: for a large number of stores over a long period of time, monitoring forecast accuracy can be extremely challenging.
Overcoming the Challenges
To overcome these obstacles, you’ll want to use a metric that succinctly summarizes forecast accuracy. This way, you can look at many data points and compare forecasts. This is useful when you want to determine if one method of forecasting future demand is better than another.
Such forecast accuracy measures can be helpful to determine if the forecasting process used by a workforce management system is better than the one provided by the finance department, or if forecast accuracy is trending in the right direction.
Understanding Forecast Error Calculations
I frequently see retailers use a simple method for calculating forecast error. Formally referred to as “Mean Percentage Error” (MPE), it is calculated as follows:
MPE = ((Actual – Forecast) / Actual) x 100
Applying this calculation to Sunday in our table above, we can quickly find the forecast error for that day is –2.5 percent.
MPE = ((79 – 81) / 79) x 100 = –2.5
This means that actual traffic came in 2.5 percent below the forecast.
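In code, the calculation is a one-liner. Here is a minimal sketch in Python (the function name is mine, not from any particular library), applied to Sunday's numbers:

```python
def percentage_error(actual, forecast):
    """Percentage error for a single period: ((actual - forecast) / actual) * 100."""
    return (actual - forecast) / actual * 100

# Sunday: actual traffic of 79 against a forecast of 81
sunday_error = round(percentage_error(79, 81), 1)
print(sunday_error)  # → -2.5
```

The negative sign tells you the forecast was above actuals for that day.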
Mean Percentage Error
The benefits of MPE are that it is easy to calculate and the results are easily understood. Statisticians and math-heads like to use complex methods to calculate forecast accuracy, which can be intimidating by name and produce results that are not intuitively understood (Root Mean Square Error, anyone?).
The problem is that when you start to summarize MPE for multiple forecasts, the aggregate value doesn’t represent the error rate of the individual MPEs. Consider the following table:
How accurate were the forecasts for the week as a whole? By averaging each day's forecast error, I get –0.2 percent. Hmmm…
Does –0.2 percent accurately represent last week's error rate? No, absolutely not. The most accurate forecast was on Sunday at –2.5 percent, while the worst was on Saturday at –23.5 percent!
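To see the cancellation effect in isolation, here is a quick sketch using made-up daily error percentages (not the values from the table above): every day misses badly, yet the average reports a perfect week.

```python
# Hypothetical daily forecast errors, in percent (invented for illustration):
# large misses in both directions across the week
daily_errors = [10.0, -10.0, 15.0, -15.0, 5.0, -5.0, 0.0]

mean_error = sum(daily_errors) / len(daily_errors)
print(mean_error)  # → 0.0, even though most days were off by 5-15 percent
```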
MAPE – “Mean Absolute Percentage Error”
The problem is that the negative and positive values cancel each other out when averaged. Fortunately, there is an easy way to fix the problem by using “Mean Absolute Percentage Error” (MAPE), which is calculated as:
MAPE = (Absolute Value(Actual – Forecast) / Actual) x 100
Mean Absolute Percentage Error
MAPE is remarkably similar to MPE, with one big exception: you take the absolute value of the difference between actual and forecast. Let's see how the calculation works for Sunday:
MAPE = (Absolute Value(79 – 81) / 79) x 100 = 2.5
As you can see, the absolute value removes the negative sign. This allows us to summarize multiple values and get a better sense of the absolute percent error rate of our forecasts:
The aggregate MAPE is 10.8 percent, a much more representative measure of overall forecast quality than the –0.2 percent we got from MPE.
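Putting the two metrics side by side makes the difference concrete. This sketch uses an invented week of actuals and forecasts (not the data from the tables above) where over- and under-forecasts are balanced:

```python
def mpe(actuals, forecasts):
    """Mean Percentage Error: signed errors, so misses can cancel out."""
    return sum((a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals) * 100

def mape(actuals, forecasts):
    """Mean Absolute Percentage Error: unsigned errors, so misses accumulate."""
    return sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals) * 100

# Hypothetical week (invented numbers): over- and under-forecasts of similar size
actuals   = [100, 100, 100, 100]
forecasts = [110, 90, 120, 80]

print(round(mpe(actuals, forecasts), 1))   # → 0.0  (errors cancel)
print(round(mape(actuals, forecasts), 1))  # → 15.0 (the true average miss)
```

MPE declares this a perfect forecast; MAPE shows the schedule was off by 15 percent on a typical day.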
MAPE delivers the same benefits as MPE (easy to calculate, easy to understand), plus you get a better representation of the true forecast error. Some argue that by eliminating the negative sign from the daily error, we lose sight of whether we're over- or under-forecasting.
The question is: does it really matter?
When it comes to labor forecasting, being above actuals means that you're using too much labor and wasting payroll. Being below actuals means that you're missing opportunity and adversely affecting customer experience. Either way you pay a price, which is why the magnitude of the error usually matters more than its direction.
Learn More About Axsium’s Forecasting Strategies
This post is part of the Axsium Retail Forecasting Playbook, a series of articles designed to give retailers insight and techniques into forecasting as it relates to the weekly labor scheduling process.
In my next post in this series, I’ll give you three rules for measuring forecast accuracy. Then, we’ll start talking about how to improve forecast accuracy.
For the introduction to the series and other posts in the series, please click here.