How You Can Reduce Bias in Forecasts Using Waterfall Charts
When we examine demand forecasts, like the one in the chart above, even without domain expertise or knowledge of the context of the forecasting problem, we can still explore the data patterns for sources of uncertainty. As forecasters, we can never eliminate uncertainty, but we can perhaps tame it by investigating forecast errors for bias. An error range of approximately -7% to +10% (presumably a 95% range) is shown on the chart above. Can we improve on this by examining biases? In this chart, it is evident that the seasonal peaks in the fitted model fall below the actuals. Could that be true for the forecast period too? To gain insight into bias in forecast periods, we can create waterfall charts populated with holdout samples.
Creating Waterfall Charts for Handling Bias Problems
Models are designed to create unbiased forecasts. This means the overall bias is zero on average. However, that does not imply that every period is unbiased on average. By creating waterfall charts of forecast errors with holdout periods, we can gain insight into the impact of bias on forecasting performance. The first waterfall chart shows model forecasts in a holdout sample for unit sales (ACTUALS).
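A minimal sketch of the holdout setup described above: split the series into a fit period and a holdout period, then compute per-period forecast errors. The data values and the naive seasonal stand-in model are illustrative assumptions, not the article's client data; any real application would use the actual fitted model's forecasts.

```python
# Illustrative monthly unit sales (assumed data, not from the article).
actuals = [120, 95, 130, 110, 150, 105, 125, 140, 100, 135, 115, 145]

# Hold out the last four periods; fit on the rest.
holdout_len = 4
train, holdout = actuals[:-holdout_len], actuals[-holdout_len:]

# Naive seasonal stand-in model: repeat the last cycle of the training data.
forecasts = train[-holdout_len:]

# Error convention: forecast minus actual, so positive = overforecast.
errors = [f - a for f, a in zip(forecasts, holdout)]
print(errors)  # one error per holdout period, ready for a waterfall chart
```

These per-period errors are exactly what the second waterfall chart would display, one bar per holdout period.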
From this we calculate, in the second waterfall chart, the forecast errors in the holdout sample. Because the Mean Error (ME) and Median Error (MdE) are so close, we find no evidence of unusual values in the bias. The Absolute Percentage Errors (APE) are calculated in the third waterfall chart to give an indication of the precision of the forecasts. Overall forecast accuracy is a function of both the bias and the precision.
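The bias and precision measures named above can be sketched as follows, using the convention error = forecast − actual (so a positive error is an overforecast). The numbers are illustrative assumptions, not the article's data.

```python
from statistics import mean, median

# Illustrative holdout actuals and model forecasts (assumed values).
actuals   = [100, 135, 115, 145]
forecasts = [150, 105, 125, 140]

# Forecast errors: positive = overforecast.
errors = [f - a for f, a in zip(forecasts, actuals)]

# Bias measures: Mean Error and Median Error.
me, mde = mean(errors), median(errors)

# Precision measures: Absolute Percentage Errors, plus their mean and median.
apes = [abs(e) / a * 100 for e, a in zip(errors, actuals)]
mape, mdape = mean(apes), median(apes)

print(f"ME={me}  MdE={mde}  MAPE={mape:.1f}%  MdAPE={mdape:.1f}%")
```

When ME and MdE diverge sharply, one or two unusual errors are dragging the mean; when they agree, as in the article's example, the bias estimate is more trustworthy.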
In another client example with weekly retail data, I found that the demand variability comprises about 20% seasonality using a periodicity of 13 (rather than 52, for detecting weekly seasonality within quarters). Hence, the bias shown by the MdE can be used to adjust the forecasts for the same periods in the next quarter. For example, one might adjust the forecast for Week 3 down by 50 units to compensate for the overforecast of 50 units. Likewise, Weeks 6 and 10 were adjusted down by 15 and 68 units, respectively. The underforecasted weeks can be adjusted similarly, but not without collaboration with marketing domain experts about possible root causes. Small adjustments should be avoided, as all forecasts are uncertain.
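The adjustment rule described above can be sketched as: subtract each week's observed median error from next quarter's forecast for the same week, but skip adjustments that are too small to act on. The Week 3, 6, and 10 bias values come from the article; the next-quarter forecasts and the threshold are illustrative assumptions.

```python
# Per-week median errors (MdE) from the holdout; positive = overforecast.
# Weeks 3, 6, 10 follow the article; Week 7 is an assumed small bias.
mde_by_week = {3: 50, 6: 15, 10: 68, 7: -4}

# Assumed next-quarter forecasts for the same weeks (illustrative).
next_forecast = {3: 500, 6: 320, 7: 410, 10: 610}

MIN_ADJUST = 10  # skip small adjustments: all forecasts are uncertain anyway

def adjust(fc, bias, min_adjust=MIN_ADJUST):
    # Subtract the bias only when it is large enough to be worth acting on.
    return fc - bias if abs(bias) >= min_adjust else fc

adjusted = {w: adjust(fc, mde_by_week.get(w, 0)) for w, fc in next_forecast.items()}
print(adjusted)
```

Week 7's small bias of -4 units falls below the threshold and is left alone, reflecting the advice that small adjustments should be avoided.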
The APE calculations in the waterfall chart can suggest which periods have more reliable forecasts than others. Weeks 7 and 10 have low Median Absolute Percentage Error (MdAPE) estimates, while Weeks 6 and 8 might need closer scrutiny in a root cause analysis.
In practice, these waterfall charts should be updated and maintained on a continuous basis to validate observations about bias and precision when forecasting with uncertainty.
This material on forecast accuracy measurement is explained in more detail in my book Change&Chance Embraced: Achieve Agility with Demand Forecasting in the Supply Chain and workshop manual Smarter Forecasting and Planning, both available through Amazon in paperback and ebook versions (for tablet and smartphone readers).