### How to Measure Leadtime Demand Accuracy for Forecast Profiles – An Information-theoretic Approach

Demand forecasting and performance evaluation in today's disrupted **consumer demand-driven supply chain environment** have become extremely challenging disciplines for business planners to master. For instance, current forecasting performance metrics for intermittent demand have **shortcomings** that are not easily overcome. In particular, the widely used **Mean Absolute Percentage Error** (MAPE) is unusable in this context whenever zero demand is encountered. To embrace more agile and reliable approaches, I will consider a new metric for multi-step-ahead forecasts with fixed horizons (e.g. **lead-time demand forecasts**) by using relative entropy measures from information theory.

Before dealing with intermittent demand forecasting applications, I will first examine a lead-time demand forecasting example without intermittency; the approach is equally applicable in both settings. I have added the notation in the spreadsheet, so you can follow along with your own data.

In the dataset, the holdout data (in *italics*) are shown in the row for year 2016. Since a fiscal, budget, or planning year does not necessarily start in January, lead-time or forecasting-cycle totals are also shown; we will be using them in our calculations. For a twelve-month **holdout sample**, we have created three forecasts: by judgment, by method, and by model.

For a judgment forecast, we will take last year's actuals as the forecast for the next year; this forecast profile is the **Year-1** FP. For a method, we use the constant **MAVG_12** FP, which is simply the average of the previous 12 months of history. The **MAVG_12** method has a level profile.
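The two simpler profiles can be sketched in a few lines of Python. This is a minimal illustration with made-up numbers, not the article's spreadsheet data:

```python
# Illustrative 24 months of monthly history (hypothetical values)
history = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118,
           115, 126, 141, 135, 125, 149, 170, 170, 158, 133, 114, 140]

last_year = history[-12:]

# Year-1 FP: last year's actuals reused as next year's forecast profile
year_1_fp = list(last_year)

# MAVG_12 FP: the 12-month average repeated -- a level (flat) profile
mavg_12_fp = [sum(last_year) / 12.0] * 12
```

Note that the Year-1 FP carries last year's seasonal shape forward, while the MAVG_12 FP deliberately has no shape at all.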

The model forecast is based on ETS (A,A,M), a state-space forecasting model with a local level and multiplicative seasonal forecast profile, described in **Chapter 8** of **my book**. The model was selected in automatic mode, since the profile is a deterministic trend/seasonal profile.

We start by defining a multi-step-ahead forecast with a fixed horizon (e.g. a lead-time forecast) as a *Forecast Profile* (**FP**). A forecast profile can be created by a model, a method, or informed judgment. The forecast profile of an *m*-step-ahead forecast makes up a sequence **FP** = { FP(1), FP(2), . . . , FP(*m*) }. For our example, we will assume that the forecasts are monthly, starting at time t = T and ending at time t = T + *m*, where *m* = 12. Typically, lead-times run from 2 months to 6 months or more, the time for an order to reach an inventory warehouse. For operational and budget planning, the time horizon might be 12 months. This 12-month pattern is called a *Forecast Profile*.

At the end of the forecast horizon or planning cycle, there is a corresponding *Actual Profile* **AP** = { AP(1), AP(2), . . . , AP(*m*) } to compare with **FP** for an accuracy assessment. Familiar methods include the **M**ean **A**bsolute **P**ercentage **E**rror (MAPE). The problem, in the case of intermittent demand, is that some AP values can be zero, which leads to undefined terms in the MAPE.

We are going to evaluate forecasting performance by considering the information in the Forecast Profile using information-theoretic measures. Then we can assess how the forecast profile diverges from the pattern or profile in the actuals.

### Coding a Profile Into an *Alphabet* Profile

For a given profile, the *Alphabet Profile* is a set of weights whose sum is one. That is, a *Forecast Alphabet Profile* is **FAP** = { f1, f2, . . . , fm }, where fi = FP(i) / [FP(1) + FP(2) + . . . + FP(*m*)]. In other words, we simply divide each forecast by the sum of the forecasts over the horizon, which gives us fractions whose sum equals one. The *Actual Alphabet Profile* is **AAP** = { a1, a2, . . . , am }, where ai = AP(i) / [AP(1) + AP(2) + . . . + AP(*m*)]. When we compare a profile with the corresponding alphabet profile, we note that the pattern does not change.
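The normalization step is a one-liner. Here is a minimal sketch, with a hypothetical five-step forecast profile as input:

```python
def alphabet_profile(profile):
    """Divide each value by the profile total so the weights sum to one."""
    total = float(sum(profile))
    return [v / total for v in profile]

fp = [10, 12, 15, 9, 14]      # hypothetical 5-step forecast profile
fap = alphabet_profile(fp)    # weights sum to one, pattern unchanged
```

Because every value is divided by the same total, the alphabet profile preserves the shape of the source profile, as the article notes.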

### An Information-theoretic Accuracy Measure of *Forecast Profile* Performance

The performance of the process that created the Forecast Profile is of interest because we will derive a 'distance' metric between the Forecast Profile and the Actual Profile that has its basis in information theory. We notice in the graphs that the *alphabet* profile has the same pattern as the corresponding *source* profile from which it was created. The alphabet profile is necessary in the construction of the 'distance' metric.

The *relative entropy* or *divergence* measure is used as a performance measure in various applications in meteorology, neuroscience and machine learning. I use it here as a measure of how one forecast profile diverges from another profile. A performance measure for the forecast alphabet profile (FAP) is given by the divergence measure

D(AAP, FAP) = a1 ln(a1 / f1) + a2 ln(a2 / f2) + . . . + am ln(am / fm)

This can be interpreted as a measure of dissimilarity or 'distance' between the actual alphabet **AAP** and forecast alphabet **FAP** *profiles*. The measure, known as the Kullback-Leibler divergence, is non-negative and is equal to zero if and only if ai = fi (i = 1, 2, . . . , *m*). This happens when the alphabet profiles overlap, or what we might consider as 100% accuracy.
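The divergence measure can be computed directly from the two alphabet profiles. A minimal sketch, using the standard convention that terms with ai = 0 contribute nothing to the sum:

```python
import math

def kl_divergence(aap, fap):
    """D(AAP, FAP) = sum_i a_i * ln(a_i / f_i); terms with a_i = 0 contribute 0."""
    return sum(a * math.log(a / f) for a, f in zip(aap, fap) if a > 0)

aap = [0.2, 0.3, 0.5]
d_same = kl_divergence(aap, aap)              # identical profiles: zero divergence
d_diff = kl_divergence(aap, [0.4, 0.4, 0.2])  # mismatched profiles: positive
```

Note that the zero-actual convention is exactly what makes this measure usable for intermittent demand, where the MAPE breaks down.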

### A Decomposition of the Divergence Measure D(AAP, FAP)

The MAVG_12 **FP** is level, as are the Naïve_1 (NF1) **FP** and simple exponential smoothing (SES) **FP**. In these cases, fi = 1/*m* for every i, so

D(AAP, FAP) = Σ ai ln(ai · *m*) = Σ ai ln(ai) + ln(*m*)

where *m* = 12, since the ai sum to one. The first term is the negative of the *entropy* H(AAP) = – Σ ai ln(ai), which is a measure of the information in AAP. Hence, D(AAP, FAP) = ln(12) – H(AAP), in this example.
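The decomposition for a level forecast profile is easy to verify numerically. A small sketch with an arbitrary (made-up) actual alphabet profile:

```python
import math

m = 12
# an arbitrary actual alphabet profile: weights 1/78, 2/78, ..., 12/78 sum to one
aap = [i / 78.0 for i in range(1, 13)]
fap_level = [1.0 / m] * m                      # any level FP has a uniform alphabet profile

entropy = -sum(a * math.log(a) for a in aap)   # H(AAP)
d = sum(a * math.log(a / f) for a, f in zip(aap, fap_level))
# d equals ln(m) - H(AAP), matching the decomposition for a level profile
```

So for a level forecast profile, divergence is entirely determined by how far the actuals' entropy falls short of the maximum ln(12); a flat actual pattern scores best, and a strongly seasonal one scores worst.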

For a *single* forecast, Actual (A) minus Forecast (F) measures the accuracy of the forecast; likewise, D(AAP, FAP) can be viewed as a *measure of accuracy* for a single Actual (AP) and Forecast (FP) *profile* pair. That is how I propose **D(AAP, FAP)** be used for assessing the performance of multi-step-ahead and lead-time demand forecasts.

I am unaware of this accuracy measure having been used previously in the forecasting literature or in demand forecasting software or applications. If you know of such a use, please bring it to my attention.

In a previous blog on my **website** and a LinkedIn article, I gave a spreadsheet example of the *Forecast Profile Performance* (**FPP**) index to measure intermittent demand forecast accuracy. I invite you to join the LinkedIn groups I manage and share your thoughts and practical experiences with intermittent data and smarter demand forecasting in the supply chain. Feel free to send me the details of your findings, including the underlying data used without identifying descriptions, so I can continue to test and validate the process for intermittent data.
