Preventing the Forecaster's Evaluation Dilemma

Author: Tichy, Malte C.
Publication Year: 2023
Subject:
Document Type: Working Paper
Description: Assume that a grocery item is sold 1'234 times on a given day. What should an ideal forecast have predicted for such a well-selling item, on average? More generally, when considering a given outcome value, should the empirical average of the forecasted expectation values for that outcome ideally match it? Many people will intuitively answer the first question with "1'234, of course", and affirm the second. Perhaps surprisingly, such grouping of data by outcome induces a bias in the evaluation. An evaluation procedure that aims to verify the absence of bias across velocities, when based on such segregation by outcome, therefore fools forecast evaluators and incentivizes forecasters to produce exaggerated (overly extreme) forecasts. Such anticipatory adjustments jeopardize forecast calibration and clearly worsen forecast quality - this problem was named the "Forecaster's Dilemma" by Lerch et al. in 2017 (Statistical Science 32, 106). To check for bias across velocities, forecast evaluators should instead group pairs of forecasts and outcomes by the predicted values and evaluate the empirical mean outcome per prediction bucket. Using a simple mathematical treatment of the number of items sold in a supermarket, the reader is walked through the dilemma and its circumvention. (A numerical sketch of the two grouping procedures follows the record below.)
Comment: 6 pages
Database: arXiv
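
Illustrative sketch (not from the paper; the gamma demand-rate model and all parameters below are assumptions chosen for demonstration): simulate an ideal forecaster that always predicts the true Poisson mean of each item-day, then apply both evaluation procedures. Grouping by outcome shows an apparent bias even for this perfect forecaster, while grouping by prediction bucket recovers the calibration, in line with the abstract's argument.

import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical demand model: each item-day has a true expected-sales rate
# drawn from a gamma distribution; realized sales are Poisson around it.
rates = rng.gamma(shape=2.0, scale=5.0, size=200_000)  # true expectations
forecasts = rates                  # an IDEAL forecaster predicts the true mean
outcomes = rng.poisson(rates)      # observed sales

# Biased procedure: group by OUTCOME, average the forecasts per group.
for y in (0, 5, 20):
    sel = outcomes == y
    print(f"outcome {y:2d}: mean forecast = {forecasts[sel].mean():6.2f}")
# Even for the ideal forecaster, the mean forecast does not match y,
# which would wrongly suggest a bias across velocities.

# Unbiased procedure: group by PREDICTION bucket, average the outcomes.
for lo, hi in ((0, 5), (5, 10), (10, 20), (20, 40)):
    sel = (forecasts >= lo) & (forecasts < hi)
    print(f"forecast in [{lo:2d},{hi:2d}): "
          f"mean outcome = {outcomes[sel].mean():6.2f}, "
          f"mean forecast = {forecasts[sel].mean():6.2f}")
# Here the mean outcome matches the mean forecast in every bucket: the
# ideal forecaster is correctly found to be unbiased.

The discrepancy in the first loop is ordinary regression to the mean: the few item-days that sold 0 units mostly had moderate true rates plus bad luck, so their average forecast sits well above 0, and the converse holds for very large outcomes; no real bias is present.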