Improved Framework for Measuring the Magnitude and Impact of Biases in Project Evaluation

Authors: Malik K. Alarfaj, Duane A. McVay
Year of publication: 2016
Source: Day 2 Tue, September 27, 2016.
Description: Several authors over several decades (Capen 1976; Brashear et al. 2001; Rose 2004) have observed that industry performance has been consistently below expectations. While this is painfully obvious during the current industry downturn, available evidence suggests that even when the industry is profitable, e.g., during the decade prior to the most recent downturn, it still performs substantially below its expectations and its potential (Nandurdikar 2014). Many attribute this underperformance to cognitive biases in project evaluation, which result in poor project selection and valuation. McVay and Dossary (2014) presented a simplified framework for estimating the cost of underestimating uncertainty. They demonstrated that chronic overconfidence and optimism (estimated distributions of project value that are too narrow and shifted positively), common in industry, produce substantial disappointment (the difference between estimated and realized portfolio values), also common in industry. In this work, we generalize their framework to include full estimated distributions (e.g., normal or lognormal) instead of the truncated distributions they employed. In addition, we extend their framework to model underconfidence (estimated distributions that are too wide) and demonstrate that underconfidence is just as detrimental to portfolio performance as overconfidence. Decision error is minimized and portfolio value is maximized when there is no bias in project estimation, i.e., neither overconfidence nor underconfidence and neither optimism nor pessimism. Using either framework, we demonstrate that operators can quantitatively measure biases (overconfidence, underconfidence, optimism, and pessimism) from lookbacks (comparing actual performance to probabilistic forecasts) and the generation of calibration plots. Once aware of the direction and magnitude of these biases, operators can eliminate them in subsequent forecasts through a combination of internal correction of uncertainty assessments, via training or ongoing feedback, and external correction of forecasts using bias measurements from calibration results.
Database: OpenAIRE
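
As a rough illustration of the lookback-and-calibration procedure the abstract describes, the following Python sketch simulates a portfolio whose forecasts are optimistic and overconfident, then measures the resulting disappointment and calibration error. The lognormal distributions, the bias parameters (a log-mean shift for optimism/pessimism, a log-standard-deviation ratio for over-/underconfidence), and all variable names are assumptions made for illustration; they are not taken from McVay and Dossary (2014) or from this paper.

```python
# Illustrative sketch only: assumed lognormal project values and
# hypothetical bias parameters, not the authors' actual model.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

N = 1000                          # projects in the lookback portfolio
MU_TRUE, SIGMA_TRUE = 3.0, 0.8    # log-space parameters of true project value

# Realized ("actual") project values.
actual = rng.lognormal(MU_TRUE, SIGMA_TRUE, N)

# Biased estimated distributions: optimism shifts the log-mean up;
# overconfidence shrinks the log-standard deviation (sigma_ratio < 1),
# while underconfidence would widen it (sigma_ratio > 1).
optimism_shift = 0.3              # > 0: optimistic, < 0: pessimistic
sigma_ratio = 0.5                 # < 1: overconfident, > 1: underconfident
mu_est = MU_TRUE + optimism_shift
sigma_est = SIGMA_TRUE * sigma_ratio

# Disappointment: estimated portfolio value (sum of forecast means)
# minus realized portfolio value.
estimated_value = N * np.exp(mu_est + 0.5 * sigma_est**2)  # lognormal mean
realized_value = actual.sum()
print(f"disappointment: {estimated_value - realized_value:,.0f}")

# Calibration: the percentile of each actual outcome within its forecast
# distribution. For an unbiased, well-calibrated forecaster these are
# uniform on [0, 1]; systematic deviations reveal the direction and
# magnitude of bias. (Cumulative convention used here; some operators
# quote exceedance percentiles instead.)
percentiles = norm.cdf((np.log(actual) - mu_est) / sigma_est)

# A simple calibration check: nominal percentile vs. observed frequency.
for p in (0.1, 0.5, 0.9):
    observed = (percentiles <= p).mean()
    print(f"P{int(p * 100):02d}: nominal {p:.2f}, observed {observed:.2f}")
```

A calibrated forecaster would print observed frequencies close to the nominal percentiles; a systematic gap like the one produced here is the kind of quantitative bias measurement that, per the abstract, operators can feed back into training or apply as an external correction to subsequent forecasts.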