Project Management Journal, vol. 37, no. 3, August 2006, pp. 5-15
Abstract
The paper presents a promising new approach to mitigating risk in project management, based on theories of decision making under uncertainty that won the 2002 Nobel prize in economics.
- First, the paper documents inaccuracy and risk in project management.
- Second, it explains inaccuracy in terms of optimism bias and strategic misrepresentation.
- Third, the theoretical basis is presented for a promising new method called "reference class forecasting," which achieves accuracy by basing forecasts on actual performance in a reference class of comparable projects and thereby bypassing both optimism bias and strategic misrepresentation.
- Fourth, the paper presents the first instance of practical reference class forecasting, which concerns cost forecasts for large transportation infrastructure projects.
- Finally, potentials for and barriers to reference class forecasting are assessed.
Reference Class Forecasting
Reference class forecasting is based on theories of decision making under uncertainty that won Princeton psychologist Daniel Kahneman the Nobel prize in economics in 2002 (Kahneman and Tversky 1979a, b; Kahneman 1994).
Reference class forecasting promises more accuracy in forecasts by taking a so-called "outside view" on prospects being forecasted, while conventional forecasting takes an inside view.
The outside view on a given project is based on knowledge about actual performance in a reference class of comparable projects.
Inaccuracy in Forecasts
No improvement in forecasting accuracy seems to have taken place, despite all claims of improved forecasting models, better data, etc. (Flyvbjerg, Bruzelius, and Rothengatter 2003; Flyvbjerg, Holm, and Buhl 2002, 2005).
For transportation infrastructure projects, inaccuracy in cost forecasts in constant prices is on average 44.7% for rail, 33.8% for bridges and tunnels, and 20.4% for roads (see Table 1).
Benefit-cost ratios are often wrong, not only by a few percent but by several factors.
Comparative studies show that transportation projects are no worse than other project types in this respect (Flyvbjerg, Bruzelius, and Rothengatter 2003).
Explaining Inaccuracy
First, if technical explanations were valid, one would expect the distribution of inaccuracies to be normal or near-normal with an average near zero; instead, actual distributions are consistently skewed, with averages significantly different from zero. Thus the problem is bias, not inaccuracy as such.
Second, if imperfect data and models were the main explanations of inaccuracies, one would expect accuracy to improve over time as data and models improve; as noted above, no such improvement has taken place.
Psychological and political explanations better account for inaccurate forecasts.
Psychological explanations account for inaccuracy in terms of optimism bias, a cognitive predisposition to judge future events in a more positive light than is warranted by actual experience.
Political explanations account for inaccuracy in terms of strategic misrepresentation.
Under strategic misrepresentation, forecasters and managers deliberately and strategically overestimate benefits and underestimate costs when forecasting the outcomes of projects, in order to increase the likelihood that their projects, and not the competition's, gain approval and funding.
Explanations of inaccuracy in terms of optimism bias have been developed by Kahneman and Tversky (1979a) and Lovallo and Kahneman (2003).
Explanations in terms of strategic misrepresentation have been set forth by Wachs (1989,1990) and Flyvbjerg, Holm, and Buhl (2002, 2005).
In experimental research carried out by Daniel Kahneman and others, the reference class forecasting method has been demonstrated to be more accurate than conventional forecasting methods (Kahneman and Tversky 1979a, 1979b; Kahneman 1994; Lovallo and Kahneman 2003).
The Planning Fallacy and Reference Class Forecasting
Reference class forecasting is a method for debiasing forecasts.
Lovallo and Kahneman (2003: 58) call such common behavior the "planning fallacy" and argue that it stems from actors taking an "inside view," focusing on the constituents of the specific planned action rather than on the outcomes of similar actions that have already been completed.
Using such distributional information from other ventures similar to the one being forecasted is called taking an "outside view," and it is the cure for the planning fallacy.
Reference class forecasting for a particular project requires the following three steps:
- Identification of a relevant reference class of past, similar projects. The class must be broad enough to be statistically meaningful but narrow enough to be truly comparable with the specific project.
- Establishing a probability distribution for the selected reference class. This requires access to credible, empirical data for a sufficient number of projects within the reference class to make statistically meaningful conclusions.
- Comparing the specific project with the reference class distribution, in order to establish the most likely outcome for the specific project.
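The three steps above can be sketched as a minimal calculation; the reference class figures below are hypothetical and purely illustrative:

```python
# Minimal sketch of reference class forecasting. Each number is the ratio
# of actual to forecast cost observed in a past, comparable project
# (step 1: the reference class; figures are hypothetical).
reference_class = [1.10, 1.45, 0.95, 1.25, 1.60, 1.05, 1.20, 1.80, 1.15, 1.40]

def reference_class_forecast(base_estimate, overrun_ratios, quantile=0.5):
    """Steps 2 and 3: build the empirical distribution of overrun ratios
    and read off the outcome at the chosen quantile for the new project."""
    ranked = sorted(overrun_ratios)
    # Simple empirical quantile: index into the ranked overrun ratios.
    idx = min(int(quantile * len(ranked)), len(ranked) - 1)
    return base_estimate * ranked[idx]

# Median ("most likely") outcome for an inside-view estimate of 100:
print(reference_class_forecast(100, reference_class, 0.5))  # 125.0
```

Raising the quantile toward 1 produces a more conservative forecast, mirroring the idea that a lower acceptable risk of overrun requires a higher budget.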
In statisticians' vernacular, reference class forecasting consists of regressing forecasters' best guess toward the average of the reference class and expanding their estimate of the credible interval toward the corresponding interval for the class (Kahneman and Tversky 1979b: 326).
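That regression step can be sketched as a simple linear shrinkage, where the weight is an assumed stand-in for the predictability Kahneman and Tversky tie to the correlation between prediction and outcome:

```python
def regress_to_reference_class(best_guess, class_mean, predictability):
    """Shrink an intuitive best guess toward the reference class average.

    `predictability` in [0, 1] is an assumption of this sketch: 1 means the
    inside view predicts outcomes perfectly, 0 means only the class average
    carries information."""
    return predictability * best_guess + (1 - predictability) * class_mean

# With low predictability, the forecast moves most of the way to the class mean:
print(regress_to_reference_class(80, 120, 0.25))  # 110.0
```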
The research shows that when people are asked simple questions requiring them to take an outside view, their forecasts become significantly more accurate.
To be sure, choosing the right reference class of comparative past projects becomes more difficult when managers are forecasting initiatives for which precedents are not easily found, for instance the introduction of new and unfamiliar technologies.
Most projects, however, are non-routine locally while using well-known technologies. Such projects are therefore particularly likely to benefit from the outside view and reference class forecasting.
First Instance of Reference Class Forecasting in Practice
The first instance of reference class forecasting in practice may be found in Flyvbjerg and Cowi (2004): Procedures for Dealing with Optimism Bias in Transport Planning.
HM Treasury recommended that appraisers involved in large public procurement should make explicit, empirically based adjustments to the estimates of a project’s costs, benefits, and duration.
HM Treasury recommended that these adjustments be based on data from past projects or similar projects elsewhere, and adjusted for the unique characteristics of the project at hand.
- The uplifts refer to cost overrun calculated in constant prices.
- The lower the acceptable risk for cost overrun, the higher the uplift.
- For instance, with a willingness to accept a 50% risk for cost overrun in a road project, the required uplift for this project would be 15%.
If the initially estimated budget were £100 million, then the final budget, taking into account optimism bias at the 80% level with an uplift of 32%, would be £132 million.
If the project managers or their client decided instead that a 50% risk of cost overrun was acceptable, then the uplift would be 15% and the final budget £115 million.
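The budget arithmetic in the example above is a straightforward percentage uplift applied to the inside-view estimate:

```python
def apply_uplift(base_budget, uplift_pct):
    """Apply an optimism bias uplift (in percent) to a base cost estimate."""
    return round(base_budget * (1 + uplift_pct / 100), 2)

# Road-project uplifts cited in the text: 32% at the 80% level of
# acceptable overrun risk, 15% at the 50% level.
print(apply_uplift(100, 32))  # 132.0 (million GBP)
print(apply_uplift(100, 15))  # 115.0
```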
If there is evidence that the project managers are worse at estimating costs than their colleagues, then higher uplifts should be used.
The methodology described above for systematic, practical reference class forecasting for transportation projects was developed in 2003-2004, with publication by the Department for Transport in August 2004. From this date, local authorities applying for funding for transportation projects with the Department for Transport or with HM Treasury were required to take optimism bias into account by using uplifts as described above and as laid out in more detail in guidelines from the two ministries.
Forecasting Costs for the Edinburgh Tram
In October 2004, the first instance of practical use of the uplifts was recorded, in the planning of the Edinburgh Tram Line 2.
By framing the forecasting problem to allow the use of the empirical distributional information made available by the UK Department for Transport, Ove Arup was able to take an outside view on the Edinburgh Tram Line 2 capital cost forecast and thus debias what appeared to be a biased forecast.
As a result, Ove Arup's client, the Scottish Parliament, was provided with a more reliable estimate of what the true costs of Line 2 were likely to be.
Potentials and Barriers for Reference Class Forecasting
Reference class forecasting may help mitigate any type of human bias, including strategic bias.
Two types of explanation best account for forecasting inaccuracy, optimism bias and strategic misrepresentation.
- Where optimism bias is the main cause of inaccuracy, we may assume that managers and forecasters are making honest mistakes and have an interest in improving accuracy.
- Where strategic misrepresentation is the main cause of inaccuracy, differences between estimated and actual costs and benefits are best explained by political and organizational pressures.
In order to lower barriers, and thus create room for reference class forecasting, measures of accountability must be implemented that would reward accurate forecasts and punish inaccurate ones.
- Forecasters and promoters should be made to carry the full risks of their forecasts.
- Their work should be reviewed by independent bodies such as national auditors or independent analysts, and such bodies would need reference class forecasting to do their work.
- Projects with inflated benefit-cost ratios should be stopped or placed on hold.
- Professional and even criminal penalties should be considered for people who consistently produce misleading forecasts.
The higher the stakes, and the higher the level of political and organizational pressures, the more pronounced will be the need for such measures of accountability.
It could be argued that in some cases the use of reference class forecasting may result in such large reserves set aside for a project that this would in itself lead to risks of inefficiencies and overspending.
- Reserves will be spent simply because they are there.
This makes it important to combine the introduction of reference class forecasting and optimism bias uplifts with tight contracts and with maintained incentives for promoters to undertake good quantified risk assessment and to exercise prudent cost control during project implementation.
How this may be done is described in Flyvbjerg and Cowi (2004).