In the year that has passed since the Deepwater Horizon oil rig exploded and sank to the ocean floor last April, the Gulf of Mexico oil spill has generated a flurry of divergent opinions about risk and risk management. Much of the debate has been dominated by two views. One emphasizes the nature of the spill itself: the disaster was an unforeseeable “black swan” event. The other emphasizes the nature of the protagonists: BP and the Minerals Management Service were uniquely incompetent “black sheep” organizations. These views are very different, but they share a common implication: the Gulf oil spill was unique. Both views, however, are incorrect. A third perspective, one that sees neither the risk nor the risk management in this case as unique, deserves consideration. This view is both more constructive and more discomfiting. What we have is neither a rare black swan nor a rare black sheep, but a common seagull covered in crude oil. And as such, we have a valuable lesson in risk management.
Good risk management begins with good risk assessment; that is, understanding the likelihood and magnitude of uncertain events. Unfortunately, current risk management practice has remarkable difficulty assessing unusual events. By unusual, we mean events that occur infrequently, even when looking broadly across time and location. This limitation is unfortunate because unusual, but potentially significant, events are precisely the kind that require the most attention. They are also where appropriate attention can produce the greatest benefit.
The discipline is currently dominated by the view that risk can only be formally assessed — meaning, quantified — if there is a large amount of repetitive, historical data behind it. This results in a clear dichotomy. Where there is sufficient data, risks are quantified. Where there is insufficient data, risks are treated qualitatively. Typically, this means that everyday risks are quantified and unusual risks are not. This dichotomy is illustrated very clearly in financial risk management. Everyday risks are quantified extensively using statistical approaches such as value at risk (VaR). Unusual risks are addressed qualitatively with heuristics such as stress testing.
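To make the everyday side of this dichotomy concrete, here is a minimal sketch of value at risk computed by historical simulation. The return series is synthetic and all numbers are hypothetical; the point is only that this style of quantification presumes exactly the kind of large, repetitive data set that unusual events lack.

```python
import numpy as np

# Hypothetical daily portfolio returns standing in for a long,
# repetitive historical record -- the situation where VaR works well.
rng = np.random.default_rng(42)
returns = rng.normal(loc=0.0005, scale=0.01, size=1000)  # 1,000 trading days

def historical_var(returns, confidence=0.95):
    """One-day value at risk via historical simulation: the loss
    threshold exceeded on only (1 - confidence) of past days."""
    return -np.percentile(returns, 100 * (1 - confidence))

var_95 = historical_var(returns, 0.95)
print(f"95% one-day VaR: {var_95:.4f} (fraction of portfolio value)")
```

With only a handful of relevant observations, the percentile in this calculation is meaningless, which is why unusual risks get pushed into the qualitative bucket.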
How is this difficulty revealed in the Gulf oil spill? Significant oil spills are fortunately not everyday occurrences, but they certainly are not unprecedented. Most lists of oil spills over the past several decades include dozens or more. Most are considerably smaller in size than the Gulf oil spill, many are comparable in size, and some are even larger. Perhaps most notable is the 1979 Ixtoc well blowout in the Gulf of Mexico that resulted in an oil spill not dissimilar in size to the recent event. Based on this history, the Gulf oil spill appears to fit our working definition of an unusual event very well. It is not a black swan, an event so rare that, as the man who popularized the term, author Nassim Nicholas Taleb, wrote “nothing in the past can convincingly point to its possibility.” On the other hand, it is not an everyday event with a wealth of relevant historical data.
Looking back, how was the risk of this unusual event assessed? The Minerals Management Service (MMS) has a system for dealing with oil spills, including formal methods for estimating their likelihood and impact. However, these methods are built on repetitive historical data and are largely limited to common incidents. We find no evidence that MMS quantified the risk of a very large spill from the Macondo well, and MMS regulations do not require such quantification.
Instead, when it comes to such unusual events, MMS relies on qualitative thinking. It requires developers to specify a worst-case scenario and provide detailed instructions on how to respond. Consistent with standard practice, it does not require that this scenario be associated with a probability. Based on its worst-case submission, BP was aware of the possibility of a spill of tens of thousands of barrels a day or more. But we are unable to find evidence that BP formally assigned probabilities to spills of this size as part of the oversight process.
Dealing with Unusual Events
Some have argued that protagonists in the Gulf oil spill underestimated the risk. The view here is not so much that they underestimated it, but that they adopted standard practice in dealing with unusual events and did not really estimate it at all.
The downside of this qualitative approach to unusual events is considerable. First, with only a qualitative assessment, risk communication is significantly impaired. When we are talking about a worst-case event, do we mean 1/100, 1/10,000 or even 1/1,000,000? Without quantification, confusion and misunderstanding can result. Furthermore, it is difficult to evaluate different preparations and responses effectively. The appropriate preparation for a 1/100 worst case, for example, may be quite different from the appropriate response to a 1/1,000,000 worst case — a distinction that is lost when the assessment is qualitative. While the intent of this qualitative worst-case assessment is to minimize the chance of a disaster, it can have the opposite effect.
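The preparation-scaling point can be made with simple arithmetic. In this toy illustration (all figures hypothetical), the same worst-case impact justifies very different levels of spending on mitigation depending on its probability, a distinction a purely qualitative "worst case" cannot express.

```python
# Hypothetical worst-case impact of a disaster, in dollars.
worst_case_impact = 20e9  # e.g. a $20B spill

# The same impact at three candidate probabilities yields very
# different expected losses per period.
for p in (1 / 100, 1 / 10_000, 1 / 1_000_000):
    expected_loss = p * worst_case_impact
    print(f"P = {p:.0e}: expected loss ${expected_loss:,.0f} per period")

# A $50M/yr mitigation is easily justified at 1/100 (expected loss
# $200M) but plainly not at 1/1,000,000 (expected loss $20,000).
```

Without a probability attached, decision-makers cannot tell which of these regimes they are in.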
To address this shortcoming, we recommend taking a different, Bayesian view. With a Bayesian view, risks are assessed by combining available data with expert judgment, which means that a statistically significant data set is not necessarily required. This view centers on combining data and judgment in a formal, quantitative probability and impact assessment. It is quite common in strategic and operational settings and has penetrated risk management in spots, yet it remains far from standard practice. It certainly has broad application for the discipline, however, something we can see by looking at the hypothetical likelihood and magnitude of an oil import disruption, another unusual but not unprecedented oil-related event of great importance.
Think of all the possible individual risk factors that could contribute to the possibility of an oil disruption. Some, such as oil production expectations, are data rich. Other risk factors, such as Middle East conflict, are dominated by judgment. Throw in unknowns like drilling shortfalls, Somali piracy, pipeline ruptures and individual oil company operating issues, among many others, and you have a far-reaching minefield of possible perils, each with its own correlations and dependencies on the others.
The Bayesian view requires analyzing the likelihood and magnitude that each of these factors presents, drawing on qualitative or quantitative inputs depending on what data and expert knowledge are available. Such a rigorous risk assessment builds on the available historical data and available judgment to produce a quantitative assessment of an important unusual event.
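A minimal sketch of how judgment and sparse data combine in this way uses a conjugate Beta-Binomial model. All numbers here are hypothetical: the prior encodes an expert's roughly 1-in-1,000 per well-year judgment of a major blowout, and a small, non-repetitive data set then updates it.

```python
# Expert judgment as a Beta prior on the annual probability of a
# major blowout at a deepwater well (hypothetical: ~1 in 1,000).
prior_alpha, prior_beta = 1, 999

# Sparse historical data (hypothetical exposure history).
observed_blowouts = 2
observed_well_years = 5000

# Conjugate Beta-Binomial update: successes add to alpha,
# failures add to beta.
post_alpha = prior_alpha + observed_blowouts
post_beta = prior_beta + (observed_well_years - observed_blowouts)

posterior_mean = post_alpha / (post_alpha + post_beta)
print(f"Posterior annual blowout probability: {posterior_mean:.2e}")
```

No statistically significant data set is required: the posterior blends the judgment and the handful of observations, and either input can dominate depending on how much data exists.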
Matching Responses to Impacts
Risk management is only as good as the individual preparations and responses that are adopted because of it. Unfortunately, current practice does not devote sufficient time to elucidating the risks fully, uncovering the breadth of potential impacts and crafting responses to match those impacts. Instead, current practice relies heavily on off-the-shelf alternatives for addressing everyday risks and is ill-suited to mitigating unusual events.
Off-the-shelf preparations and responses are widely available and generally effective for many of the everyday risks organizations face today. For financial risks, there are popular market hedges. For hazard risks, there are common insurance policies. For operational risks, there are accepted compliance guidelines. Even for strategic risks, there are industry norms.
In current risk management practice, much of the effort is devoted simply to assigning these off-the-shelf alternatives to risks based on what is acceptable, prudent or required. In contrast, there is no systematic process for fully identifying potential impacts and crafting alternatives for those specific impacts, and many risk management alternatives are missed as a result.
How is this off-the-shelf orientation revealed in the Gulf oil spill? The Macondo well undoubtedly had individual characteristics, but BP's projected impacts and planned responses were largely off-the-shelf. The Gulf of Mexico oil spill response plans for all the major firms were similar — or identical — in many respects and applied broadly to all wells in the region. As events unfolded, responses had to be tailored to the unique aspects of the spill on the fly. “Each of the oil companies’ oil-spill response plans are practically identical to the tragically flawed BP oil-spill response plan,” stated House Energy and Commerce Committee Reps. Henry Waxman (D-CA), Ed Markey (D-MA) and Bart Stupak (D-MI) in a letter to the heads of ExxonMobil, ConocoPhillips, Shell and Chevron last June. “No oil company appears to be better prepared for a disastrous oil spill than BP was.”
In many circumstances, a standardized approach to risk management may be fine. Everyday risks that have common features across time and location often call for such off-the-shelf alternatives. For example, most of us would find an off-the-shelf insurance policy acceptable or even preferable. For unusual risks, however, this one-size-fits-all philosophy is likely to lead to problems and missed opportunities. The downside can be excessive cost, excessive risk or both.
In dealing with unusual risks, we recommend taking a different, decision analysis approach to generating and evaluating customized alternatives to match the impacts in question. With this approach, rigor is applied not just to the assessment aspect of risk management but to the actual management aspect — the decisions that are actually made to prepare for or respond to risk.
Like the Bayesian approach, decision analysis has become quite common in strategic and operational circles. George Kirkland, vice chairman of Chevron, which recently won the Decision Analysis Society Practice Award, has said that “decision analysis is a part of how Chevron does business for a simple, but powerful, reason: it works.” Like the Bayesian approach, however, it has not penetrated corporate risk management particularly well.
Another oil-related risk example, tanker safety, shows how decision analysis can be applied. The first step is generating risk management alternatives based on their ability to achieve the specific objectives of stakeholders. If the objective is improving the safety culture aboard the ship, for instance, the operators can start by looking at the impediments. Perhaps record-keeping is poor regarding whether the root cause of an injury is a human factor or a less-than-ideal workplace condition, so the process for identifying safety concerns stands out as lacking. Perhaps there is poor safety-related feedback from employees and little priority placed on safety by supervisors.
If the risk managers create a decision tree incorporating all these issues and how they constrain the objective of improving the safety culture aboard the tanker, the ways that they all interconnect will become clearer. And once it is determined which of the issues (identification, feedback, prioritization, etc.) are the biggest contributing factors to the cultural deficiency, the company will be in a better place to make decisions that will help reduce its safety-related risks.
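A stripped-down version of that evaluation can be expressed as an expected-cost comparison across customized alternatives. The structure and every number below are hypothetical; a real decision tree would also model how the issues interconnect.

```python
# Hypothetical alternatives, each targeting one contributor to the
# safety-culture deficiency: upfront cost ($M) and the judged annual
# probability of a costly incident if that alternative is adopted.
alternatives = {
    "improve incident record-keeping": {"cost": 0.2, "p_incident": 0.04},
    "structured employee feedback":    {"cost": 0.5, "p_incident": 0.03},
    "supervisor safety incentives":    {"cost": 1.0, "p_incident": 0.025},
    "do nothing":                      {"cost": 0.0, "p_incident": 0.08},
}
incident_loss = 50.0  # judged loss if an incident occurs ($M)

def expected_total_cost(option):
    # Fold the chance node (incident / no incident) back into a
    # single expected cost for the decision node.
    return option["cost"] + option["p_incident"] * incident_loss

for name, opt in alternatives.items():
    print(f"{name}: expected total cost ${expected_total_cost(opt):.2f}M")

best = min(alternatives, key=lambda k: expected_total_cost(alternatives[k]))
print("Preferred alternative:", best)
```

Even this toy tree makes the trade-off explicit: the cheapest intervention is not necessarily the one with the lowest expected total cost once incident probabilities are folded in.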
Improving Systematic Learning
Like many management activities, risk management takes place in cycles where increased understanding, better performance and higher quality can result over time. Unfortunately, there is little systematic learning. By systematic learning, we mean a structured process for explicitly comparing what was anticipated to what was realized, and correcting both assessments and actions as a result.
What does the Gulf oil spill reveal about this lack of systematic learning? BP has been cited over decades as a “learning organization” and even established a joint program with MIT in 2008 focusing on operations safety “designed to enhance the culture of continuous improvement at BP.” Last fall, BP’s chief executive officer, Tony Hayward, even launched a three-day Operations Academy Executive Program, designed to educate top management on how to lead BP to operations excellence in an enterprise that employs more than 96,000 people in over 100 countries. BP has also experienced its share of oil spills, refinery fires and other unfortunate events that could be seen as learning opportunities.
But here, as we have seen in many organizations, there seems to be a disconnect between the enthusiasm for learning and the actual improvement in key aspects of risk management. While the concept of learning seems particularly relevant when it comes to risk assessment and risk management, the great bulk of the literature in this field has very little to say on the topic and no concrete recommendations on how to improve, refine and adapt risk assessment and management over time.
The downside is that the quality of risk management does not improve significantly over time. Each cycle of risk assessment and management starts largely from square one. Risk assessments can remain uncalibrated, and overestimates or underestimates of risks remain uncorrected. Mistakes in risk responses can also be repeated, with little focus on what did or did not work in the past. Many risk managers and senior executives are concerned not just that risk management stagnates, but that it can actually decline over time as a result of fatigue. The net result is that risk management is considerably less efficient and effective than it could be.
The keys to improving learning in risk management are better defining what we mean by quality and emphasizing the importance of systematic learning. Fortunately, there are signs of emerging interest in the application of learning in risk management.
The Fifth Discipline: The Art and Practice of the Learning Organization, MIT lecturer Peter Senge’s seminal book on the learning organization, includes specific examples of firms learning from disasters. His work supports the idea that risk management can incorporate systematic learning by providing personnel with the appropriate resources — particularly training — and by adopting appropriate incentives to encourage learning behavior over time.
A water supply firm in Australia provides a good example. One of its key concerns is the impact of atmospheric carbon dioxide on water supplies. The company represents the quality of the assessment of this risk formally with a confidence interval. In the 1960s, the quality of this assessment was quite poor. There was considerable uncertainty over whether rainfall would increase, decrease or remain constant due to carbon dioxide. With careful observation and analysis of rainfall data over time, however, the quality of this assessment improved. By 2000, the confidence interval was reduced by 80%.
This is systematic learning at work. It is accomplished not just by adding new data, but by formally comparing what was anticipated to what was realized, and using the difference to refine underlying assessments and models.
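The interval-narrowing pattern behind the water firm's example can be sketched with a standard conjugate-normal update. All numbers here are invented for illustration: a wide initial judgment about rainfall change is compared against realized observations each cycle, and the 95% interval shrinks as the assessment is refined.

```python
import math

# Hypothetical initial judgment: mean rainfall change and a wide
# uncertainty, plus the noise variance of each year's observation.
prior_mean, prior_var = 0.0, 100.0
obs_var = 25.0

mean, var = prior_mean, prior_var
observations = [3.1, 2.4, 4.0, 2.8, 3.5, 3.0, 2.6, 3.3]  # realized data

initial_width = 2 * 1.96 * math.sqrt(var)
for y in observations:
    # Conjugate-normal update: precisions add, means blend in
    # proportion to their precisions.
    precision = 1 / var + 1 / obs_var
    mean = (mean / var + y / obs_var) / precision
    var = 1 / precision

final_width = 2 * 1.96 * math.sqrt(var)
print(f"95% interval width: {initial_width:.1f} -> {final_width:.1f}")
```

Each pass through the loop is one learning cycle: anticipated versus realized, with the difference feeding back into the assessment rather than being discarded.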
Applying the Lessons
Black swan, black sheep or seagull covered in crude oil? The third view has revealed three important lessons for anyone involved in risk management.
First, the assessment of unusual events — such as the Gulf oil spill — can and should be made more rigorous. Purely qualitative treatment should no longer be the only option.
Second, more formal attention should be devoted to developing and evaluating customized preparations and responses. Emphasizing only off-the-shelf alternatives means that many options are not being considered.
Lastly, systematic learning should be incorporated to ensure that risk management improves over time. At this point, mere complacency should be considered tantamount to regression.
Current risk management practices could be significantly improved through a more widespread incorporation of all three of these concepts. With such improvements, both the probabilities and impacts of future disasters could be reduced.
It is impossible to tell for sure, but it is certainly plausible that the tragic events that happened in the Gulf last April would have transpired differently if the protagonists had included such efforts in their risk management process. Careful comparison of past surprises and past risk assessments, in particular, could have shown that the level of understanding of oil spill risks and impacts was imperfect.