What Caused the Gulf Oil Spill?

Failed leadership and decision-making biases, just like every other manmade catastrophe

Published 08-19-10

Submitted by Behavioral Science Technology (BST)

The following article was written by Thomas R. Krause, co-founder and Chairman of the Board of BST, a leading safety performance consulting firm based in Ojai, California.

A failure of leadership led to the April 20, 2010 explosion of the Deepwater Horizon oil rig, killing 11 workers and releasing millions of barrels of oil into the Gulf of Mexico. Specifically, senior executive leaders at BP and its contractors Halliburton and Transocean failed to have the vision, will and culture of safety necessary to assure adequate levels of operational safety and reliability. In addition, leadership in the federal government failed to execute proper oversight and assure adequate emergency response planning.

Both private sector and government leaders failed to understand that:

1) Capabilities in human sciences are lagging behind technical advances;
2) Leadership, organizational culture formation and behavioral reliability are essential to consistently safe operations;
3) Behavioral reliability must be managed;
4) Leading indicators of safe or unsafe operations must be tracked and acted upon; and
5) Organizations should understand how employee safety and process safety are related, to avoid relying on low employee injury rates as a critical indicator of overall operational safety.

The investigations are ongoing, so how do we already know that these are the causes? Because these factors, and the failure of leadership to manage them, have been overwhelmingly present as causes in virtually every manmade catastrophe of the past 50 years.

If we know what the causes are, why do such accidents happen and why are they likely to continue (unless there is a dramatic change in leadership mindset and performance)?

The explosion, like virtually all catastrophic events, was a highly unlikely outcome. Decision makers at all levels know that redundant safety systems are in place, and that allowing one or even several of these systems to be compromised still leaves the odds in favor of a safe outcome. That is, until a unique set of variables aligns perfectly, creating a perfect storm.

What Went Wrong?

The failures described above can be traced to the unseen effects of cognitive bias - the human tendency to base decisions on mental shortcuts such as recent experience, personal preconceptions and rules of thumb, instead of on clear-headed analysis of the data on risk.

More than a dozen specific types of cognitive bias have been identified and researched. These include biases related to sunk costs, failure to see the baseline, recent vs. long-term experience, and confirmation bias (placing undue weight on data that support the outcome one expects). To make matters worse, these biases operate, for the most part, below the level of awareness.

Cognitive shortcuts simplify decision-making and have the benefit of being correct most of the time. "Most of the time" is good enough for most day-to-day decisions, but these shortcuts are reliable only in the sense that they produce predictable errors. The effective management of high-hazard technology requires very high levels of reliability in both decision-making and behavior.

The "boiled frog syndrome" illustrates the effects of cognitive bias. The adage goes that if a frog is placed in boiling water it will jump out, but if it is placed in water that gradually warms, it will be killed. (This is admittedly a metaphor, not a scientific fact.)

The fact is that as many as 50,000 oil wells have been drilled in the Gulf of Mexico over the past 40 years. In any given year, as many as 4,000 oil and gas platforms are operating in the Gulf, according to the National Oceanic and Atmospheric Administration. Given that there had been no serious explosions in the Gulf since the 1979 Ixtoc I blowout off the coast of Mexico, and much operational and financial success, it is not hard to see how the frog got boiled - the recency bias in action.

In low-risk situations, recency bias is an acceptable substitute for knowledge - knowledge that it is a bad idea to ignore procedures that an organization has worked so hard to develop, test, train people on and enforce. But where the risks are high-consequence in nature, that substitution opens the door to potential catastrophe.

NASA had a similar experience with its Space Shuttle program. For years, NASA engineers knew of the potential for damage to the shuttle's thermal insulating tiles during launch, and they classified it as a risk that would stop a launch. But NASA gradually became accustomed to stretching its compliance with this standard, and the shuttle flew successfully multiple times. Gradually, it became acceptable for the organization to operate outside its own rules - with tragic results, of course, for the seven astronauts who perished when Columbia broke apart during re-entry on February 1, 2003.

Overcoming Cognitive Bias

The simple answer to dealing with these issues is training, which is readily available and worthwhile for increasing leadership's awareness of cognitive biases and their consequences. The harder part is addressing the climate and culture of an organization, which may or may not welcome the examination of biases.

In the case of NASA, our assessment after the Columbia tragedy found that many of its team meetings were not conducive to open and productive communication. Often, disagreements were "unsafe" in that someone would win (and be celebrated) and someone would lose (and be marginalized), even when everyone was raising valid points. Subsequently, NASA's leadership took on the task of creating a more open environment that encourages communication and values dialogue about disagreement. Progress was measured via surveys and observations, and positive results were evident in the first year.

The BP catastrophe illustrates the need for the most senior leaders in both government and industry to adopt a new vision for safety - a new blueprint. It should integrate process safety and employee safety within a culture of safety, and there must be a concerted effort to keep human capabilities and technical advances synchronized. It should encourage open discussion about the frequency, probability and severity of potential incidents. Where there are disconnects, the biases and assumptions behind them must be closely examined and resolved, so that "most of the time" and "good enough" do not figure into decision-making for high-consequence situations.

To counter cognitive bias, oil drilling should proceed only when leading indicator data meet pre-determined levels, monitored both within the organization and by outside regulators. Other industries with the potential for low-frequency, high-consequence events should adopt a similar stance. Before they are required to do so. Before cognitive bias causes yet another catastrophe.
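By way of illustration only - the indicator names, thresholds and function names below are hypothetical, not BST's or any regulator's actual criteria - such a gate might look like the following minimal sketch: the operation is cleared only when every tracked leading indicator meets its pre-determined level, and any shortfall is reported rather than argued away.

    # Minimal sketch of a leading-indicator gate for a high-consequence operation.
    # The indicator names and threshold values are hypothetical illustrations,
    # not BST's or any regulator's actual criteria.
    from dataclasses import dataclass

    @dataclass
    class Indicator:
        name: str
        value: float              # most recent measured value
        level: float              # pre-determined level that must be met
        higher_is_better: bool = True

        def is_met(self) -> bool:
            # A "higher is better" indicator must reach its level;
            # otherwise it must stay at or below it.
            return self.value >= self.level if self.higher_is_better else self.value <= self.level

    def clear_to_operate(indicators: list[Indicator]) -> tuple[bool, list[str]]:
        """Permit the operation only if every leading indicator meets its level.
        Returns the decision plus any shortfalls, so a 'no' is explained
        rather than argued away."""
        shortfalls = [i.name for i in indicators if not i.is_met()]
        return (len(shortfalls) == 0, shortfalls)

    # Hypothetical example: two indicators pass, one does not, so the well is not cleared.
    indicators = [
        Indicator("near-miss reports closed out on time (%)", 97.0, 95.0),
        Indicator("safety-critical maintenance backlog (items)", 12, 10, higher_is_better=False),
        Indicator("well-control drills completed this quarter", 4, 3),
    ]
    cleared, gaps = clear_to_operate(indicators)
    print("Clear to operate:", cleared, "| shortfalls:", gaps)

In this sketch the maintenance-backlog indicator misses its threshold, so the answer is "not cleared" along with the reason - the point being that the pre-determined rule, not recent good fortune, makes the call.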

About the author:

Thomas R. Krause, Ph.D., is Chairman of the Board of BST, a leading safety performance consulting firm based in Ojai, California, whose clients include several major oil companies, NASA, hundreds of manufacturers worldwide, and patient safety-focused health care organizations. For more information, contact Dr. Krause at tom.krause@bstsolutions.com, or visit www.bstsolutions.com.
