The Failure of Risk Management. Douglas W. Hubbard

…methods, especially when those methods are meant to help direct and protect major investments and inform key public policy. The application of healthy skepticism to risk management methods was long past due when I wrote the first edition of this book more than a decade ago.

       Deepwater Horizon offshore oil spill (2010)

       Fukushima Daiichi nuclear disaster (2011)

       Flint, Michigan, water system contamination (starting 2014)

       Samsung Galaxy Note 7 battery failures (2016)

       Multiple large data breaches (Equifax, Anthem, Target, etc.)

       Amtrak derailments/collisions (2018)

      Events such as these, and other natural, geopolitical, technological, and financial disasters early in the twenty-first century, periodically accelerate (if perhaps only temporarily) interest in risk management among the public, businesses, and lawmakers. That interest continues to spur the development of a variety of risk management methods.

      The methods organizations use to assess risk vary greatly. Some of these methods, applied to risks of all sorts and sizes, are recent additions in the history of risk management and are growing in popularity. Some are well established and highly regarded. Some take a very soft, qualitative approach; others are rigorously quantitative. If some of these methods are better than others, and if some are fundamentally flawed, we should want to know which.

      Actually, there is very convincing evidence about the relative effectiveness of different methods, and this evidence is not just anecdotal. As we will see in this book, it is based on detailed measurements in large controlled experiments. Some points about what works are even based on mathematical proofs. All of this will be reviewed in detail but, for now, I will skip ahead to the conclusion. Unfortunately, it is not good news.

      I will make the case that most of the widely used methods are not based on any proven theory of risk analysis, and there is no real scientific evidence that they result in a measurable improvement in decisions to manage risks. Where scientific data do exist, they show that many of these methods fail to account for known sources of error in the analysis of risk or, worse yet, add errors of their own.

      A more typical circumstance, however, is that the risk management method itself has no performance measures at all, even in the most diligent, metrics-oriented organizations. This widespread inability to make the sometimes-difficult differentiation between methods that work and methods that don't work means that ineffectual methods are likely to spread. Once certain methods are adopted, institutional inertia cements them in place with the assistance of standards and vendors that refer to them as “best practices.” Sometimes they are even codified into law. Like a dangerous virus with a long incubation period, methods are passed from company to company with no early indicators of ill effects until it's too late.

      The consequences of flawed but widely adopted methods are inevitably severe for organizations making critical decisions. Decisions regarding not only the financial security of a business but also the entire economy and even human lives are supported in large part by our assessment and management of risks. The reader may already start to see the answer to the first question at the beginning of this chapter, “What is your biggest risk?”

      The year 2017 was remarkable for safety in commercial air travel: there was not a single fatality worldwide from a commercial passenger jet accident. Air travel had already been the safest form of travel for decades. Even so, luck had some part to play in the 2017 record, and that luck would not last. That same year, a new variation of the Boeing 737 MAX series passenger aircraft was introduced: the 737 MAX 8. Within twelve months of the initial rollout, well over one hundred MAX 8s were in service.

      In 2018 and 2019, two crashes of the MAX 8, totaling 346 fatalities, showed that a particular category of failure was still very possible in air travel. Although the details of the two 737 crashes were still emerging as this book was written, they appear to be an example of a common mode failure; in other words, the two crashes may be linked to the same cause. The term is familiar in systems risk analysis in some areas of engineering, where several failures can share a common cause. It is like a weak link in a chain, except that the weak link is part of multiple chains.

      I had an indirect connection to another common mode failure in air travel thirty years before this book came out. In July 1989, I was the commander of the Army Reserve unit in Sioux City, Iowa. It was the first day of our two-week annual training, and I had already left for Fort McCoy, Wisconsin, with a small group of support staff. The convoy with the rest of the unit was going to leave that afternoon, about five hours behind us. But just before the main body was ready to leave, the rest of my unit was deployed for a major local emergency.

      United Airlines Flight 232 to Philadelphia was being diverted to the small Sioux City airport because of serious mechanical difficulties. It crashed, killing 111 passengers and crew. Fortunately, the large number of emergency workers available and the heroic airmanship of the crew made it possible to save 185 of those onboard. Most of my unit spent the first day of our annual training collecting the dead from the tarmac and the nearby cornfields.

      During the flight, the DC-10's tail-mounted engine failed catastrophically, its fast-spinning fan disk breaking apart and sending fragments out like shrapnel in all directions. The debris cut the lines to all three redundant hydraulic systems, making the aircraft nearly uncontrollable. Although the crew was able to guide the aircraft toward the airport by varying the thrust of the two remaining wing-mounted engines, the lack of tail control made a normal landing impossible.

      Now consider that the fatigue crack in the DC-10's fan disk would have been detected except for what the National Transportation Safety Board (NTSB) called “inadequate consideration given to human factors” in the inspection process. Is human error more likely than one in a billion? Absolutely. And human error in large, complex software systems like those used on the 737 MAX 8 is almost inevitable…
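
      A little arithmetic shows where a figure like one in a billion comes from, and why a common cause overwhelms it. The following is a minimal sketch in Python with purely illustrative probabilities (assumptions of mine, not figures from the DC-10's designers or the NTSB): if each of three redundant hydraulic systems failed independently with a chance of one in a thousand per flight, all three would fail together only about once in a billion flights. A single common cause that can disable all three at once, even a rare one, dominates that number.

          # A minimal sketch of why common mode failures dominate redundant designs.
          # All probabilities here are illustrative assumptions, not DC-10 figures.

          p_single = 1e-3   # assumed chance that one hydraulic system fails on a flight
          p_common = 1e-5   # assumed chance of one cause (e.g., shrapnel) disabling all three

          # If the three systems only ever fail independently, total loss of
          # hydraulics requires three separate failures on the same flight:
          p_independent = p_single ** 3            # 1e-9, i.e., one in a billion

          # A common mode failure bypasses the redundancy entirely. Total loss is
          # then roughly the common-cause probability plus the tiny independent term:
          p_total = p_common + (1 - p_common) * p_independent

          print(f"independent failures only: {p_independent:.1e}")   # 1.0e-09
          print(f"with a common mode:        {p_total:.1e}")         # ~1.0e-05

      In this sketch the common cause makes total failure about ten thousand times more likely than the independence assumption implies. That is why the question of whether human error is more likely than one in a billion is not rhetorical: a single flawed inspection process, like a single flawed piece of flight software, is exactly the kind of weak link shared by every chain.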