The Failure of Risk Management. Douglas W. Hubbard

Risk managers will often argue that the standards I suggest for evaluating risk management are unfair, and they will still insist that their risk management program was a success. When asked for specifics about the evidence of that success, I find they will produce an interesting array of defenses of the methods they currently use. However, among these defenses will be quite a few things that do not constitute evidence that a particular method is working. I have reason to believe that these defenses are common, not only because I've heard them frequently but also because many were cited as benefits of risk management in the surveys by Aon, The Economist, and Protiviti.

       When asked, the managers will say that the other stakeholders involved in the process will claim that the effort was a success. They may even have conducted a formal internal survey. But, as the previous studies show, self-assessments are not reliable. Furthermore, without an independent, objective measure of risk management, the perception of any success may merely be a kind of placebo effect. That is, they might feel better about their situation just by virtue of the fact that they perceive they are doing something about it.

       The proponents of the method will point out that the method was “structured.” There are a lot of structured methods that are proven not to work. (Astrology, for example, is structured.)

       Often, a “change in culture” is cited as a key benefit of risk management. This, by itself, is not an objective of risk management—even though some of the risk management surveys show that risk managers considered it to be one of the main benefits of the risk management effort. But surely the type of change matters. What is the change worth if the new culture doesn't actually lead to reduced risks or measurably better decisions?

       The proponents will argue that the method “helped to build consensus.” This is a curiously common response, as if the consensus itself were the goal and not actually better analysis and management of risks. An exercise that builds consensus to go down a completely disastrous path probably ensures only that the organization goes down the wrong path even faster.

       The proponents will claim that the underlying theory is mathematically proven. I find that most of the time, when this claim is used, the person claiming this cannot actually produce or explain the mathematical proof, nor can the person he or she heard it from. In many cases, it appears to be something passed on without question. Even if the method is based on a widely recognized theory, such as options theory (for which the creators were awarded the Nobel Prize in 1997) or modern portfolio theory (the Nobel Prize in 1990), it is very common for mathematically sound methods to be misapplied. (And those famous methods themselves have some important shortcomings that all risk managers should know about.)

       The vendor of the method will claim that the mere fact that other organizations bought it, and resorted to one or more of the preceding arguments, is proof that it worked. I call this the testimonial proof. But if the previous users of the method evaluated it using criteria no better than those previously listed, then the testimonial is not evidence of effectiveness.

       The final and most desperate defense is the claim, “But at least we are doing something.” I'm amazed at how often I hear this, as if it were irrelevant whether the “something” makes things better or worse. Imagine a patient complains of an earache and a doctor, unable to solve the problem, begins to saw off the patient's foot. “At least I am doing something,” the doctor says in defense.

      With some exceptions (e.g., insurance and some areas of financial management), risk management is not an evolved profession with standardized certification requirements and methods originally developed with rigorous scientific testing or mathematical proofs. So we can't be certain that everyone answering the surveys identified in chapter 2 is really using a valid standard to rate his or her success. But even if risk managers had some uniform type of professional quality assurance, surveys of risk managers would still not be a valid measure of risk management effectiveness. That would be like measuring the effectiveness of aspirin by a survey of family practice doctors instead of a clinical trial. What we need are objective measures of the success of risk management.

      Recall from chapter 1 that risk can be measured by the probability of an event and its severity. If we get to watch an event over a long period of time, then we can say something about how frequently it occurs and the range of its possible impacts. If a large retailer is trying to reduce the risk of loss due to shoplifting (an event that may occur more than a hundred times per month per store), then one inventory before the improved security efforts and another a month after would suffice to detect a change. But a risk manager isn't usually concerned with very high-frequency, low-cost events such as shoplifting.
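      To see why a single month is enough for an event this frequent, consider a quick simulation. The sketch below is illustrative only and not from the book: the roughly one hundred thefts per store per month comes from the example above, while the 30 percent reduction from improved security is an assumed figure chosen just to make the point.

```python
# A minimal sketch (not from the book) of detecting a change in a
# high-frequency event. The ~100 thefts per store per month is the figure
# from the text; the assumed 30% reduction is purely illustrative.
import numpy as np

rng = np.random.default_rng(1)

baseline_rate = 100   # thefts per store per month (from the text)
improved_rate = 70    # hypothetical rate after better security (assumed)

before = rng.poisson(baseline_rate)   # one month observed before the change
after = rng.poisson(improved_rate)    # one month observed after the change

# Poisson sampling noise is roughly sqrt(rate), about 10 thefts per month,
# so a drop of ~30 stands out even in a single before-and-after comparison.
print(f"before: {before}, after: {after}, observed drop: {before - after}")
```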

      In a retailer such as Target or Walmart, theft should be so common that it becomes more of a fully anticipated cost than a risk. Similarly, the “risks” of running out of 60W incandescent bulbs or mislabeling the price on a single item are, rightly, not usually the sorts of risks foremost in the minds of risk managers. The biggest risks tend to be those things that are rarer but potentially disastrous—perhaps even events that have not yet occurred in the organization.

      If it is a rare event (such as many of the more serious risks organizations would hope to model), then we need a very long period of time to observe how frequent and how costly the event really is—assuming we can even survive long enough to observe enough occurrences. Suppose, for example, the retailer's IT department undertakes a major initiative to make point-of-sale and inventory management systems more reliable. If the chance of these systems being down for an hour or more were reduced from 10 percent per year to 5 percent per year, how would they know just by looking at the first year? And if they did happen to observe one event with an estimated cost of $5 million, how would they use that single data point to estimate the range of possible losses?
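      Contrast that with the shoplifting case: a quick calculation shows just how little one year of history reveals about a rare event. The sketch below is illustrative and not from the book; the 10 percent and 5 percent annual outage probabilities are the hypothetical figures from the example above, and everything else is assumed.

```python
# A minimal sketch (not from the book) of why one year of data cannot show
# whether the outage risk really fell from 10% to 5% per year.
from math import comb

def prob_at_most(k: int, years: int, p_annual: float) -> float:
    """Binomial CDF: P(at most k outage-years in `years` independent years)."""
    return sum(comb(years, i) * p_annual**i * (1 - p_annual)**(years - i)
               for i in range(k + 1))

for years in (1, 10, 50):
    p_none_old = prob_at_most(0, years, 0.10)  # no outage under the old risk
    p_none_new = prob_at_most(0, years, 0.05)  # no outage under the reduced risk
    print(f"{years:>3} years: P(no outage | 10%/yr) = {p_none_old:.2f}  "
          f"P(no outage | 5%/yr) = {p_none_new:.2f}")

# After 1 year: 0.90 vs 0.95 -- the two histories look almost identical.
# Even after 10 years: 0.35 vs 0.60. Only over decades do they clearly separate.
```

      In other words, the organization's own history of a rare event is far too small a sample to tell whether the risk actually changed, which is why some other form of evidence is needed.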

       More objective evidence that a risk management method actually works can come from several sources:

       The big experiment

       Direct evidence of cause and effect

       Component testing

       Formal errors

       A check of completeness

       Answering the right question

      The Big Experiment

      The most convincing way—and the hardest way—to measure the effectiveness of risk management is with a large-scale experiment over a long period tracking dozens or hundreds of organizations. This is still time-consuming—for example, waiting for the risk event to occur in your own organization—but it has the advantage of looking at a larger population of firms in a formal study. If risk management is supposed to, for example, reduce the risk of events that are so rare that actual results alone would be …