The Failure of Risk Management. Douglas W. Hubbard

       Aggregation of estimates: In many cases several experts will be consulted, and their estimates will be aggregated in some way. We should consider the research on the relative performance of different expert-aggregation methods.

       Behavioral research into qualitative scales: If we rely on various scoring or classification methods (e.g., a scale of 1 to 5 or high/medium/low), we should consider the results of empirical research on how these scales are actually used and how much their arbitrary features affect the assessments they produce.

       Decomposition: We can look into research on how estimates improve when we break a problem into pieces and assess the uncertainty about each piece separately (a minimal sketch follows this list).

       Errors in quantitative models: If we are using more quantitative models and computer simulations, we should be aware of the most common known errors in such models. We also need to check whether the sources of data in the model are based on methods with proven track records of making realistic forecasts.
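      To make the decomposition point concrete, here is a minimal sketch, not taken from the book's own models: an uncertain annual loss is broken into three pieces, each expressed as a 90 percent confidence interval, and the uncertainty is propagated with a simple Monte Carlo simulation. All variable names and interval values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # number of Monte Carlo trials


def lognormal_from_90ci(lower, upper, size):
    """Sample a lognormal whose 5th/95th percentiles match a 90% CI
    (a common convenience for strictly positive quantities)."""
    mu = (np.log(lower) + np.log(upper)) / 2
    sigma = (np.log(upper) - np.log(lower)) / (2 * 1.645)
    return rng.lognormal(mu, sigma, size)


# Decompose "annual outage loss" into pieces, each with its own 90% CI
# (all interval values below are made up for illustration):
outages_per_year = lognormal_from_90ci(2, 10, N)
hours_per_outage = lognormal_from_90ci(1, 8, N)
cost_per_hour = lognormal_from_90ci(5_000, 50_000, N)

annual_loss = outages_per_year * hours_per_outage * cost_per_hour

print(f"Mean annual loss: ${annual_loss.mean():,.0f}")
print(f"90% interval: ${np.percentile(annual_loss, 5):,.0f} "
      f"to ${np.percentile(annual_loss, 95):,.0f}")
print(f"P(loss > $1M): {np.mean(annual_loss > 1_000_000):.1%}")
```

      The point of the sketch is only that decomposed, quantified pieces can be recombined without losing the uncertainty in each one, which is exactly what the component research on decomposition examines.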

      Formal Errors

      Outright math errors should be the most obvious disqualifiers of a method, and we will find them in some cases. This isn't just a matter of making simplifying assumptions or using shortcut rules of thumb. Those can be useful as long as there is at least empirical evidence that they help. But where we deviate from the math, empirical evidence becomes even more important. This is especially true when the deviation offers no gain in simplicity over a perfectly valid mathematical solution, because simplicity is usually the main argument for taking a shortcut in the first place.

      In some cases, it can be shown that mathematically irregular methods may actually lead to dangerously misguided decisions. For example, we shouldn't be adding and multiplying ordinal scales, as is done in many risk assessment methods. Later, we will show some formal analysis of how such procedures lead to misguided conclusions.
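      As a small, purely illustrative preview of that problem (the scores, probabilities, and losses below are invented), multiplying ordinal likelihood and impact scores can rank risks in the opposite order from the expected losses they are supposed to represent:

```python
# Two hypothetical risks scored on a typical 1-5 ordinal risk matrix.
# The probability and loss figures are invented for illustration only.
risks = [
    # (name, likelihood score, impact score, annual probability, loss in $)
    ("Frequent small loss", 5, 2, 0.90, 100_000),
    ("Rare catastrophic loss", 1, 5, 0.02, 50_000_000),
]

print(f"{'Risk':<25}{'Ordinal score':>15}{'Expected loss':>18}")
for name, l_score, i_score, prob, loss in risks:
    ordinal = l_score * i_score      # what many risk matrices compute
    expected_loss = prob * loss      # what the arithmetic actually supports
    print(f"{name:<25}{ordinal:>15}{expected_loss:>18,.0f}")

# The ordinal product ranks the frequent small loss (score 10) above the
# rare catastrophic loss (score 5), even though the latter's expected loss
# is about eleven times larger. Multiplying ordinal labels has no defensible
# mathematical meaning.
```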

      A Check of Completeness

      Even if we use the best methods, we can't apply them to a risk if we don't even think to identify it as a risk. If a firm thinks of risk management as “enterprise risk management,” then it ought to be considering all the major risks of the enterprise—not just legal, not just investment portfolio, not just product liability, not just worker safety, not just business continuity, not just security, and so on. This criterion is not, however, the same as saying that risk management can succeed only if all possible risks are identified. Even the most prudent organization will miss risks that nobody could conceivably have considered.

      The surveys previously mentioned and many “formal methodologies” developed detailed taxonomies of risks to consider, and each taxonomy is different from the others. But completeness in risk management is a matter of degree. The use of a detailed taxonomy is helpful, but it is no guarantee that relevant risks will be identified.

      More important, risks should not be excluded simply because different parts of the organization speak about them in completely different languages. For example, cyber risk, financial portfolio risk, safety risk, and project risk do not need to be described in fundamentally different terms. If project risks are 42, cyber risks are yellow, safety risks are moderate, portfolio risks have a Sharpe ratio of 1.1, and there is a 5 percent chance a new product will fail to break even, what is the total risk? These areas can and should use the same types of metrics so that risks across the enterprise can be considered comprehensively.
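      One way to picture that common language, as a hedged sketch with entirely hypothetical figures: restate every risk, whatever its domain, as an annual probability of an event and a loss in currency if it occurs. Once the risks are in those terms, they can be compared and totaled.

```python
# Each risk, whatever its domain, restated in the same two terms:
# annual probability of the event and the loss in dollars if it occurs.
# All figures are hypothetical; a fuller treatment would use loss
# distributions rather than single-point losses.
risks = {
    "cyber breach": {"prob": 0.10, "loss": 4_000_000},
    "safety incident": {"prob": 0.05, "loss": 1_500_000},
    "project overrun": {"prob": 0.30, "loss": 800_000},
    "portfolio drawdown": {"prob": 0.15, "loss": 2_500_000},
}

print(f"{'Risk':<20}{'Prob.':>8}{'Loss':>14}{'Expected loss':>16}")
for name, r in risks.items():
    expected = r["prob"] * r["loss"]
    print(f"{name:<20}{r['prob']:>8.0%}{r['loss']:>14,}{expected:>16,.0f}")

total = sum(r["prob"] * r["loss"] for r in risks.values())
print(f"\nTotal expected annual loss across the enterprise: ${total:,.0f}")
```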

      A risk manager should always assume that the list of considered risks, no matter how extensive, is incomplete. All we can do is increase completeness by continually assessing risks from several angles and comparing them with a common set of metrics. In part 3, we will discuss some angles to consider when developing a taxonomy, in the hope that it might help the reader think of previously excluded risks.

      Answering the Right Question

      The first and simplest test of a risk management method is whether it answers the relevant question, “Where and how much do we reduce risk, and at what cost?” A method that answers this explicitly and specifically passes the test. A method that leaves the question open does not, and many will not pass.

      Relevant risk management should be based on risk assessment that ultimately follows through to explicit recommendations on decisions. Should an organization spend $2 million to reduce its second-largest risk by half, or spend the same amount to eliminate three risks that aren't among the top five? Ideally, risk mitigation can be evaluated as a kind of “return on mitigation” so that mitigation strategies of different costs can be prioritized explicitly. Merely knowing that some risks are high and others are low is not as useful as knowing that one mitigation has a 230 percent return on investment (ROI) while another has only a 5 percent ROI, or whether the total risk is within our risk tolerance.
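      A minimal sketch of what a “return on mitigation” calculation can look like, using one common convention (the benefit is the reduction in expected annual loss) and figures invented to reproduce the 230 percent and 5 percent examples above:

```python
# Hypothetical mitigation options. The benefit of each is the reduction in
# expected annual loss it buys; the "return on mitigation" compares that
# benefit to the cost, on the analogy of an ROI calculation.
mitigations = [
    # (name, cost, expected loss before, expected loss after)
    ("Upgrade backup site", 2_000_000, 9_000_000, 2_400_000),
    ("Extra code review", 500_000, 1_200_000, 675_000),
]

for name, cost, before, after in mitigations:
    risk_reduction = before - after
    roi = (risk_reduction - cost) / cost
    print(f"{name}: risk reduced by ${risk_reduction:,.0f}, ROI = {roi:.0%}")
```

      Run as written, the first option returns 230 percent and the second 5 percent, which is the kind of explicit comparison the question above demands.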

      We will spend some time on several of the previously mentioned methods of assessing performance, but we will spend a greater share of our time on component testing. This is partly because there is so much research on the performance of various components, such as methods of improving subjective estimates, quantitative methods, simulations, the aggregation of expert opinion, and more.

      Even if risk managers apply only component testing to their risk management process, many are likely to find serious shortcomings in their current approach. Many components of popular risk management methods have no evidence that they work, and some show clear evidence of adding error. Other components, though not widely used, can be shown to produce convincing improvements over the alternatives.
