Title: The Failure of Risk Management
Author: Douglas W. Hubbard
Publisher: John Wiley & Sons Limited
Genre: Securities, investments
ISBN: 9781119522041
Behavioral research into qualitative scales: If we rely on various scoring or classification methods (e.g., a scale of 1 to 5 or high/medium/low), we should consider the results of empirical research on how these methods are actually used and how much arbitrary features of the scales affect how they are used.
Decomposition: We can look into research about how estimates can be improved by the way we break a problem into pieces and how we assess uncertainty about those pieces (a brief sketch of this idea follows this list).
Errors in quantitative models: If we are using more quantitative models and computer simulations, we should be aware of the most common known errors in such models. We also need to check whether the sources of the data in the model are based on methods with proven track records of making realistic forecasts.
If we are using models such as AHP, MAUT, or similar systems of decision analysis for the assessment of risk, they should meet the same standard of a measurable track record of reliable predictions. We should also be aware of some of the known mathematical flaws introduced by certain methods that periodically cause nonsensical results.
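As a minimal sketch of the decomposition idea above, consider breaking an uncertain project cost into pieces, putting a subjective 90 percent confidence interval on each piece, and combining the pieces with a simple Monte Carlo simulation. The variable names, dollar intervals, and the use of a normal distribution are illustrative assumptions, not a method prescribed by any particular standard.

import random

# Hypothetical decomposition of an uncertain project cost into three
# pieces, each given a subjective 90% confidence interval (low, high).
pieces_90ci = {
    "labor_cost":    (50_000, 120_000),
    "hardware_cost": (20_000,  60_000),
    "downtime_cost": (10_000, 200_000),
}

Z_90 = 1.645  # a 90% interval spans about +/-1.645 standard deviations

def sample_piece(low, high):
    """Draw one value from a normal matching the stated 90% interval.

    A normal is a crude assumption; cost models often use lognormal.
    """
    mean = (low + high) / 2
    stdev = (high - low) / (2 * Z_90)
    return random.gauss(mean, stdev)

# Monte Carlo: combine the pieces into a distribution of total cost.
trials = 10_000
totals = sorted(
    sum(sample_piece(lo, hi) for lo, hi in pieces_90ci.values())
    for _ in range(trials)
)

print(f"median total cost: {totals[trials // 2]:,.0f}")
print(f"simulated 90% interval: ({totals[int(trials * 0.05)]:,.0f}, "
      f"{totals[int(trials * 0.95)]:,.0f})")

The point of decomposing is that estimators are often better at judging the individual pieces than the aggregate, and the simulation recombines the pieces without losing the stated uncertainty.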
Formal Errors
Outright math errors should be the most obvious disqualifiers of a method, and we will find them in some cases. This isn't just a matter of making simplifying assumptions or using shortcut rules of thumb. Those can be useful as long as there is at least empirical evidence that they are helpful. But where we deviate from the math, empirical evidence is even more important. This is especially true when deviations from known mathematics provide no benefit in simplicity over perfectly valid mathematical solutions, even though simplicity is usually the main argument for taking mathematical shortcuts in the first place.
In some cases, it can be shown that mathematically irregular methods may actually lead to dangerously misguided decisions. For example, we shouldn't be adding and multiplying ordinal scales, as is done in many risk assessment methods. Later we will show some formal analysis of how such procedures lead to misguided conclusions.
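To make the ordinal-scale problem concrete, here is a sketch, with made-up bucket definitions, of how multiplying likelihood and impact scores can equate wildly different risks. The score-to-value mappings below are assumptions for illustration, not taken from any published method.

# Hypothetical mappings from 1-5 scores to representative values.
likelihood_value = {1: 0.01, 2: 0.05, 3: 0.15, 4: 0.40, 5: 0.90}
impact_value = {1: 50_000, 2: 250_000, 3: 1_000_000,
                4: 10_000_000, 5: 100_000_000}

risks = {
    "frequent small outage":   (5, 1),  # likelihood 5, impact 1
    "rare catastrophic event": (1, 5),  # likelihood 1, impact 5
}

for name, (l_score, i_score) in risks.items():
    ordinal_product = l_score * i_score
    expected_loss = likelihood_value[l_score] * impact_value[i_score]
    print(f"{name}: ordinal score {ordinal_product}, "
          f"expected loss ${expected_loss:,.0f}")

Both risks tie at an ordinal score of 5, yet under these assumed bucket values their expected losses are $45,000 versus $1,000,000. Arithmetic on the ordinal labels erases a factor-of-22 difference, which is one way such scores can misdirect mitigation priorities.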
A Check of Completeness
Even if we use the best methods, we can't apply them to a risk if we don't even think to identify it as a risk. If a firm thinks of risk management as “enterprise risk management,” then it ought to be considering all the major risks of the enterprise—not just legal, not just investment portfolio, not just product liability, not just worker safety, not just business continuity, not just security, and so on. This criterion is not, however, the same as saying that risk management can succeed only if all possible risks are identified. Even the most prudent organization will exclude risks that nobody could conceivably have considered.
But there are widely known risks that are excluded from some risk management for no other reason than an accident of organizational scope or background of the risk manager. If the scope of risk management in the firm has evolved in such a way that it considers risk only from a legal or a security point of view, then it is systematically ignoring many significant risks. A risk that is not even on the radar can't be managed at all.
The previously mentioned surveys and many “formal methodologies” have developed detailed taxonomies of risks to consider, and each taxonomy differs from the others. But completeness in risk management is a matter of degree. The use of a detailed taxonomy is helpful, but it is no guarantee that relevant risks will be identified.
More important, risks should not be excluded simply because different parts of the organization speak about them in completely different languages. For example, cyber risk, financial portfolio risk, safety risk, and project risk do not need to use fundamentally different languages when discussing risk. If project risks are 42, cyber risks are yellow, safety risks are moderate, portfolio risks have a Sharpe Ratio of 1.1, and there is a 5 percent chance a new product will fail to break even, what is the total risk? They can and should use the same types of metrics so that risks across the enterprise can be considered comprehensively.
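One common metric that works across all of these examples is expected loss: restate each risk as a probability of a loss event and a monetary impact if it occurs. The figures below are hypothetical placeholders, used only to show that once risks share units, an enterprise-wide total becomes meaningful.

risks = [
    # (name, annual probability of loss event, loss if it occurs, $)
    ("cyber breach",                    0.10,  4_000_000),
    ("project overrun",                 0.30,  2_000_000),
    ("safety incident",                 0.02, 15_000_000),
    ("new product misses break-even",   0.05,  8_000_000),
]

total = 0.0
for name, prob, loss in risks:
    expected_loss = prob * loss
    total += expected_loss
    print(f"{name:<32} expected annual loss: ${expected_loss:>12,.0f}")

print(f"{'TOTAL':<32} expected annual loss: ${total:>12,.0f}")

A fuller treatment would carry entire loss distributions rather than single expected values, but even this crude version answers a question that “42 + yellow + moderate” cannot.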
A risk manager should always assume that the list of considered risks, no matter how extensive, is incomplete. All we can do is increase completeness by continually assessing risks from several angles and comparing them with a common set of metrics. In Part 3, we will discuss some angles to consider when developing a taxonomy, in the hope that it might help the reader think of previously excluded risks.
Answering the Right Question
The first and simplest test of a risk management method is determining whether it answers the relevant question: “Where and how much do we reduce risk, and at what cost?” A method that answers this question explicitly and specifically passes the test. A method that leaves the question open does not pass, and many will not.
For example, simply providing a list of a firm's top ten risks or classifying risks as high, medium, or low doesn't close the loop. Certainly, this is a necessary and early step of any risk management method. I have sometimes heard that such a method is useful if only because it helps start the conversation. Yes, that may be useful, but if it stops there, the heavy lifting is still left undone. Consider an architectural firm that provides a list of important features of a new building, such as “large boardroom” and “nice open entryway with a fountain,” and then walks away without producing detailed plans, much less actually constructing the building. Such a list is a starting point, but it falls far short of a usable plan, much less detailed blueprints or a finished building.
Relevant risk management should be based on risk assessment that ultimately follows through to explicit recommendations on decisions. Should an organization spend $2 million to reduce its second-largest risk by half, or spend the same amount to eliminate three risks that aren't among the top five biggest risks? Ideally, risk mitigation can be evaluated as a kind of “return on mitigation” so that mitigation strategies of different costs can be prioritized explicitly. Merely knowing that some risks are high and others are low is not as useful as knowing that one mitigation has a 230 percent return on investment (ROI) and another has only a 5 percent ROI, or knowing whether the total risks are within our risk tolerance.
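A minimal sketch of that “return on mitigation” calculation, with the dollar figures chosen only so that the arithmetic reproduces the 230 percent and 5 percent ROIs mentioned above:

mitigations = [
    # (name, cost $, expected loss before $, expected loss after $)
    ("halve the second-largest risk", 2_000_000, 13_200_000, 6_600_000),
    ("eliminate three smaller risks", 2_000_000,  2_100_000,         0),
]

for name, cost, before, after in mitigations:
    benefit = before - after          # reduction in expected loss
    roi = (benefit - cost) / cost     # simple return on mitigation
    print(f"{name}: benefit ${benefit:,.0f}, ROI {roi:.0%}")

With a common formula like this, mitigation options of different costs and effects can be ranked on one scale instead of argued about in qualitative terms.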
WHAT WE MAY FIND
We will spend some time on several of the previously mentioned methods of assessing performance, but we will spend a greater share of our time on component testing. This is due, in part, to the fact that there is so much research on the performance of various components, such as methods for improving subjective estimates, the performance of quantitative methods, the use of simulations, the aggregation of expert opinion, and more.
Still, even if risk managers use only component testing in their risk management process, many are likely to find serious shortcomings in their current approach. Many of the components of popular risk management methods have no evidence of whether they work, and some components have shown clear evidence of adding error. Still other components, though not widely used, can be shown to produce convincing improvements compared to the alternatives.
Lacking real evidence of effectiveness, some practitioners will employ some of the previously mentioned defenses. We will address at least some of those arguments in subsequent chapters, and we will show how some of those same arguments could have been used to make the case for the “validity” of astrology, numerology, or crystal healing. When managers can begin to differentiate the astrology from the astronomy, then they can begin to adopt methods that work.