Title: Trust in Computer Systems and the Cloud
Author: Mike Bursell
Publisher: John Wiley & Sons Limited
Genre: Foreign computer literature
ISBN: 9781119692317
To return to the more general anti-authority, pro-individualist movement, the problem with trusting only in oneself is that it makes it almost impossible to build systems and processes involving other people in ways that allow for any useful cooperation or economies of scale or scope. Authorities of some type do end up being important to our larger set of requirements, and even movements that aim to reduce the number of trust relationships to as few as possible generally recognise the need for authorities in some guise or another. A good example of this is oracles, a concept within the field of blockchain that accepts the need to trust information from certain sources. Equally, standards—whether formal or de facto—are typically vital in allowing individual entities to work together, two classic historical examples being the regularisation of time across the United Kingdom with the rise of the railway and the standardisation of the systems of measurement that allowed government, commerce, and science to collaborate with less friction and confusion (the canonical example of this within the science community is the loss of the Mars Climate Orbiter in 1999, due to a lack of standardisation on a particular measurement—metric or imperial units34—but the problem has been around for much longer than this35). We can expect that as we delve deeper into considerations of trust, we will need to consider which authorities we need to establish trust relationships with, as well as the question of endorsement: one of the most troubling concerns around existing discussions of trust is how often such relationships are created with little or no consideration, and sometimes just assumed, leaving implicit relationships that, as they are not stated, cannot be critically examined.
Trusting Individuals
Having spent some time considering the questions associated with trusting institutions of various types, we need to look at issues around trusting individuals. In what may seem like a strange move, we are going to start by asking whether we can even trust ourselves.
Trusting Ourselves
William Gibson's novel Virtual Light36 includes a bar that goes by the name of Cognitive Dissidents. I noticed this a few months ago when I was reading the book in bed, and it seemed apposite because I wanted to write a blog article about cognitive bias and cognitive dissonance (on which the bar's name is a play on words) as a form of cognitive bias. What is more, the fact that the bar's name had struck me so forcibly was, in fact, exactly that: a form of cognitive bias. In this case, it is an example of the frequency illusion, or the Baader-Meinhof effect, where a name or word that has recently come to your attention suddenly seems to be everywhere you turn. Like such words, cognitive biases are everywhere: there are more of them than we might expect, and they are more dangerous than we might initially realise.
The problem is that we think of ourselves as rational beings, but it is clear from decades, and in some cases centuries, of research that we are anything but. Humans are very likely to tell themselves that they are rational, and it is such a common fallacy that there is even a specific cognitive bias that describes how we believe it: the illusion of validity. One definition of cognitive biases found in Wikipedia37 is “systematic patterns of deviation from norm or rationality in judgment”. We could offer a simpler one: “our brains managing to think things that seem sensible but are not”.
Wikipedia provides many examples of cognitive bias, and they may seem irrelevant to our quest. However, as we consider risk—which, as discussed in Chapter 1, is what we are trying to manage when we build trust relationships to other entities—there needs to be a better understanding of our own cognitive biases and those of the people around us. We like to believe that we and they make decisions and recommendations rationally, but the study of cognitive bias provides ample evidence that:
We generally do not.
Even if we do, we should not expect those to whom we present them to consider them entirely rationally.
There are opportunities for abuse here. There are techniques beloved of advertisers and the media for manipulating our thinking to their ends, and we could use the same techniques to try to manipulate others to our own advantage. One example is the framing effect. If you do not want your management to fund a new anti-virus product because you have other ideas for the currently earmarked funding, you might say:
“Our current product is 80% effective!”
Whereas if you do want them to fund it, you might say:
“Our current product is 20% ineffective!”
People generally react in different ways depending on how the same information is presented, and the way each of these two statements is framed aims to manipulate your listeners to the outcome you have in mind. We should be aware of—and avoid—such tricks, as such framing techniques can subvert conversations and decision-making around risk.
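To see how mechanical this trick is, the short Python sketch below renders a single detection figure in both framings; the 0.8 detection rate and the wording are illustrative assumptions, not a claim about any real product.

# A minimal sketch of the framing effect: one underlying measurement,
# two presentations chosen to push the listener towards different conclusions.
# The 0.8 detection rate is an invented figure, not a real product claim.

detection_rate = 0.8  # fraction of test samples the current product catches

positive_frame = f"Our current product is {detection_rate:.0%} effective!"
negative_frame = f"Our current product is {1 - detection_rate:.0%} ineffective!"

# Both statements are derived from exactly the same number.
print(positive_frame)  # Our current product is 80% effective!
print(negative_frame)  # Our current product is 20% ineffective!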
Three further examples of cognitive bias serve to show how risk calculations may be manipulated, either by presentation or just by changing the thought processes of those making calculations:
Irrational Escalation, or the Fallacy of Sunk Costs This is the tendency for people to keep throwing money or resources at a project, vendor, or product when it is clear that it is no longer worth it, with the rationale that to stop spending money (or resources) now would waste what has already been spent—despite the fact that those resources have already been consumed. This often comes over as misplaced pride or people not wishing to let go of a pet project because they have become attached to it, but it is really dangerous for security. If something clearly is not effective, it should be thrown out, rather than good money being thrown after bad; a simple forward-looking comparison that ignores sunk costs is sketched after this list.
Normalcy Bias This is the refusal to address a risk because the event associated with it has never happened before. It is an interesting one when considering security and risk, for the simple reason that so many products and vendors are predicated on exactly that: protecting organisations from events that have so far not occurred. The appropriate response is to perform a thorough risk analysis and then put measures in place to deal with those risks that are truly high priority, not those that may not happen or that do not seem likely at first glance; a likelihood-times-impact ranking of the sort sketched after this list is one simple way to do this.
Observer-Expectancy Effect This is when people who are looking for a particular result find it because they have (consciously or unconsciously) misused the data. It is common in situations such as those where there is a belief that a particular attack or threat is likely, and the data available (log files, for instance) are used in a way that confirms this expectation; a sketch contrasting that biased reading of logs with a more neutral tally of the same data also follows this list.
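Taking the sunk-cost fallacy first, here is a minimal Python sketch of the forward-looking comparison mentioned above; the two options and all of the cost and benefit figures are invented for illustration.

# A sketch of a decision made free of sunk costs: compare options only on
# what they will cost and deliver from today onwards. All figures are invented.

options = {
    "keep current product": {"future_cost": 50_000, "expected_benefit": 30_000},
    "replace with new product": {"future_cost": 80_000, "expected_benefit": 120_000},
}

sunk_cost = 200_000  # already spent on the current product; gone either way

def net_future_value(option):
    return option["expected_benefit"] - option["future_cost"]

best = max(options, key=lambda name: net_future_value(options[name]))
print(f"Best forward-looking choice: {best}")

# Note that sunk_cost never enters the calculation: adding it to both options
# would shift every total by the same amount and leave the ranking unchanged.

The point of the sketch is simply that the money already spent appears nowhere in the comparison, because it is gone whichever option is chosen.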
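For normalcy bias, the sketch below uses the common likelihood-times-impact formulation of expected loss; the events, probabilities, and impact figures are invented, and are chosen only to show that an event which has never yet happened can still rank as the top priority.

# A sketch of likelihood x impact risk ranking. The events, probabilities,
# and impact figures are invented for illustration only.

risks = [
    # (event, annual likelihood, impact in currency units, seen here before?)
    ("laptop theft",        0.30,    20_000, True),
    ("ransomware outbreak", 0.05, 2_000_000, False),
    ("website defacement",  0.20,    50_000, True),
]

def expected_annual_loss(likelihood, impact):
    return likelihood * impact

ranked = sorted(risks, key=lambda r: expected_annual_loss(r[1], r[2]), reverse=True)

for event, likelihood, impact, seen_before in ranked:
    loss = expected_annual_loss(likelihood, impact)
    print(f"{event:20s} expected loss {loss:>12,.0f}  previously seen: {seen_before}")

# The ransomware outbreak has never happened to this (imaginary) organisation,
# yet it tops the list; normalcy bias would push it to the bottom.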
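Finally, for the observer-expectancy effect, this sketch contrasts a log query that only looks for evidence of a suspected brute-force attack with a neutral tally of all event types; the log lines and the suspicion itself are, again, invented.

# A sketch of the observer-expectancy effect applied to log analysis.
# The log entries and the suspected attack are invented for illustration.

from collections import Counter

log_lines = [
    "auth failure for user alice",
    "auth failure for user bob",
    "disk quota exceeded on /var",
    "auth failure for user alice",
    "service crashed: billing-api",
    "disk quota exceeded on /var",
    "disk quota exceeded on /var",
]

# Biased approach: we "know" we are under a brute-force attack, so we count
# only the lines that support that belief and stop looking.
supporting_evidence = [line for line in log_lines if "auth failure" in line]
print(f"Biased view: {len(supporting_evidence)} auth failures, attack confirmed?")

# More neutral approach: tally every event type before drawing any conclusion.
def categorise(line):
    if "auth failure" in line:
        return "auth failure"
    if "disk quota" in line:
        return "disk quota"
    return "service crash"

event_counts = Counter(categorise(line) for line in log_lines)
print("Neutral view:", dict(event_counts))

# Here the disk problems are just as prominent as the failures we went
# looking for, which the biased query never shows us.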