Title: Trust in Computer Systems and the Cloud
Author: Mike Bursell
Publisher: John Wiley & Sons Limited
Genre: Foreign computer literature
ISBN: 9781119692317
Given the pervasiveness of cognitive biases—further study of the field is definitely worthwhile for anyone involved with security, risk, or trust—it should come as no surprise that there are examples particularly relevant to trust. Amos Tversky and Daniel Kahneman38 identify a multitude of different biases to which humans are prone when looking at statistical data, of which two are particularly relevant:
Misconceptions of Regression: Regression to the mean suggests that if a particular sample is above the mean, the next sample is likely to fall closer to it (and, if below the mean, to rise back towards it). This can lead to the misconception that punishing a bad outcome is more effective than rewarding a good one, because punishment tends to be followed by a (supposedly causal) improvement, while reward tends to be followed by a (supposedly causal) deterioration. The failure to understand this effect tends to lead people to “overestimate the effectiveness of punishment and to underestimate the effectiveness of reward”. This feels like a particularly relevant piece of knowledge to take into a series of games like the Prisoner's Dilemma, as one is most likely to be rewarded for punishing others, yet most likely to be punished for rewarding them.39 More generally, in a system where trust is important and needs to be encouraged, trying to avoid this bias may be a core goal in the design of the system.
Biases in the Evaluation of Conjunctive and Disjunctive Events: People are bad at realising that when an outcome depends on many events all going right (a conjunction), the overall chance of failure can be high even though each individual failure is improbable. This is important to us when we consider chains of trust, which we will examine in Chapter 3, “Trust Operations and Alternatives”: even where every link in a chain enjoys a high probability of trustworthiness, the chance of ending up with a broken chain may in reality still be fairly high.
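To make this concrete, the following minimal sketch in Python (the 98% per-link figure and the chain lengths are illustrative assumptions, not numbers from the text) shows how quickly the probability that an entire chain of trust holds falls away, even though every individual link looks highly trustworthy:

    # Probability that a whole chain of trust holds, assuming each link is
    # independently trustworthy with the same probability. All numbers are
    # illustrative assumptions.

    def chain_holds(per_link_probability: float, links: int) -> float:
        """Probability that every link in the chain is trustworthy."""
        return per_link_probability ** links

    per_link = 0.98  # each link looks very trustworthy in isolation
    for links in (1, 5, 10, 20, 50):
        p = chain_holds(per_link, links)
        print(f"{links:2d} links: holds with p = {p:.2f}, broken with p = {1 - p:.2f}")

With 98% trustworthy links, a 20-link chain is already broken roughly a third of the time, and a 50-link chain is more likely to be broken than not: exactly the kind of result this bias leads us to underestimate.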
How relevant is this to us? For any trust relationships where humans are involved as the trustors, it is immediately clear that we need to be careful. There are multiple ways in which the trustor may misunderstand what is really going on or just be fooled by their cognitive biases. We have talked several times in this chapter about humans' continued problems with making rational choices in, for instance, game-theoretical situations. The same goes for economic or purchasing decisions and a wide variety of other spheres. An understanding of—or at least an awareness of—cognitive biases can go a long way towards helping humans make more rational decisions. Sadly, while many of us involved with computing, IT security, and related fields would like to think of ourselves as fully rational and immune to cognitive biases, the truth is that we are as prone to them as all other humans, as noted in our examples of normalcy bias and observer-expectancy effect. We need to remember that when we consider the systems we are designing to act as trustors and trustees, our unconscious biases are bound to come into play—a typical example being that we tend to assume that a system that we design will be more secure than a system that someone else designs.
Another point is that when we are evaluating trust relationships, we need to realise that we will always have incomplete information. This should encourage us to look for further information where possible but also to be aware that our brains may not even be allowing us to make correct—that is, rational—use of the information we have, or may be creating or falsifying information that we think we do have.
Trying to apply our definition of trust to ourselves is probably a step too far, as we are likely to find ourselves delving into questions of the conscious, subconscious, and unconscious, which are not only hotly contested after well over a century of study in the West, and over several millennia in the East, but are also outside the scope of this book. However, all of the preceding points are excellent reasons for being as explicit as possible about the definition and management of trust relationships and using our definition to specify all of the entities, assurances, contexts, etc. Even if we cannot be sure exactly how our brain is acting, the very act of definition may help us to consider what cognitive biases are at play; and the act of having others review the definition may uncover further biases, allowing for a stronger—more rational—definition. In other words, the act of observing our own thoughts with as much impartiality as possible allows us, over time, to lessen the power of our cognitive biases, though particularly strong biases may require more direct approaches to remedy or even recognise.
Trusting Others
Having considered the vexing question of whether we can trust ourselves, we should now turn our attention to trusting others. In this context, we are still talking about humans rather than institutions or computers, though we will go on to apply these lessons to computers and systems. What is more, as we noted when discussing cognitive bias, our assumptions about others—and the systems they build—will have an impact on how we design and operate systems involved with trust. Given the huge corpus of literature in this area, we will not attempt to go over much of it, but it is worth considering whether there are any points we have come across already that may be useful to us, or any related work that might cause us to sit back and look at our specific set of interests in a different light.
The first point to bear in mind when thinking about trusting others, of course, is all that we have learned from the discussions of cognitive bias in the previous section. In other words, other human entities are just as prone to cognitive bias as we are, and just as unaware of it. Whenever we consider a trust relationship to another human, or consider a trust relationship that someone else has defined or designed—a relationship, for instance, that we are reviewing for them or another entity—we have to realise not only that they may be acting irrationally but also that they are likely to believe that they are acting rationally, even given evidence to the contrary.40
Stepping away from the complexity of cognitive bias, what other issues should we examine when we consider whether we can trust other humans? We looked briefly, at the beginning of this chapter, at some of the definitions preferred in the literature around trust between humans, and it is clear both that there is too much to review here and that much of it will not be relevant. Nevertheless, it is worth considering—as we have with regard to cognitive bias—whether any particular concerns are worthy of examination.
We noted, when looking at the Prisoner's Dilemma, that some strategies are more likely to yield positive results than others. Axelrod's work noted that increasing opportunities for cooperation can improve outcomes, but given that the Prisoner's Dilemma sets out as one of its conditions that communication is not allowed, such cooperation must be tacit. Since we are considering a wider set of interactions, there is no need for us to adopt this condition (and some of the literature that we have already reviewed seems to follow this direction), and it is worth being aware of work that specifically considers the impact when various parties are allowed to communicate.
One such study by Morton Deutsch and Robert M. Krauss41 looked at differences in bargaining when partners can communicate with each other or not (or unilaterally) and when they can threaten each other or not (or unilaterally). Their conclusions, brutally relevant during the Cold War period in which they were writing, were that bilateral positions of threat—where both partners could threaten the other—were “most dangerous” and that the ability to communicate made less difference than expected. This may lead to an extrapolation to non-human systems that is extremely important: that it is possible to build—hopefully unwittingly—positive feedback loops into automated systems that can lead to very negative consequences. Probably the most famous fictional example of this is the game Global Thermonuclear War played in the film WarGames,42 where an artificial intelligence connected to the US nuclear arsenal nearly starts World War III.
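As a purely illustrative sketch (the agents, the escalation factor, and the threat levels below are assumptions made for the purpose of the example, not anything drawn from Deutsch and Krauss or the film), two automated systems that each respond to the other's last action with a slightly stronger one form exactly this kind of positive feedback loop:

    # Two automated agents, each programmed to answer a perceived threat with a
    # slightly larger counter-threat. The escalation factor and starting levels
    # are arbitrary illustrative assumptions.

    def respond(perceived_threat: float, escalation_factor: float = 1.2) -> float:
        """Reply to a perceived threat with a slightly larger one."""
        return perceived_threat * escalation_factor

    threat_a = threat_b = 1.0
    for step in range(1, 11):
        threat_a = respond(threat_b)  # A reacts to B's last move
        threat_b = respond(threat_a)  # B reacts to A's new move
        print(f"step {step:2d}: A = {threat_a:8.2f}, B = {threat_b:8.2f}")

Neither agent does anything other than follow its simple rule, yet the threat levels escalate without bound; the lesson for system designers is that such loops need to be identified and broken deliberately rather than trusted to stay benign.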
Schneier talks about the impact that moral systems may have on cooperation between