Title: Trust in Computer Systems and the Cloud
Author: Mike Bursell
Publisher: John Wiley & Sons Limited
Genre: Foreign computer literature
ISBN: 9781119692317
One difference that we will encounter when we start examining trust relationships to computer systems, however, is that the opportunities for direct sensory monitoring of actions are likely to be more limited than in human-to-human trust relationships. Human actions are often readily apparent to whoever is monitoring them, but the same is not true of many actions performed by computers. If I request via a web browser that a banking application transfer funds between one account and another, the only visible effect I am likely to see is an acknowledgement on the screen. Until I get to the point of trying to spend or withdraw that money,8 I realistically have no way to be assured that the transaction has taken place. It is as if I have a trust relationship with somebody around the corner of a street, out of view, that they will raise a red flag at the stroke of noon; and I have a friend standing on the corner who will watch the person and tell me when and if they raise the flag. I may be happy with this arrangement, but only because I have a trust relationship to the friend: that friend is acting as a trusted channel for information.
The word friend was chosen carefully because a trust relationship is already implicit in the set of interactions that we usually associate with someone described as a friend. The same is not true for the word somebody, which I used to denote the person who was to raise the flag. The situation as described is likely to make our minds presume that there is a fairly high probability that the trust relationship I have to the friend is sufficient to assure me that they will pass the information correctly. But what if my friend standing on the corner is actually a business partner of the flag-waver? Given our human understanding of the trust relationships typically involved with business partnerships, we may immediately begin to assume that my friend's motivations with respect to correct reporting are not neutral.
The example of the flag was chosen as a simple one: it is binary (the action is either performed or not performed), and the activity was not imbued with any further significance. If, however, we imbue our example with some value—say, a signal that money should be transferred into my bank account from the bank account of the flag-waver and their business partner—it is clear that a lot more is at stake. If, for example, my friend chooses to favour the business relationship over our friendship, my friend might collude with the flag-waver and tell me that they raised the red flag even when they did not. Or my friend might tell me the flag-waver has not raised the flag when they have, colluding with a third party with the intention of defrauding both me and the third party by somehow accessing the funds in my bank account that I do not believe to have been transferred.
The channels for reporting on actions—that is, monitoring them—are vitally important within trust relationships. It is both easy and dangerous to fall into the trap of assuming that these channels are neutral and that the only trust relationship that matters is the one between me and the acting party. In reality, the trust relationship that I have to the set of channels is key to maintaining the trust relationship that I have to the main actor—who or what we could call the primary trustee. In trust relationships involving computer systems, there are often multiple entities or components involved in actions, and these form a chain of trust in which each link depends on the others: the chain is typically only as strong as the weakest of its links.
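To make the weakest-link point concrete, here is a minimal sketch (my illustration, not the book's) that models each link in a chain of trust as an independent probability of correct behaviour; the link names and values are purely assumed.

```python
# Toy model (illustrative assumptions only): each link in a chain of trust
# is assigned an independent probability of behaving/reporting correctly.
# If every link must work for the chain to work, the chain's overall
# reliability is the product of the links' probabilities -- which can
# never exceed that of the weakest link.

links = {
    "my browser": 0.99,
    "network connection": 0.98,
    "bank's application": 0.95,
}

chain_reliability = 1.0
for probability in links.values():
    chain_reliability *= probability

weakest = min(links, key=links.get)
print(f"chain reliability: {chain_reliability:.3f}")      # ≈ 0.922
print(f"weakest link: {weakest} ({links[weakest]:.2f})")  # bank's application
```

Even with quite trustworthy individual links, the chain as a whole ends up weaker than any single link, which is why assessing only the primary trustee is insufficient.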
There is a further complication to be considered, which is how rarely we have control over—and can therefore have significant trust towards—the reporting or monitoring channels in a relationship to a computer system. Channels are often under the control of the acting party or can be compromised in ways over which I have very little control. Let us look back again at the example of the bank transfer via a web browser. The reporting of the transfer is via the application, which is presumably written by the bank or at least is under the control of the bank: I have no control of this channel. Even if I am using a downloaded application and it is running on my computer or mobile phone—and even if it is open source,9 meaning I have the chance to check that what it is displaying is “correct”—I cannot be sure that it is “valid”, as the information it displays comes, in the final analysis, from the bank. In terms of compromise, I can at least take some steps to decrease vulnerabilities, and therefore the chance of compromise, for the pieces over which I have control: I can keep my computer or phone as secure as possible, check that the versions of any downloaded applications—web browser or banking application, for instance—are verified as being from the expected suppliers, and check that the connection they have to the bank is as secure as I can make it.
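As one concrete instance of the verification step just described—checking that a downloaded application really matches what the expected supplier shipped—here is a minimal sketch that compares a file's SHA-256 digest against a digest the supplier is assumed to have published. The filename and digest below are hypothetical placeholders, not real values.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 65536) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical values: a real check would take the published digest from
# the supplier's website or signed release notes.
published_digest = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
actual_digest = sha256_of("banking-app-1.2.3.pkg")  # hypothetical filename

if actual_digest == published_digest:
    print("Digest matches the supplier's published value.")
else:
    print("MISMATCH: do not install; the file may not be what the supplier shipped.")
```

Note that this moves the trust relationship around rather than removing it: the check establishes that the file matches the published value, but I still need a trust relationship to the channel over which the published value itself arrived.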
In the end, however, it is up to my bank to provide valid information and ensure its correctness, though I will be the one who pays for these measures and am likely to bear any cost of invalid data: security economics raises its head again. This discussion about trust chains and monitoring will reappear later in the book as an important issue when designing and managing trust.
One final point to bear in mind about Gambetta's definition of trust is his description of the subjective probability with which an assessment is made about whether actions will be performed. We dropped this phrase in our definition, replacing it with assurance, as subjective probability frankly seems difficult to quantify or apply. The use of the word assurance should not be taken to imply 100% certainty, but words like subjective are problematic. The first reason is that the subject in this case may not be human. The second is that if we are to tie trust relationships back to discussions of risk, then objectivity and external qualifiability are insufficient: we also require external quantifiability.
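To illustrate why external quantifiability matters when tying trust back to risk, here is a toy calculation of my own (not taken from the book): once the probability that an action will be performed correctly can be quantified, an expected loss can be computed and compared across alternatives—something a purely subjective, qualitative assessment does not allow.

```python
# Toy illustration: tying a quantified trust assessment to risk.
# If p is an externally quantifiable probability that the trustee performs
# the action correctly, and C is my cost if it does not, the expected loss
# is (1 - p) * C. All figures below are assumptions for illustration.

def expected_loss(p_correct: float, cost_of_failure: float) -> float:
    return (1.0 - p_correct) * cost_of_failure

# Comparing two hypothetical channels for the same 10,000-unit transfer:
print(f"{expected_loss(0.999, 10_000):.2f}")  # 10.00  -- well-monitored channel
print(f"{expected_loss(0.950, 10_000):.2f}")  # 500.00 -- weakly monitored channel
```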
We have examined, then, a variety of different trust definitions in the human realm, though none of them seems a perfect fit for our needs. Before we throw all of these out with no further consideration, however, there is an interesting question about the overlap between human-to-human relationships and human-to-computer relationships when the computer has a closely coupled relationship with an organisation. This is different from case 3, which we discussed in Chapter 1 in connection with the relationship between a bank and its systems, and more like case 2, where my trust relationship to the bank, and the bank's relationship to me, are characterised by interactions largely with their computer systems. In this case, punishment or other social impacts (positive or negative) may be more relevant, as we may be able to relate them to people rather than to the computers with which the actual interaction takes place. We will return to this question once we have addressed questions around trust to institutions—which is related but distinct—later in this chapter.
Game Theory
Much of the recent discussion around trust in the human-to-human realm has revolved around game theory and cooperation, either separately or in conjunction. Game theory is the study of social interactions from a theoretical point of view, typically looking at strategies for interaction that yield particular outcomes (usually the “best” for an actor or set of actors). Different styles of interaction, competition, and cooperation are studied, and mathematical approaches are applied, usually with the proviso that the actors are “rational”10 though not necessarily in possession of perfect information about their situation.
The Prisoner's Dilemma
The best-known example in game theory is the Prisoner's Dilemma. In the classic scenario, two members of a criminal gang are arrested and held separately from each other so that they cannot communicate. Each of them is presented with a choice: they may either stay silent or betray the other. Each is told the consequences that follow from whichever action they decide to take, combined with whichever action the fellow gang member takes. There are four possible outcomes.
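These four outcomes are conventionally laid out as a payoff matrix. The sketch below enumerates them using one commonly quoted assignment of prison sentences; the exact numbers vary between presentations and are not taken from this book.

```python
# The four possible outcomes of the classic Prisoner's Dilemma, using one
# commonly quoted assignment of sentences (years in prison; lower is
# better for the prisoner). The exact figures are illustrative.

SENTENCES = {
    # (A's choice, B's choice): (A's sentence, B's sentence)
    ("silent", "silent"): (1, 1),   # both stay silent: light sentences
    ("silent", "betray"): (3, 0),   # A is betrayed; B goes free
    ("betray", "silent"): (0, 3),   # A goes free; B is betrayed
    ("betray", "betray"): (2, 2),   # mutual betrayal
}

for (a, b), (years_a, years_b) in SENTENCES.items():
    print(f"A {a:>6}, B {b:>6} -> A serves {years_a} year(s), B serves {years_b}")
```

With payoffs like these, betrayal leaves each prisoner better off whatever the other does, yet mutual silence is better for both than mutual betrayal—which is precisely the dilemma.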