      In 2019, Europe banned a children's watch that used unencrypted communications to the vendor's cloud service; a wiretapper could download any child's location history and cause their watch to phone any number in the world. When this was discovered, the EU ordered the immediate safety recall of all watches [903].

      This book aims to help you avoid such outcomes. To design systems that are safe and secure, an engineer needs to know what systems there are, how they work, and – at least as important – how they have failed in the past. Civil engineers learn far more from the one bridge that falls down than from the hundred that stay up; exactly the same holds in security engineering. So the first thing to be clear about is what we mean by a system, which in practice may be:

      1 a product or component, such as a cryptographic protocol, a smartcard, or the hardware of a phone, a laptop or server;

      2 one or more of the above plus an operating system, communications and other infrastructure;

      3 the above plus one or more applications (banking app, health app, media player, browser, accounts/payroll package, and so on – including both client and cloud components);

      4 any or all of the above plus IT staff;

      5 any or all of the above plus internal users and management;

      6 any or all of the above plus customers and other external users.

      Confusion between the above definitions is a fertile source of errors and vulnerabilities. Broadly speaking, the vendor and evaluator communities focus on the first and (occasionally) the second of them, while a business will focus on the sixth (and occasionally the fifth). We will come across many examples of systems that were advertised or even certified as secure because the hardware was, but that broke badly when a particular application was run, or when the equipment was used in a way the designers didn't anticipate. Ignoring the human components, and thus neglecting usability issues, is one of the largest causes of security failure. So we will generally use definition 6; when we take a more restrictive view, it should be clear from the context.

      The next set of problems comes from lack of clarity about who the players are and what they're trying to prove. In the literature on security and cryptology, it's a convention that principals in security protocols are identified by names chosen with (usually) successive initial letters – much like hurricanes, except that we use alternating genders. So we see lots of statements such as “Alice authenticates herself to Bob”. This makes things much more readable, but can come at the expense of precision. Do we mean that Alice proves to Bob that her name actually is Alice, or that she proves she's got a particular credential? Do we mean that the authentication is done by Alice the human being, or by a smartcard or software tool acting as Alice's agent? In that case, are we sure it's Alice, and not perhaps Carol to whom Alice lent her card, or David who stole her phone, or Eve who hacked her laptop?
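      To make the ambiguity concrete, here is a minimal sketch (mine, not the book's; the key, messages and function names are invented for illustration). Verifying a message authentication code only proves that the sender holds a particular key – it says nothing about whether the human Alice, or Carol with her borrowed card, or Eve with her hacked laptop, is actually operating the device.

```python
# A minimal sketch, not from the book: the key and messages are invented.
import hmac, hashlib, os

alice_key = os.urandom(32)   # a secret assumed to be shared with Bob in advance

def make_tag(key: bytes, msg: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over msg with the given key."""
    return hmac.new(key, msg, hashlib.sha256).digest()

def bob_verifies(msg: bytes, tag: bytes) -> bool:
    # All Bob learns here is that whoever sent the message possesses
    # alice_key -- it could be Alice, or Carol with a borrowed card,
    # or Eve with a hacked laptop.
    return hmac.compare_digest(make_tag(alice_key, msg), tag)

msg = b"please pay David"
assert bob_verifies(msg, make_tag(alice_key, msg))
```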

      By a subject I will mean a physical person in any role including that of an operator, principal or victim. By a person, I will mean either a physical person or a legal person such as a company or government1.

      A principal is an entity that participates in a security system. This entity can be a subject, a person, a role, or a piece of equipment such as a laptop, phone, smartcard, or card reader. A principal can also be a communications channel (which might be a port number, or a crypto key, depending on the circumstance). A principal can also be a compound of other principals; examples are a group (Alice or Bob), a conjunction (Alice and Bob acting together), a compound role (Alice acting as Bob's manager) and a delegation (Bob acting for Alice in her absence).

      Beware that groups and roles are not the same. By a group I will mean a set of principals, while a role is a set of functions assumed by different persons in succession (such as ‘the officer of the watch on the USS Nimitz’ or ‘the president for the time being of the Icelandic Medical Association’). A principal may be considered at more than one level of abstraction: e.g. ‘Bob acting for Alice in her absence’ might mean ‘Bob's smartcard representing Bob who is acting for Alice in her absence’ or even ‘Bob operating Alice's smartcard in her absence’. When we have to consider more detail, I'll be more specific.
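      One way to make these distinctions concrete is to model principals as a small data type. The following sketch is my own illustration rather than the book's notation; all class names are assumptions. It treats simple principals, roles, groups, conjunctions and delegations as distinct constructors, and builds the compound ‘Bob's smartcard representing Bob, who is acting for Alice in her absence' at the end.

```python
# A minimal sketch, not the book's notation: all class names are illustrative.
from dataclasses import dataclass
from typing import FrozenSet, Optional, Union


@dataclass(frozen=True)
class Subject:
    """A simple principal: a person or a piece of equipment."""
    name: str


@dataclass(frozen=True)
class Role:
    """A set of functions assumed by different persons in succession."""
    title: str                                # e.g. "officer of the watch"
    holder: Optional["Principal"] = None      # whoever holds the role just now


@dataclass(frozen=True)
class Group:
    """A set of principals, e.g. 'Alice or Bob'."""
    members: FrozenSet["Principal"]


@dataclass(frozen=True)
class Conjunction:
    """Principals acting together, e.g. 'Alice and Bob jointly'."""
    parties: FrozenSet["Principal"]


@dataclass(frozen=True)
class Delegation:
    """One principal acting for another, e.g. 'Bob for Alice'."""
    delegate: "Principal"
    on_behalf_of: "Principal"


Principal = Union[Subject, Role, Group, Conjunction, Delegation]

# 'Bob's smartcard representing Bob, who is acting for Alice in her absence':
alice, bob, card = Subject("Alice"), Subject("Bob"), Subject("Bob's smartcard")
compound = Delegation(delegate=Delegation(delegate=card, on_behalf_of=bob),
                      on_behalf_of=alice)
```

Note how a Role keeps its identity while its holder changes, whereas a Group is simply a set of principals – the distinction drawn in the paragraph above.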

      The meaning of the word identity is controversial. When we have to be careful, I will use it to mean a correspondence between the names of two principals signifying that they refer to the same person or equipment. For example, it may be important to know that the Bob in ‘Alice acting as Bob's manager’ is the same as the Bob in ‘Bob acting as Charlie's manager’ and in ‘Bob as branch manager signing a bank draft jointly with David’. Often, identity is abused to mean simply ‘name’, an abuse entrenched by such phrases as ‘user identity’ and ‘citizen identity card’.

      The definitions of trust and trustworthy are often confused. The following example illustrates the difference: if an NSA employee is observed in a toilet stall at Baltimore Washington International airport selling key material to a Chinese diplomat, then (assuming his operation was not authorized) we can describe him as ‘trusted but not trustworthy’. I use the NSA definition that a trusted system or component is one whose failure can break the security policy, while a trustworthy system or component is one that won't fail.

      There are many alternative definitions of trust. In the corporate world, a trusted system might be ‘a system which won't get me fired if it gets hacked on my watch’ or even ‘a system which we can insure’. But when I mean an approved system, an insurable system or an insured system, I'll say so.

       Secrecy is an engineering term that refers to the effect of the mechanisms used to limit the number of principals who can access information, such as cryptography or computer access controls.

       Confidentiality involves an obligation to protect some other person's or organisation's secrets if you know them.

       Privacy is the ability and/or right to protect your personal information and extends to the ability and/or right to prevent invasions of your personal space (the exact definition of which varies from one country to another). Privacy can extend to families but not to legal persons such as corporations.

      For example, hospital patients have a right to privacy, and in order to uphold this right the doctors, nurses and other staff have a duty of confidence towards their patients. The hospital has no right of privacy in respect of its business dealings but those employees who are privy to them may have a duty of confidence (unless they invoke a whistleblowing right to expose wrongdoing). Typically, privacy is secrecy for the benefit of the individual while confidentiality is secrecy for the benefit of the organisation.

      There is a further complexity in that it's often not sufficient to protect data, such as the contents of messages; we also have to protect metadata, such as logs of who spoke to whom.