Title: From Traditional Fault Tolerance to Blockchain
Author: Wenbing Zhao
Publisher: John Wiley & Sons Limited
Genre: Foreign computer literature
ISBN: 9781119682110
A cryptographic hash function hashes any given message P into a fixed-length bit string, and it must satisfy a number of requirements:
◾ The hash function must be efficient; that is, given a message P, the hash value Hash(P) must be quick to compute.
◾ Given Hash(P), it is virtually impossible to find P. In this context, P is often referred to as the preimage of the hash, so this requirement says it is virtually impossible to find a preimage of a given hash. If P is much longer than Hash(P), this requirement is easy to satisfy, because information must have been lost during hashing. However, the requirement must hold even if P is shorter than Hash(P).
◾ Given a message P and its hash, Hash(P), it is virtually impossible to find a different message P′ that produces exactly the same hash, that is, Hash(P′) = Hash(P). If the unfortunate event happens where Hash(P′) = Hash(P), we say there is a collision. This requirement states that it should be computationally prohibitive to find a collision.
The cryptographic hash function must consider every single bit in the message when producing the hash string, so that changing even a single bit produces a totally different output. There have been several generations of cryptographic hash functions. The most commonly used today are the secure hash algorithms (SHA), published as a federal information processing standard by the US National Institute of Standards and Technology. The SHA family has four categories: SHA-0, SHA-1, SHA-2, and SHA-3. SHA-0 and SHA-1 both produce a 160-bit string and are now considered obsolete. SHA-2, which produces a 256-bit or a 512-bit string, is commonly used nowadays.
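As a minimal sketch of these properties, the following Python snippet (using the standard hashlib module) hashes two messages that differ in a single character and shows that their SHA-256 digests bear no resemblance to each other; the messages are arbitrary examples.

    import hashlib

    # Two messages differing only in the final character (arbitrary examples)
    p1 = b"pay 100 to Alice"
    p2 = b"pay 100 to Alicf"

    # SHA-256 always yields a fixed-length 256-bit (32-byte) digest,
    # regardless of how long the input message is.
    h1 = hashlib.sha256(p1).hexdigest()
    h2 = hashlib.sha256(p2).hexdigest()
    print(h1)
    print(h2)

    # Count how many of the 256 output bits differ: roughly half on
    # average, even though the inputs differ by one character.
    diff = bin(int(h1, 16) ^ int(h2, 16)).count("1")
    print(f"{diff} of 256 bits differ")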
The digital signature is another very important cryptographic construct for building secure systems. A digital signature mimics a physical signature on legal documents, and it must possess the following properties:
◾ The receiver of a digitally signed document can verify the signer's identity. This facilitates authentication of the signer. Unlike in the real world, where an official can verify a signer's identity by checking a government-issued identification document such as a driver's license or passport, the digital signature must be designed so that a remote receiver can authenticate the signer based on the digital signature alone.
◾ The signer cannot repudiate the document once it has been signed.
◾ No one other than the original signer of the signed document could possibly have fabricated the signature.
The first property is for authenticating the signer of a signed document. The second and third properties are essentially the same: if another person could have fabricated the digital signature, then the original signer could in fact repudiate the signed document; conversely, if the original signer cannot repudiate the signed document, then it must be true that no one else could fabricate the signature. Digital signatures are typically produced by applying public-key cryptography to the hash of a document, which is called the message digest. The digest is signed instead of the document itself because public-key cryptography must use long keys and is computationally very expensive compared with symmetric cryptography. In this scheme, the no-collision requirement for secure hash functions is essential to protect the integrity of digital signatures: if an attacker could find a second document with the same digest, a signature on one would also be valid for the other.
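To make the sign-the-digest pattern concrete, here is a minimal sketch using the third-party Python cryptography package (an assumption; the book does not prescribe a library). The private key signs a SHA-256 digest of the document, and anyone holding the public key can verify it; altering even one byte of the document makes verification fail.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # The signer generates a key pair; the public key is distributed,
    # typically inside a certificate, while the private key stays secret.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    document = b"I agree to the terms of the contract."  # arbitrary example

    # sign() hashes the document with SHA-256 internally and signs the
    # resulting message digest with the private key (RSA-PSS padding).
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)
    signature = private_key.sign(document, pss, hashes.SHA256())

    # The receiver recomputes the digest and checks it against the
    # signature; verify() raises InvalidSignature on any mismatch.
    try:
        public_key.verify(signature, document, pss, hashes.SHA256())
        print("signature valid")
    except InvalidSignature:
        print("signature invalid")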
A message authentication code (MAC) is based on a secure hash function and a symmetric (shared) key. More specifically, the sender concatenates the message to be sent with a secret key and hashes the result to produce the MAC. It is used pervasively in message exchanges both to authenticate the sender and to protect the integrity of the message. The basis for authentication is that only the sender and the receiver know the secret key used to generate the MAC. Because of the characteristics of the secure hash function, if any bit in the message is altered during transmission, the transmitted MAC will differ from the one recomputed at the receiver. Hence, the MAC also serves as a form of checksum with much stronger protection than traditional checksum methods such as CRC-16.
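In practice the keyed hash is usually computed with the standardized HMAC construction rather than a naive concatenation, which is vulnerable to length-extension attacks with hash functions such as SHA-256. A minimal sketch with Python's standard hmac module follows; the key and message are arbitrary examples.

    import hashlib
    import hmac

    key = b"shared-secret-key"               # known only to sender and receiver
    message = b"transfer 100 to account 42"  # arbitrary example payload

    # The sender computes the MAC over the message with the shared key
    # and transmits (message, tag) to the receiver.
    tag = hmac.new(key, message, hashlib.sha256).hexdigest()

    # The receiver recomputes the MAC over the received message; any altered
    # bit in the message (or a wrong key) yields a different tag.
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    print(hmac.compare_digest(tag, expected))  # constant-time comparison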
In conventional systems, communication between a client and a server takes place over a session, so security mechanisms were designed around this need. At the beginning of the session, the client and the server mutually authenticate each other. Once the authentication step is done, a session key is created and used to encrypt all messages exchanged within the session. For a prolonged session, the session key might be refreshed. For sessions conducted over the Web, the Secure Sockets Layer (SSL) protocol, or its successor, Transport Layer Security (TLS), is typically used. Server authentication is done via a digital signature and a public-key certificate protected by a public-key infrastructure. Client authentication is typically done via username and password. Some enterprise systems, such as directory services, adopt much more sophisticated authentication algorithms based on the challenge-response approach.
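The session setup just described can be observed with a few lines of Python using the standard ssl module; example.com is a placeholder host. The handshake authenticates the server through its certificate chain and negotiates the session key material behind the scenes.

    import socket
    import ssl

    host = "example.com"  # placeholder server name

    # create_default_context() loads trusted CA certificates, which anchor
    # the public-key infrastructure used to verify the server's certificate.
    context = ssl.create_default_context()

    with socket.create_connection((host, 443)) as sock:
        # wrap_socket() performs the TLS handshake: the server is
        # authenticated via its certificate, and fresh session keys are
        # negotiated to encrypt everything sent afterward.
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print(tls.version())  # negotiated protocol, e.g. TLSv1.3
            print(tls.cipher())   # negotiated cipher suite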
2
Logging and Checkpointing
Checkpointing