
      Another type of synchronization has to do with cooperation. You and your spouse might need to move a heavy table together to its new location (it is too heavy for just one person). A classical example of cooperation is for two camps of the same army to decide on the exact time for a coordinated attack on the enemy camp.

      We point out that the use of the term synchronization in computer science is slightly more general than its use in standard English. The following quote from the Oxford dictionary explains this point: “The use of synchronize to mean coordinate or combine as in ‘We must synchronize our efforts’ is considered incorrect by some people and should be avoided in standard English.” In computer science, synchronization also means coordination. That is, synchronization between processors is classified as either contention or cooperation.

      All the above examples of synchronization between people have similar counterparts for synchronization between computers. Synchronization is needed in all systems and environments where several processors can be active at the same time. Without proper synchronization, the integrity of the data may be destroyed if two computers update a common file at the same time; as a result, deposits and withdrawals could be lost, confirmed reservations might disappear, and so on. However, while achieving synchronization between humans is sometimes relatively easy, achieving synchronization between computers is challenging. The reason is that most computers communicate with each other in a very restricted way.

      While humans can see and hear each other, computers and computing devices in general can, in most cases, only read and write. So, one computer can write a note (or send a message) that the other computer will later read, but they cannot see each other. To understand the difficulty with this type of restricted communication, the next two chapters examine several simple two-person interactions where communication is restricted either to writing and reading notes or to sending and receiving messages.

      The notion of an algorithm is central to computer science. An algorithm is simply a recipe by which a problem is solved. The term was originally used in the context of solving mathematical problems. Euclid, the famous Greek mathematician, invented, sometime between 400 and 300 B.C., an algorithm for finding the greatest common divisor of two positive integers. For example, the greatest common divisor of 18 and 27 is 9. This algorithm is considered to be the first nontrivial mathematical algorithm ever devised.
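
      To make the idea concrete, here is a minimal sketch of Euclid's algorithm in Python (our own illustration; the function name gcd is ours, not from the book):

          # Euclid's algorithm: repeatedly replace the larger number
          # by the remainder of dividing it by the smaller one.
          def gcd(a, b):
              while b != 0:
                  a, b = b, a % b
              return a

          print(gcd(18, 27))  # prints 9, as in the example above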

      The word algorithm is derived from the name of the Persian mathematician Mohammed al-Khowârizmî, who lived in Baghdad during the 9th century. Al-Khowârizmî laid out the basic algorithms for adding, multiplying, and dividing numbers, and for extracting square roots. On a computer, an algorithm is expressed as a computer program, which specifies, in the exact syntax of some programming language, the computation one expects a computer to perform. A recipe for preparing a cake, which prescribes the steps needed to prepare it, is also an algorithm. Such a recipe can be expressed in many different natural languages.

      A plan or a strategy for winning a game or solving a puzzle is also an algorithm. Thus, throughout the book, we shall use the terms algorithm, plan, strategy, and solution interchangeably. In most chapters of the book, we explain fundamental concepts involving concurrency and synchronization between computers by examining situations and solving problems that relate to interactions between people where communication is restricted in various ways. Thus, we shall use the terms plan and solution more often than the term algorithm.

      A concurrent or distributed algorithm is a recipe by which a problem is solved by more than one computing element. Finding the largest number in a set of numbers by first dividing the set into two subsets, using two processors to find the maximum number in each subset, and then comparing these two numbers, is an example of a simple concurrent algorithm. (Such an algorithm should also specify how to find the maximum in each subset.)
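
      Purely as an illustration, the following Python sketch realizes this idea with two threads standing in for the two processors (the helper name find_max and the sample numbers are ours):

          from concurrent.futures import ThreadPoolExecutor

          def find_max(numbers):
              # Sequentially scan one subset, keeping the largest value seen.
              largest = numbers[0]
              for x in numbers[1:]:
                  if x > largest:
                      largest = x
              return largest

          values = [17, 4, 42, 8, 23, 15, 36, 9]
          half = len(values) // 2

          # Two workers search the two halves at the same time.
          with ThreadPoolExecutor(max_workers=2) as pool:
              left = pool.submit(find_max, values[:half])
              right = pool.submit(find_max, values[half:])

          # Compare the two partial results to obtain the overall maximum.
          print(max(left.result(), right.result()))  # prints 42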

      The term distributed algorithms refers to algorithms where the computing elements are physically far away from each other and communicate by sending and receiving messages (as done on the Internet). The term concurrent algorithms refers to algorithms where the computing elements are physically very close to each other and communicate by reading from and writing to shared memory locations (as done inside a multiprocessor computer). In the field of distributed computing, both types of algorithms are studied.
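
      The two communication styles can be contrasted in a short Python sketch (our own illustration; threads stand in for the computing elements, and the names shared and channel are ours):

          import queue
          import threading

          # Concurrent style: communicate through a shared memory location.
          shared = {"note": None}

          def writer():
              shared["note"] = "ready"   # write to shared memory

          # Distributed style: communicate by sending and receiving a message.
          channel = queue.Queue()

          def sender():
              channel.put("ready")       # send a message

          for t in [threading.Thread(target=writer),
                    threading.Thread(target=sender)]:
              t.start()
              t.join()

          print(shared["note"])   # the reader inspects the shared location
          print(channel.get())    # the receiver waits for the message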

      When a processor executes a computer program (such as a web browser), the execution itself is called a process. A process runs on a processor, which is the physical hardware. The physical location of the different processes or processors—we shall use the terms interchangeably—involved in a single concurrent activity can be anywhere from the same computer to different computers anywhere in the world.

      There are two main technological underpinnings of the fascinating, rapid development of computer networks and multiprocessor computers. The first is, of course, the advances in the design of faster hardware. The second is the development of efficient concurrent and distributed algorithms for supporting complex interactions between processors and computers. This book tells the story of the development of such algorithms. It is a fascinating story!

      In addition to the study of concurrent and distributed algorithms, the field of distributed computing also explores the inherent limitations of distributed systems: what problems cannot be solved in particular systems. Identifying features of a specific distributed system architecture that make it inadequate for solving certain problems is crucial for the design of better systems which can overcome such limitations.

      Impossibility results help us understand the crucial limitations of real systems: why certain systems are (computationally) weak while others are powerful, when we should stop looking for a solution to a given problem, and how to adjust a problem statement or a system model to overcome an impossibility result.

      Impossibility results usually depend on assumptions about how the computing elements communicate with one another, what kinds of failures may occur, or whether randomization can be used. Such results are usually hard to prove.

      Throughout the book, we will introduce and discuss some of the most fundamental impossibility results of distributed computing. We will highlight the insights gained from these results so that the reader can understand and appreciate their utmost importance.

      In 1968, Edsger Wybe Dijkstra, one of the most influential members of computing science’s founding generation, published his famous paper “Co-operating Sequential Processes” [14], which originated the field of concurrent programming. A few concepts and results from Dijkstra’s papers are covered in Chapters 2 and 7.

      The Internet traces its beginning back to the early 1960s. At that time, several research groups invented a new technique, called packet switching, to transmit information as an efficient and robust alternative to circuit switching, the transmission method used by the telephone network. Packet switching is the transmission method used today by the Internet. James F. Kurose and Keith W. Ross’ excellent book Computer Networking: A Top-Down Approach explains and uses the Internet’s architecture and protocols as primary vehicles for studying fundamental computer networking concepts [31].

      In 1965, Gordon Moore published an article predicting exponential growth in the density of transistors in integrated circuits [37].