Computation in Science (Second Edition). Konrad Hinsen

…though there is computation going on under the hood. Today’s computers are as much communication devices as computation devices.

      Richard Feynman had written on his blackboard: ‘What I cannot create, I do not understand.’ We cannot understand a theory, a model, or an approximation, unless we have done something with it. One way to gain experience with mathematical models is to apply them to concrete situations. Another, even more powerful, is to implement them as computer programs. Donald Knuth has expressed this very succinctly [4]:

      It has often been said that a person does not really understand something until he teaches it to someone else. Actually a person does not really understand something until he can teach it to a computer, i.e. express it as an algorithm. The attempt to formalize things as algorithms leads to a much deeper understanding than if we simply try to comprehend things in the traditional way.

      The utility of writing programs for understanding scientific concepts and mathematical models comes from the extreme rigor and precision required in programming. Communication between humans relies on shared knowledge, starting with the definition of the words of everyday language. A typical scientific article assumes the reader to have significant specialist knowledge in science and mathematics. Even a mathematical proof, probably the most precise kind of statement in the scientific literature, assumes many definitions and theorems to be known by the reader, without even providing a list of them. A computer has no such prior knowledge. We must communicate with a computer in a formal language which is precisely defined, i.e. there are clear rules, verifiable by a computer, that define what is and what isn’t a valid expression in the language. Every aspect of our science that somehow impacts a computed result must be expressed in this formal language in order to obtain a working computer program.

      Another reason why writing a program is often useful for understanding a mathematical model is that an algorithm is necessarily constructive. In the physical sciences, most theories take the form of differential equations. These equations fully define their solutions, and are also useful for reasoning about their general properties, but they provide no obvious way of actually constructing a solution. Writing a computer program requires, first of all, thinking about what the program is supposed to do, and then about how it should go about doing it.
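      To make this concrete, here is a minimal sketch (in Python; the function names are mine, not taken from any particular library) of how a program turns a differential equation into a constructive recipe: the explicit Euler method builds an approximate solution of dy/dt = f(t, y) step by step, something the equation itself does not tell us how to do.

```python
# A minimal sketch: the explicit Euler method makes the solution of
# dy/dt = f(t, y) constructive by assembling it one small step at a time.

def euler(f, y0, t0, t_end, dt):
    """Approximate y(t) on [t0, t_end] for dy/dt = f(t, y), with y(t0) = y0."""
    t, y = t0, y0
    ts, ys = [t], [y]
    while t < t_end:
        y = y + dt * f(t, y)  # follow the local slope for one small step
        t = t + dt
        ts.append(t)
        ys.append(y)
    return ts, ys

# Example: dy/dt = -y, whose exact solution is y(t) = exp(-t).
ts, ys = euler(lambda t, y: -y, y0=1.0, t0=0.0, t_end=1.0, dt=0.01)
print(ys[-1])  # roughly 0.366, close to exp(-1) = 0.3679
```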

      Implementing a scientific model as a computer program also yields, as a bonus, a tool for exploring the consequences of the model in simple applications. Computer-aided exploration is another good way to gain a better understanding of a scientific model (see [5, 6] for some outstanding examples). In the study of complex systems, with models that are directly formulated as algorithms, computational exploration is often the only approach to gaining scientific insight [7].
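      As an illustration (my own, not an example from the book), an elementary cellular automaton is a model that exists only as an algorithm: the only way to find out what it does is to run it. A few lines of Python suffice:

```python
# A minimal sketch of a model formulated directly as an algorithm:
# an elementary cellular automaton (rule 110). Its behavior can only
# be discovered by running it.

RULE = 110  # the update rule, encoded as 8 bits

def step(cells):
    """Compute the next generation, with periodic boundary conditions."""
    n = len(cells)
    return [
        (RULE >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 40 + [1] + [0] * 40  # start from a single 'on' cell
for _ in range(20):
    print(''.join('#' if c else '.' for c in cells))
    cells = step(cells)
```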

      A computer program that implements a theoretical model, for example a program written with the goal of understanding this model, is a peculiar written representation of this model. It is therefore an expression of scientific knowledge, much like a textbook or a journal article. We will see in the following chapters that much scientific knowledge can be expressed in the form of computer programs, and that much of today’s scientific knowledge exists in fact only in the form of computer programs, because the traditional scientific knowledge representations cannot handle complex structured information. This raises important questions for the future of computational science, which I will return to in chapter 7.

      Computers are physical devices that are designed by engineers to perform computation. Many other engineered devices perform computation as well, though usually with much more limited capacity. The classic example from computer science textbooks is a vending machine, which translates operator input (pushing buttons, inserting coins) into actions (delivering goods), a task that requires computation. Of course a vending machine does more than compute, and as users we are most interested in that additional behavior. Nevertheless, information processing, and thus computation, is an important aspect of the machine’s operation.
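      To show the computational core of such a machine, here is a minimal sketch (in Python; the items and prices are invented for illustration) of a vending machine as a state machine, the form in which textbooks usually present it: the state is the credit inserted so far, and each coin or button press is an input that updates the state.

```python
# A minimal sketch of a vending machine's computational core:
# a state machine whose state is the credit inserted so far.
# Item names and prices are invented for illustration.

PRICES = {'water': 100, 'soda': 150}  # prices in cents

class VendingMachine:
    def __init__(self):
        self.credit = 0  # the machine's state

    def insert_coin(self, cents):
        self.credit += cents

    def push_button(self, item):
        """Deliver the item if enough credit has been inserted."""
        if self.credit >= PRICES[item]:
            change = self.credit - PRICES[item]
            self.credit = 0
            return f'deliver {item}, return {change} cents'
        return 'not enough credit'

machine = VendingMachine()
machine.insert_coin(100)
machine.insert_coin(100)
print(machine.push_button('soda'))  # deliver soda, return 50 cents
```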

      The same is true of many systems that occur in nature. A well-known example is the process of cell division, common to all biological organisms, which involves copying and processing information stored in the form of DNA [8]. Another example of a biological process that relies on information processing is plant growth [9]. Most animals have a nervous system, a part of the body that is almost entirely dedicated to information processing. Neuroscience, which studies the nervous system, has close ties to both biology and computer science. This is also true of cognitive science, which deals with processes of the human mind that are increasingly modeled using computation.

      Of course, living organisms are not just computers. Information processing in organisms is inextricably combined with other processes. In fact, the identification of computation as an isolated phenomenon, and its realization by engineered devices that perform a precise computation any number of times, with as little dependence on their environment as is technically possible, is a hallmark of human engineering that has no counterpart in nature. Nevertheless, focusing on the computational aspects of life, and writing computer programs to simulate information processing in living organisms, has significantly contributed to a better understanding of their function.

      On a much grander scale, one can consider all physical laws as rules for information processing, and conclude that the whole Universe is a giant computer. This idea was first proposed in 1967 by German computer pioneer Konrad Zuse [10] and has given rise to a field of research called digital physics, situated at the intersection of physics, philosophy, and computer science [11].

      What I have discussed above, and what I will discuss in the rest of this book, is computation in the tradition of arithmetic and Boolean logic, automated by digital computers. There is, however, a very different approach to tackling some of the same problems, which is known as analog computing. Its basic idea is to construct systems whose behavior is governed by the mathematical relations one wishes to explore, and then perform experiments on these systems. The simplest analog computer is the slide rule, which was a common tool for performing multiplication and division (plus a few more complex operations) before electronic calculators became generally available.
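      The slide rule’s principle is that adding logarithms multiplies numbers: sliding one logarithmic scale along another adds two lengths, and the sum of the lengths corresponds to the product of the numbers. Here is a minimal sketch (mine, not from the book) of that principle in Python:

```python
# A minimal sketch of the slide rule's principle: sliding one
# logarithmic scale along another adds lengths, i.e. adds logarithms,
# which multiplies the numbers printed on the scales.

import math

def slide_rule_multiply(a, b):
    """Multiply two numbers by adding distances on logarithmic scales."""
    length = math.log10(a) + math.log10(b)  # the two slid-together lengths
    return 10 ** length

print(slide_rule_multiply(2.0, 3.0))  # approximately 6, up to the scale's precision
```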

      Today, analog computers have almost disappeared from scientific research, because digital computers perform most tasks better and at a much lower cost. This is also the reason why this book’s focus is on digital computing. However, analog computing is still used for some specialized applications. More importantly, the idea of computation as a form of experiment has persisted in the scientific community. While I consider it inappropriate in the context of software-controlled digital computers, as I will explain in section 5.1, it is a useful point of view when looking at emerging alternative computing techniques, such as artificial neural networks.

      Computation has its roots in numbers and arithmetic, a story that is told by Georges Ifrah in The Universal History of Numbers […]