Computation in Science (Second Edition). Konrad Hinsen
Dictionary definitions of computation are remarkably imprecise. Oxford proposes ‘the action of mathematical calculation’ or ‘the use of computers, especially as a subject of research or study’. Merriam-Webster has ‘the act or action of computing: calculation’ or ‘the use or operation of a computer’, but also ‘a system of reckoning’. Both dictionaries refer to ‘calculation’, but do not provide useful definitions for that word either. The vagueness of these dictionary definitions can be traced back to the long-lasting confusion about what computation is and how it relates to mathematics. It was only the development of formal logic and mathematical formalism in the early 20th century that led to a precise definition of computation, which in turn paved the way for the development of automatic computing machines. This definition is the topic of section 1.1.

      Beyond exploring what defines computation, I will also look at what computation is for scientists, i.e. why computation is so important in scientific research. Most scientists probably think of computers and computation as tools that they use to process experimental data or mathematical equations. However, computation plays other roles in science as well, which I will briefly discuss in section 1.2.

      1.1.1 Numerical computation

      The most familiar computations are the numerical calculations that we all do in everyday life: adding up prices, multiplying the length and width of a room to compute its surface area, dividing a quantity into equal parts, etc. Most people today have even more occasions for doing numerical calculations in their professional lives. Basic arithmetic on anything that can be quantified is so fundamental that it takes up a significant part of training in the first years of school. Mechanical aids for numerical operations, such as the abacus, have been used for at least 2000 years, and perhaps much longer.

      We do not think much about how we perform simple arithmetic operations on small numbers; in fact, we often just recall a result that we have internalized through frequent use. But as soon as we work with larger numbers, mental calculation becomes a mechanical activity based on rules we have learned. An example of such a rule: to multiply a number by 9, multiply it by ten and then subtract the original number.
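      For instance, to multiply 47 by 9, we compute 10 × 47 = 470 and then 470 − 47 = 423.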

      When operations become too complex to be handled in the head, we turn to pen and paper for a more reliable record of the intermediate results in our calculations. A large number of calculation techniques with pen and paper have been developed in the course of the centuries, ranging from addition via long division to the calculation of cube roots.

      As a simple example, consider adding the numbers 173 and 51. One way to do it systematically starts by writing one below the other, adding zeros to the left of the smaller number to make the number of digits equal:

      173
      051

      We then process the digits from right to left, starting by adding 3 and 1. We ‘know’ that the result is 4, because we have done this so often. But for the slightly more complex operation of multiplication, most readers probably remember how they memorized multiplication tables in school—tables that were actually printed in books. To be precise in our description of addition, we will use an equally explicit addition table for one-digit integers:

        | 0 1 2 3 4 5 6 7 8 9
      --+--------------------
      0 | 0 1 2 3 4 5 6 7 8 9
      1 | 1 2 3 4 5 6 7 8 9 0
      2 | 2 3 4 5 6 7 8 9 0 1
      3 | 3 4 5 6 7 8 9 0 1 2
      4 | 4 5 6 7 8 9 0 1 2 3
      5 | 5 6 7 8 9 0 1 2 3 4
      6 | 6 7 8 9 0 1 2 3 4 5
      7 | 7 8 9 0 1 2 3 4 5 6
      8 | 8 9 0 1 2 3 4 5 6 7
      9 | 9 0 1 2 3 4 5 6 7 8

      We actually need two such tables, the second one being for the ‘carry’ digit, which is 1 when the sum of the two digits is 10 or more, and which is 0 otherwise. We note the one-digit sum and the carry digit, and move on to the left to handle the next digit:

      173               173               173
      051        →      051        →      051
        4 (carry 0)      24 (carry 1)     224

      This method, and all the other arithmetic operations we use, rely on the positional notation for numbers that is used all around the world today. Any natural number can be written as a sequence of the digits 0 to 9. Another symbol, the minus sign, takes care of negative integers, and one further symbol, either the decimal point or the division slash, makes it possible to express fractions. The rules of arithmetic can then be formulated as rules for manipulating sequences of symbols, as shown above for addition, and these rules can be applied mechanically.
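      To make the symbol-manipulation view concrete, here is a minimal sketch in Python; it is my own illustration, not taken from the book, and the names add_digit_strings, SUM_TABLE and CARRY_TABLE are invented for this example. It adds two digit strings right to left using only lookups in the two tables discussed above, never converting the strings to numbers:

DIGITS = "0123456789"

# One-digit sum table and carry table, indexed by pairs of digit symbols.
# They are built here with Python arithmetic, but they play the role of
# the printed tables above: the algorithm itself only looks up symbols.
SUM_TABLE = {(a, b): DIGITS[(i + j) % 10]
             for i, a in enumerate(DIGITS)
             for j, b in enumerate(DIGITS)}
CARRY_TABLE = {(a, b): "1" if i + j >= 10 else "0"
               for i, a in enumerate(DIGITS)
               for j, b in enumerate(DIGITS)}

def add_digit_strings(x, y):
    # Pad the shorter string with zeros on the left, as in the text.
    width = max(len(x), len(y))
    x, y = x.rjust(width, "0"), y.rjust(width, "0")
    result, carry = "", "0"
    # Process the digits from right to left.
    for a, b in zip(reversed(x), reversed(y)):
        s = SUM_TABLE[(a, b)]
        if carry == "1":
            # Adding the incoming carry takes one more lookup; it can
            # produce an extra carry only when the one-digit sum is '9'.
            new_carry = "1" if CARRY_TABLE[(a, b)] == "1" or s == "9" else "0"
            s = SUM_TABLE[(s, "1")]
        else:
            new_carry = CARRY_TABLE[(a, b)]
        result, carry = s + result, new_carry
    if carry == "1":
        result = "1" + result
    return result

print(add_digit_strings("173", "51"))   # prints '224', as in the steps above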

      It is indeed important to realize that the method outlined above does not work on numbers, but on a specific representation for numbers. Numbers are an abstract concept, which cannot be manipulated using mechanical rules. Different representations lead to different methods for doing arithmetic. Even though the decimal character string ‘42’, the roman-numeral character string ‘XLII’, and the English-language character string ‘forty-two’ refer to the same number, they cannot be manipulated in the same way. In fact, our recipe for addition never refers to numbers. It takes two sequences of digits as input, and produces one sequence of digits as output. Applying the recipe does not require any knowledge of numbers, only the capacity to work with a finite set of symbols and apply rules to them.

      A recipe for solving a specific problem by manipulating symbols is called an algorithm. The word is derived from the name of the Persian mathematician al-Khwārizmī, who lived in the 9th century. His book describing the ‘Indian numbers’, which today we call Arabic numerals, introduced our modern decimal notation and its rules for arithmetic into Europe [1]. The use of this system was called ‘algorism’ in the late Middle Ages, and later the spelling and meaning transformed into today’s ‘algorithm’. The positional notation for numbers transformed arithmetic from a difficult craft performed by trained specialists into a routine task that could be mastered by almost everyone.

      Today the decimal representation of numbers seems so obvious to us that we often make no distinction between a number and its decimal representation. This phenomenon is not limited to numbers. We rarely distinguish carefully between a word and its meaning either, and in quantum physics, to cite an example from science, the confusion between a quantum state and one of its many possible representations is very common. When thinking about computation, it is often important to recall that the universe of symbols and the universe of meanings are separate. In computer science, this is known as the distinction between syntax and semantics. Syntax defines which sequences of symbols a particular algorithm deals with, for example ‘a sequence of any number of the digits 0 to 9’. Semantics defines how such sequences of symbols are interpreted, for example ‘a natural number’. No knowledge of semantics is needed to apply our algorithm for adding two natural numbers, but it is essential for understanding what the algorithm does, and in particular which problems it can help to solve.

      A symptom of the confusion between numbers and their representations is the popular saying that ‘computers work only on numbers’. This is patently false: what today’s digital computers work on is sequences of bits, a bit being a symbol from an alphabet containing two symbols. We often choose the digits 0 and 1 to represent this alphabet, suggesting an interpretation as binary numbers, i.e. numbers represented in a positional notation with base 2. The idea that computers work on numbers is mistaken because bits can equally well represent information other than numbers. It is also misleading in another way because it suggests that any number-related problem can be solved by a computer. However, most numbers cannot be represented by sequences of bits and therefore cannot enter into any computation.
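      To illustrate this, a short Python sketch (my own illustration, not from the book) reads the same eight-bit pattern first as a natural number in base 2 and then as a character code:

bits = "01000001"

# Interpretation 1: an unsigned integer in positional notation, base 2.
print(int(bits, 2))          # 65

# Interpretation 2: a character code (here ASCII/Unicode).
print(chr(int(bits, 2)))     # 'A'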

      It is easy to see that bits can represent any information at all that can be written as sequences of symbols. Suppose we have an alphabet with N symbols, for example the N = 26 letters of the English alphabet. We can then make up a translation table that assigns a unique five-bit pattern to each symbol in our alphabet. With five bits, we have 2⁵ = 32 distinct values, so six values will be left unused.
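      A Python sketch of such a translation table follows; it is my own illustration, and numbering the letters alphabetically is just one arbitrary choice among many possible tables:

import string

# Assign each of the 26 letters a unique 5-bit pattern, in alphabetical
# order; 32 - 26 = 6 of the possible patterns remain unused.
TO_BITS = {letter: format(i, "05b")
           for i, letter in enumerate(string.ascii_lowercase)}

def encode(word):
    # Translate a word into a sequence of 5-bit groups.
    return " ".join(TO_BITS[c] for c in word)

print(encode("science"))   # 10010 00010 01000 00100 01101 00010 00100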