Title: Chance, Calculation and Life
Author: Group of authors
Publisher: John Wiley & Sons Limited
Genre: Foreign computer literature
ISBN: 9781119823957
In this classical framework, a random event has a cause, yet this cause lies below the threshold of measurement. Thus, Curie’s principle3 is preserved: “the asymmetries of the consequences are already present in the causes” or “symmetries are preserved” – the asymmetries in the causes are just hidden.
For decades, Poincaré’s approach was quoted and developed by only a few, that is, until Kolmogorov’s work in the late 1950s and Lorenz’s in the 1960s. Turing is one of these few: he based his seminal paper on morphogenesis (Turing 1952) on the nonlinear dynamics of forms generated by chemical reactants. His “action/reaction/diffusion system” produced different forms by spontaneous symmetry breaking. An early hint of these ideas is given in Turing (1950, p. 440): “The displacement of a single electron by a billionth of a centimetre at one moment might make the difference between a man being killed by an avalanche a year later, or escaping”. This Poincarian remark by Turing preceded the famous “Lorenz butterfly effect” (proposed in 1972) by 20 years, the latter grounded in Lorenz’s work from 1961.
Once more, many like to call this form of classical randomness “epistemic” unpredictability, that is, related to our knowledge of the world. We do not deal with ontologies here, although this name may be fair, with the distinction from the understanding of randomness as a very weak form of unpredictability proposed by Spinoza. Poincaré brought a fact known since Galileo into the limelight: classical measurement is an interval, by principle4. Measurement is the only form of access we have to the physical world, while no principle forbids, a priori, joining two independent Spinozian dynamics. That is, even as “epistemic”, classical physics posits this limit to access and knowledge as an a priori of measurement. This lack of complete knowledge then yields classical randomness, typically in relation to a nonlinear mathematical modeling, which produces either positive Lyapunov exponents or, more strongly, non-analyticity. In other words, classical systems (as well as relativistic ones) are deterministic, yet they may be unpredictable, in the sense that randomness is neither in the world nor just in the eyes of the beholder: it pops out at the interface between us and the world, through theory and measurement.
By “theory” we mean the equational or functional determination, possibly by a nonlinear system of equations or evolution functions.
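To make the role of positive Lyapunov exponents concrete, recall the standard textbook definition (added here for illustration; it is not specific to this chapter). For two trajectories of the same deterministic system whose initial conditions differ by a small displacement δx(0):

\[ |\delta x(t)| \approx |\delta x(0)|\, e^{\lambda t}, \qquad \lambda = \lim_{t \to \infty} \, \lim_{|\delta x(0)| \to 0} \frac{1}{t} \ln \frac{|\delta x(t)|}{|\delta x(0)|}. \]

If λ > 0 and measurement can fix the initial condition only within an interval of width ε, the uncertainty grows to roughly ε e^{λt}, so predictions lose all precision beyond a horizon of order (1/λ) ln(Δ/ε), where Δ is the size of the accessible region: determinism with unpredictability, exactly as described above.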
1.3. Quantum randomness
Quantum randomness is hailed to be more than “epistemic”, that is, “intrinsic” (to the theory). However, quantum randomness is not part of the standard mathematical model of the quantum, which talks about probabilities, but concerns the measurement of individual observables. So, to give more sense to the first statement we need to answer (at least) the following questions: (1) What is the source of quantum randomness? (2) What is the quality of quantum randomness? (3) Is quantum randomness different from classical randomness?
A naive answer to (1) is to say that quantum mechanics has shown “without doubt” that microscopic phenomena are intrinsically random. For example, we cannot predict with certainty how long it will take for a single unstable atom in a controlled environment to decay, even with complete knowledge of the “laws of physics” and the atom’s initial conditions. We can only calculate the probability of decay in a given time, nothing more! This is intrinsic randomness guaranteed.
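For concreteness, the standard formula behind this claim (textbook material, supplied here as an illustration): an unstable atom with decay constant λ survives to time t with probability e^{−λt}, so

\[ \Pr(\text{decay by time } t) = 1 - e^{-\lambda t}, \qquad t_{1/2} = \frac{\ln 2}{\lambda}. \]

The theory fixes this distribution exactly, yet says nothing about when any individual atom decays: seemingly intrinsic randomness.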
But is it? What is the cause of the above quantum mechanical effect? One way to answer is to consider a more fundamental quantum phenomenon: quantum indeterminism. What is quantum indeterminism and where does it come from? Quantum indeterminism appears in the measurement of individual observables: it has been at the heart of quantum mechanics since Born postulated that the modulus-squared of the wave function should be interpreted as a probability density that, unlike in classical statistical physics (Myrvold 2011), expresses fundamental, irreducible indeterminism (Born 1926). For example, the measurement of the spin, “up or down”, of an electron, in the standard interpretation of the theory, is considered to be pure contingency, a symmetry breaking with no antecedent, in contrast to the causal understanding of Curie’s principle5. The nature of individual measurement outcomes in quantum mechanics was, for a period, a subject of much debate. Einstein famously dissented, stating his belief that “He does not throw dice” (Born 1969, p. 204). Over time the assumption that measurement outcomes are fundamentally indeterministic became a postulate of the quantum orthodoxy (Zeilinger 2005). Of course, this view is not unanimously accepted (see Laloë 2012).
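In the spin example, Born’s postulate takes the following standard form (notation supplied here for concreteness): for an electron prepared in the state

\[ |\psi\rangle = \alpha\,|{\uparrow}\rangle + \beta\,|{\downarrow}\rangle, \qquad \Pr(\uparrow) = |\alpha|^2, \quad \Pr(\downarrow) = |\beta|^2, \quad |\alpha|^2 + |\beta|^2 = 1. \]

The amplitudes α and β are determined exactly by the dynamics; what has no antecedent, on the standard interpretation, is which of the two outcomes occurs in a single measurement.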
Following Einstein’s approach (Einstein et al. 1935), quantum indeterminism corresponds to the absence of physical reality, if reality is what is made accessible by measurement: if no unique element of physical reality corresponding to a particular physical observable (thus, measurable) quantity exists, this is reflected by the physical quantity being indeterminate. This approach needs to be more precisely formalized. The notion of value indefiniteness, as it appears in the theorems of Bell (Bell 1966) and, particularly, Kochen and Specker (1967), has been used as a formal model of quantum indeterminism (Abbott et al. 2012). The model also has empirical support as these theorems have been experimentally tested via the violation of various inequalities (Weihs et al. 1998). We have to be aware that, going along this path, the “belief” in quantum indeterminism rests on the assumptions used by these theorems.
An observable is value definite for a given quantum system in a particular state if the measurement of that observable is pre-determined to take a (potentially hidden) value. If no such pre-determined value exists, the observable is value indefinite. Formally, this notion can be represented by a (partial) value assignment function (see Abbott et al. (2012) for the complete formalism).
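A minimal sketch of the idea, in simplified notation (our simplification; the formalism of Abbott et al. (2012) additionally tracks measurement contexts and admissibility conditions): for a set \(\mathcal{O}\) of projection observables, a partial function

\[ v : \mathcal{O} \rightharpoonup \{0, 1\} \]

assigns outcomes to some observables and leaves others undefined. An observable o is value definite under v if v(o) is defined, that is, its outcome pre-exists measurement, and value indefinite otherwise. The Kochen–Specker theorem shows that, in Hilbert spaces of dimension three or more, no admissible, non-contextual assignment v can be total.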
When should we conclude that a physical quantity is value definite? Einstein, Podolsky and Rosen (EPR) defined physical reality in terms of certainty of predictability in Einstein et al. (1935, p. 777):
If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity.
Note that both allusions to “disturbance” and to the (numerical) value of a physical quantity refer to measurement as the only form of access to reality we have. Thus, based on this accepted notion of an element of physical reality, and following Abbott et al. (2012), we answer the above question by identifying the EPR notion of an “element of physical reality” with “value definiteness”:
EPR principle: If, without disturbing a system in any way, we can predict with certainty the value of a physical quantity, then there exists a definite value prior to the observation corresponding to this physical quantity.
The EPR principle justifies:
Eigenstate principle: a projection observable corresponding to the preparation basis of a quantum state is value definite.