Title: Chance, Calculation and Life
Author: Group of authors
Publisher: John Wiley & Sons Limited
Genre: Foreign computer literature
ISBN: 9781119823957
The repetition of E must follow an algorithmic procedure for resetting and repeating the experiment; generally, this will consist of a succession of events, with the procedure being “prepared, performed, the result (if any) recorded and E being reset”.
The definition above captures the need to avoid correct predictions by chance by forcing more and more trials and predictions. If PE is k-correct for ξ, then the probability that such a correct sequence would be produced by chance is bounded by 2^(-k); hence, it tends to zero as k goes to infinity.

The confidence we have in a k-correct predictor increases as k approaches infinity. If PE is k-correct for ξ for all k, then PE never makes an incorrect prediction, and the number of correct predictions can be made arbitrarily large by repeating E enough times. In this case, we simply say that PE is correct for ξ. The infinity used in the above definition is potential, not actual: its role is to guarantee arbitrarily many correct predictions.
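The bound above can be checked with a minimal Monte Carlo sketch (in Python; the function name and parameters are illustrative, not from the text): a predictor that guesses each bit uniformly at random produces k consecutive correct predictions only with probability 2^(-k).

```python
import random

def chance_k_correct(k: int, trials: int = 100_000, seed: int = 0) -> float:
    """Estimate the probability that a predictor guessing each bit
    uniformly at random is correct on k consecutive trials."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # A run succeeds only if all k random guesses match k random outcomes;
        # each comparison succeeds with probability 1/2, independently.
        if all(rng.randint(0, 1) == rng.randint(0, 1) for _ in range(k)):
            hits += 1
    return hits / trials

# The estimate tracks the analytic bound 2**-k, which vanishes as k grows.
for k in (1, 5, 10):
    print(k, chance_k_correct(k), 2**-k)
```

For k = 10 the chance of a spurious k-correct run is already below 0.1%, which is why forcing more and more correct predictions rules out luck.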
This definition of correctness allows PE to refrain from predicting when it is unable to. A predictor PE which is correct for ξ is, when using the extracted information ξ (λ), guaranteed to always be capable of providing more correct predictions for E, so it will not output “prediction withheld” indefinitely. Furthermore, although PE is technically only used a finite, but arbitrarily large, number of times, the definition guarantees that, in the hypothetical scenario where it is executed infinitely many times, PE will provide infinitely many correct predictions and not a single incorrect one.
Finally, we define the prediction of a single bit produced by an individual trial of the experiment E. The outcome x of a single trial of the experiment E performed with parameter λ is predictable (with certainty) if there exist an extractor ξ and a predictor PE which is correct for ξ and PE (ξ (λ)) = x.
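The extractor/predictor machinery above can be sketched as code. This is a minimal Python model under our own naming (Extractor, Predictor and run_trials are illustrative, not from the source); it records how many predictions are correct and incorrect over repeated trials, with "prediction withheld" counting as neither.

```python
from typing import Callable, Optional

# Hypothetical types, for illustration only: an extractor maps the
# experiment's parameter λ to finite information; a predictor maps that
# information to a predicted bit, or to None ("prediction withheld").
Extractor = Callable[[str], str]
Predictor = Callable[[str], Optional[int]]

def run_trials(outcomes: list[int], params: list[str],
               xi: Extractor, pe: Predictor) -> tuple[int, int]:
    """Count (correct, incorrect) predictions of pe over repeated trials.

    On this run, pe is k-correct in the sense of the text if
    correct >= k and incorrect == 0."""
    correct = incorrect = 0
    for x, lam in zip(outcomes, params):
        guess = pe(xi(lam))
        if guess is None:          # prediction withheld: counts as neither
            continue
        if guess == x:
            correct += 1
        else:
            incorrect += 1
    return correct, incorrect
```

For example, a predictor that withholds whenever the extracted information is uninformative can still be k-correct, since only actual incorrect outputs disqualify it; this mirrors the definition's allowance for "prediction withheld".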
By applying the model of unpredictability described above to quantum measurement outcomes obtained by measuring a value indefinite observable, for example, obtained using Theorem 1.1, we obtain a formal certification of the unpredictability of those outcomes:
THEOREM 1.3. (Abbott et al. 2015b) – Assume the EPR and Eigenstate principles. If E is an experiment measuring a quantum value indefinite observable, then for every predictor PE using any extractor ξ, PE is not correct for ξ.
THEOREM 1.4. (Abbott et al. 2015b) – Assume the EPR and Eigenstate principles. In an infinite independent repetition of an experiment E measuring a quantum value indefinite observable which generates an infinite sequence of outcomes x = x1x2…, no single bit xi can be predicted with certainty.
According to Theorems 1.3 and 1.4, the outcome of measuring a value indefinite observable is “maximally unpredictable”. We can measure the degree of unpredictability using the computational power of the predictor.
In particular, we can consider predictors weaker or stronger than those used in Theorems 1.3 and 1.4, which have the power of a Turing machine (Abbott et al. 2015a). This “relativistic” understanding of unpredictability (fix the reference system and the invariant-preserving transformations, as in the approach proposed by Einstein’s relativity theory) allows us to obtain “maximal unpredictability”, but not absolutely: only relative to a theory, no more and no less. In particular, and from this perspective, Theorem 1.3 should not be interpreted as a statement that quantum measurement outcomes are “true random”7 in any absolute sense: true randomness – in the sense that no correlations exist between successive measurement results – is mathematically impossible, as we will show in section 1.5 in a “theory-invariant” way, that is, for sequences of pure digits, independent of the measurements (classical, quantum, etc.) from which they may have been derived, if any. Finally, question (3) will be discussed in section 1.6.2.
1.4. Randomness in biology
Biological randomness is an even more complex issue. Both in phylogenesis and ontogenesis, randomness enhances variability and diversity; hence, it is core to biological dynamics. Each cell reproduction yields a (slightly) random distribution of proteomes8, DNA and membrane changes, all largely due to random effects. In Longo and Montévil (2014a), this is described as a fundamental “critical transition”, whose sensitivity to minor fluctuations, at the transition, contributes to the formation of new coherence structures, within the cell and in its ecosystem. Typically, in a multicellular organism, the reconstruction of the cellular micro-environment at cell doubling, from collagen to cell-to-cell connections to the general tensegrity structure of the tissular matrix, yields a changing coherence that contributes to variability and adaptation, from embryogenesis to aging. A similar phenomenon may be observed in an animal or plant ecosystem, a system characterized, however, by a lesser coherence in its structure of correlations in comparison to the global coherence of an organism9.
Similarly, the irregularity in the morphogenesis of organs may be ascribed to randomness at the various levels concerned (cell reproduction and frictions/interactions in a tissue). Still, this is functional, as the irregularities of lung alveoli or of branching in vascular systems enhance ontogenetic adaptation (Fleury and Gordon 2012). Thus, we do not call these intrinsically random aspects of onto-phylogenesis “noise”, but consider them as essential components of biological stability, a permanent production of diversity (Bravi and Longo 2015). A population is stable because it is diverse “by low numbers”: 1,000 individuals of an animal species in a valley are more stable if they are diverse. From low numbers in proteome splitting to populations, this contribution of randomness to stability is very different from stability derived from stochasticity in physics, typically in statistical physics, where it depends on huge numbers.
We next discuss a few different manifestations of randomness in biology and stress their positive role. Note that, as for the “nature” of randomness in biology, one must refer, at least, to both quantum and classical phenomena.
First, there exists massive evidence of the role of quantum random phenomena at the molecular level, with phenotypic effects (see Buiatti and Longo 2013 for an introduction). A brand-new discipline, quantum biology, studies applications of “non-trivial” quantum features such as superposition, non-locality, entanglement and tunneling to biological objects and problems (Ball 2011). “Tentative” examples include: (1) the absorbance of frequency-specific radiation, i.e. photosynthesis and vision; (2) the conversion of chemical energy into motion; and (3) DNA mutation and activation of DNA transposons.
In principle, quantum coherence – a mathematical invariant for the wave function of each part of a system – would be destroyed almost instantly in the realm of a cell. Still, evidence of quantum coherence was found in the initial stage of photosynthesis (O’Reilly and Olaya-Castro 2014). Then, the problem remains: how can quantum coherence last long enough in a poorly controlled environment at ambient temperatures to be useful in photosynthesis? The issue is open, but it is possible that the organismal context (the cell) amplifies quantum phenomena by intracellular forms of “bio-resonance”, a notion defined below.
Moreover, it has been shown that double proton transfer affects spontaneous mutation in RNA duplexes (Kwon and Zewail 2007). This suggests that the “indeterminism” in a mutation may also be given by quantum randomness amplified by classical dynamics (classical randomness, see section