Timeless Variation: A Contribution to the Metaphysics of Quantum Gravity (2/2)
The Second Condition of Any Physical Variation Is that There Is Always a Continuous Transition Between Two States of Variation (Condition of Continuity)
1 Although there really is a change of state from A to A’ and from A’ to A”, there is however no discontinuity between these two states: no “hole,” “break,” or “gap” in the change or transition itself. The distance separating two states of variation could always be made infinitely small, i.e. ultimately reduced to nothingness, so that nothing disrupts the variation. Between the 17th and 19th centuries, physicists and mathematicians gave a rigorous definition to these intuitive conceptualizations, which resulted in the construction of real numbers and therefore the real number line, i.e. – in Georg Cantor’s terms – the construction of the “power of the continuum” that real numbers and the real number line both possess. But the source of these mathematical developments is a tool that Leibniz and Newton created to establish the computational power of the emerging field of classical mechanics: infinitesimal calculus. What soon became a distinct field of mathematics – mathematical analysis – was, however, based entirely on a hypothesis that had never been closely examined before the advent of the theories of quantum gravity: the hypothesis of the continuity of time and, consequently, of any variation that took place within it.
2 The idea that any variation is continuous is not at all intuitively obvious: it arises from a theoretical construction. The proof of this is that before the invention of differential calculus, thought was strangely helpless when faced with Zeno’s paradoxes, which aimed to establish the fundamental inconceivability of movement. These paradoxes all obey the same implicit logic, which Bergson was able to analyze in detail: they demonstrate that (1) by necessity, all movement may be broken down into an infinite number of immobile positions; (2) however, it is impossible to reconstitute the movement on the basis of those positions. [1] We know that the essential contribution of differential calculus was to transform the indefinitely divisible into finite limits: when the value of a variable, for example the distance between two points, tends toward 0, the relation of this variable to another (in other words the function that links them) tends toward a finite limit. Fermat had anticipated the modern formulation of the notion of limits, but Newton and Leibniz were the ones who foregrounded this relation by conceiving it as one between infinitesimal quantities that vanish: dy/dx. The later interpretation of these “quantities” then shows, as Deleuze recalls, that “[t]he relation dy/dx is not like a fraction which is established between particular quanta in intuition, but neither is it a general relation between variable algebraic magnitudes or quantities. 
Each term exists absolutely only in its relation to the other: it is no longer necessary, or even possible, to indicate an independent variable.” [2] And yet the calculation of the “fluents” (the “gradual” variables) and “fluxions” (the rate of change of the variables), which is associated with Newton and all of the classical mechanics that came after him, does tend to have the time variable playing this irreconcilable double role: although it is uniquely defined in and through this relation as a continuous variable, it also exists independently of it as an independent variable. Yet if the limit of the relation between variables ensures the continuity of the curves delineating themselves within the time variable (a function is continuous at a given point if and only if it admits a limit at that point that coincides with its value there), what could the continuity of that variable itself be based on, other than that same relation, that same calculation of limits?
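In modern notation (a reconstruction in today’s terms, not the 17th-century formulation), the two claims just made – that the relation between vanishing quantities tends toward a finite limit, and that continuity is itself defined through limits – read:

```latex
% Derivative as the finite limit of a relation between vanishing quantities
\frac{dy}{dx} \;=\; \lim_{\Delta x \to 0} \frac{\Delta y}{\Delta x}
  \;=\; \lim_{\Delta x \to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x}

% Continuity of f at a point a: the limit exists and coincides with the value
\lim_{x \to a} f(x) \;=\; f(a)
```

Both definitions presuppose that the variable x – in mechanics, the time variable t – can be made to approach its target through arbitrarily small intervals, which is precisely the presupposition under examination here.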
3 Newton still recognized that this intrinsic continuity of time had to be explicitly asserted, i.e. presupposed as such. The construction of the real number field and the mere identification of the time variable with a real variable, and thus with the real number line, has tended to obscure that presupposition ever since. And yet the real numbers are themselves defined as the complete ordered field formed by the limits of rational number sequences (known as Cauchy sequences), and therefore by the same underlying way in which the variables are placed in relation to each other. All of the mathematical tools that were created later on to apprehend continuity and guarantee the possibility of continuous functions (topology, metrics, etc.) continue to rely upon that implicit presupposition by studying the intervals between limits (notably conceived independently of any notion of distance, in the case of topology). Einstein’s two-part revolution, which is fundamentally a revolution of metrics – combining time and space (the Lorentzian signature) and then including the effects of matter-energy (the metric tensor) – thus continues to conceive of spacetime as a continuum that mathematically encodes the idea not only of topological manifolds but also of metric, and therefore differentiable, manifolds.
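The dependence of the real number line on the same calculation of limits can be made explicit. As a standard illustration, a rational sequence (q_n) is a Cauchy sequence when its terms become arbitrarily close to one another:

```latex
% Cauchy criterion for a sequence of rationals (q_n)
\forall \varepsilon > 0,\ \exists N \in \mathbb{N},\ \forall m, n > N :\ |q_m - q_n| < \varepsilon
```

The reals are then defined as the completion of the rationals under this criterion: every such sequence converges to a real limit. Identifying the time variable with a real variable therefore builds this completeness – and with it the presupposition of continuity – into the very definition of time.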
4 But didn’t quantum mechanics bring about this reassessment of the hypothesis of the continuity of spacetime, and therefore of time in particular? At first glance, this is not the case: quantization does not necessarily mean discretization. Planck’s constant does discretize some variables or operators such as action, energy, etc., whose spectrum of values or possible states is therefore encoded in a vector space (Hilbert space), i.e. in a given set of orthogonal vectors on which the space is based. However, the position variable that determines the simultaneous localization of these states in classical space, and Schrödinger’s (differential) equation that determines the linear transformation of the state vector or the operators in classical time, remain perfectly continuous. And yet quantum mechanics also draws on two fundamental principles that tend to liberate the variations it handles from this condition of continuity. First, Heisenberg’s uncertainty (or, more exactly, indeterminacy) principle, which translates the non-commutativity of what are known as the conjugate variables (in particular energy and time), leads to the constant insertion of a relative discontinuity, however brief, within the continuum of time (since the duration is inversely proportional to the quantity of energy that is spontaneously created, violating the law of conservation, in accordance with this uncertainty relation – among others – between time and energy). Second, the principle (which should rather be called a postulate) of wave-packet reduction (translating the projection of the state vector onto one of its proper states), which takes place instantaneously, beyond time (unlike decoherence, which takes place progressively, within time), causes the evolution of the wave function to lose its linear character and introduces a partial contingency that is subject to the laws of probability (the Born rule).
Each case demonstrates that something may vary in a physically calculable and testable way without varying continuously. Fluctuations and reductions do not just insert themselves into the timeframe: they move fundamentally beyond it. The first do so by violating the law of conservation of energy, while the second violate the linear evolution of any variable in time.
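The fluctuations in question obey the standard energy–time indeterminacy relation, by which the permitted duration of a violation is inversely proportional to its magnitude:

```latex
% Energy-time indeterminacy: the larger the spontaneous energy fluctuation
% \Delta E, the shorter the interval \Delta t over which it can persist
\Delta E \, \Delta t \;\gtrsim\; \frac{\hbar}{2}
```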
5 Although quantum mechanics thereby brings the spacetime continuum into tension, it also retains it as a medium for its events – as demonstrated by quantum field theory. The theory of loop quantum gravity has ultimately shattered that continuum for good. This time, the quantization of space itself leads to the discretization of its measurement (distance, area, volume, etc.). Space thus becomes the smooth approximation of a web comprising spin networks (which become spin foams over time) whose dimensionality, probably infinite in theory at the outset, is effectively reduced to a three-part neighboring relation. [3] But what then of the inevitable correlative discretization of time? This idea of “atomic time” is necessarily surrounded by a great deal of conceptual fuzziness. It entails a number of pitfalls to avoid. The first arises from the very fact whose history we have just retraced: the continuity of time is part of its very definition. It is a variable that was constructed to be continuous and to thereby endow everything that varies within it with that same continuity. The second pitfall to avoid is to believe that quantization always retains the nature of what is quantized. In fact, spin networks have absolutely nothing to do with discontinuous pieces of space. Similarly, hypothetical quanta of time (lasting one unit of Planck time) could never be likened to discontinuous pieces of time but would by necessity be of a different nature from time. This is why it is important to consider the idea of a variation more fundamental than time, whose condition of continuity may be lifted without contradiction: it would have no need to be related to a real or continuous underlying variable. In our example, A, A’ and A” thus appear as “quanta of variation,” as the transition from one to the next is irreducibly discrete.
But the third pitfall is to imagine that this discontinuity could be nonradical, in other words that it could continue to be reduced to mere fluctuations or reductions that become those of smooth spacetime itself. Actually, this idea is contradictory, since the fluctuations are always defined in relation to a continuous timeframe that they partially interrupt, a timeframe that is thereby reintroduced surreptitiously under the guise that one can do without it.
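The discretization of spatial measurement invoked above has an exact expression in loop quantum gravity. A standard example is the spectrum of the area operator, where the half-integer spins j_i label the links of a spin network puncturing the measured surface (γ being the Immirzi parameter and ℓ_P the Planck length):

```latex
% Area takes only discrete values, built from the spins j_i = 1/2, 1, 3/2, ...
A \;=\; 8 \pi \gamma \, \ell_P^{2} \sum_i \sqrt{j_i \, (j_i + 1)}
```

Nothing in this spectrum resembles a “small piece” of continuous area: the discrete values are of a different nature from the smooth magnitude they approximate, which is the point being made here about hypothetical quanta of time.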
6 In reality, it is not possible to insert an irreducible gap between two states of variation of any kind, without being obliged to deduce radical consequences – without therefore entering a world of constant disjunctions in which, from the start, no persistence of any sort would occur, a world that no differential equation or real variable could adequately describe. Even more fundamentally, such a world would lose all necessity: it would be a world where every causal chain is, in principle, radically contingent. It is the world that Hume brought to light by destroying any basis for a necessary connection between cause and effect and that Kant, in turn, illustrated perfectly with the example of cinnabar changing color at every moment, all the better to ward off the transcendental possibility of such a thing happening. Today, Quentin Meillassoux, revisiting “Hume’s problem,” explores this world for its own sake by taking as his starting point the principle that no other necessity is conceivable or provable beyond the single fact of contingency, resulting in the idea of “hyper-Chaos,” by virtue of which anything may come after anything else. [4] A variation that is liberated from the condition of continuity constituting all temporality therefore possesses, by necessity, a contingency of this kind, one that is in fact always partial and conditional, but in theory radical and unconditional.
The Third Condition of Any Physical Variation Is That Its States Are Produced Successively Without the Ability to Coexist Simultaneously (Condition of Succession)
7 If A becomes A’ and A”, this means that we are dealing with three values of one variation, and therefore of one variable. And yet, due to the simple fact that they are distinct, these values could only come one after the other. Even if the two first presupposed conditions of any variation are lifted, i.e. even if a variation is conceived as being able to vary without the least invariant or the least continuity, a third restrictive condition can subsist as such – and even seems required to do so, despite everything: the “condition of succession.” This condition is probably the most fundamental and the most inextricable property of time. Even Meillassoux’s hyper-Chaos, as a principle of radical contingency, does not seem to be exempt from it: states of variation that no longer mark out any possible line of continuity, removing any subsistence of time, will in this case nevertheless continue to follow in succession throughout a (purely formal, contentless) period of time to which such a hyper-Chaos ultimately remains equivalent. But what does the fact that states of variation follow each other fundamentally mean? And what would stand in the way of conceiving of a variation that was not only successive but first of all, and more fundamentally, superposed?
8 Our problem here is not to question the objective or illusory reality of the succession of distinct states, but rather to question the very necessity of their succession. And yet a detour by way of the first problem is necessary for a better comprehension of the specificity and profundity of the second. Asserting that things really – and not illusorily – follow one after the other amounts to asserting that “nowness” or “presentness” exists – but not, as we have seen, that only the present exists or that such a present exists universally. And yet, in our view, Bergson seems to have shown once and for all why states of variation really do follow each other: quite simply because, like the images of a cinematic projection that each fill the entire screen by themselves, they have no extra space in which to unfold, to spread out. By necessity, therefore, they have to replace each other. [5] As a result, nothing is easier than to really engender a present, i.e. to move from a purely causal order involving no present point of view (McTaggart’s B series) to a series that does involve one (the A series). [6] All it takes is to conceive of a variation with no extra dimension, a variation that can only vary within itself, with the result that its most recent state necessarily replaces the preceding one.
9 As decisive as it is, the Bergsonian argument nevertheless involves a presupposition that is as massive as it is imperceptible: why do these states have to follow each other, and replace each other? Why couldn’t they add themselves to each other, “overprinting” each other in a way (a coexistence that Bergson presents as another characteristic of duration conceived as memory and not just as variation in itself)? Quite simply because of the principle of contradiction that his argument implicitly but absolutely requires in order to function: it is not just because they have no allotted extra place but also because they are mutually contradictory, mutually incompatible, because they cannot exist simultaneously and therefore replace each other out of necessity, producing an effect of succession. Here we return to the close relationship between time and the principle of contradiction that philosophy has repeatedly highlighted throughout its history. An attribute, as Aristotle sees it, can both belong to something and not belong to it, as long as this twofold attribution does not take place at the same time: [7] in other words, as long as these two contradictory properties follow each other, precisely defining a becoming of the thing in question that avoids the contradictory nature of Heraclitean becoming. The classical conception of time, freed from any particular becoming, will take on the exact same role that Gödel set out in these terms: “Time is the means by which God achieved the inconceivable – that P and non-P are both true.” [8] As long as the contradiction of the terms (characterized in the strongest sense – with the added involvement of the law of excluded middle – by the necessary alternative between the two, and in the weakest sense by their mere incompatibility) has “the space of time,” so to speak, in which to unfold, the principle of contradiction is safe.
Consequently, what is at stake with Hegelian dialectics becomes less a matter of refusing the principle of contradiction and more one of reaffirming its indissoluble link with time. Contradiction is what all temporality is actually based on and what makes becoming necessary: conversely, the unfolding of time ensures the truthfulness of the “instantaneous” principle of contradiction.
10 Why, then, couldn’t the states of a given thing, which are by necessity alternative or merely incompatible, coexist without needing to exclude or replace each other, successively or dialectically? While some logics (known as paraconsistent logics) tend to deny the principle of contradiction, they can only do so at the cost of numerous contortions (to avoid the correlative principle of “explosion,” which enables the deduction of anything from a contradiction). But Nature itself can effortlessly achieve what logic cannot (or only with difficulty). In fact, quantum formalism contains a principle within itself that tends to violate the principle of contradiction: the superposition principle. This principle, which follows naturally from the vector space used in this formalism, stipulates that at the quantum scale, the addition of two or more possible distinct states of one variable (which is then called an observable and assumes the form of an operator that takes any state vector of a Hilbert space and breaks it down into a combination of proper states) is also a possible state of that variable. These distinct states are then referred to as superposed states: in other words, they coexist in reality [9] for a system and a variable as considered from a point of view that always remains external to them, [10] whatever their presumed incompatibility in classical (non-quantum) terms may be. These states include: spin up and spin down along a chosen axis of rotation; a position x, x’ and x”, etc. for an isolated “particle”; a number – 0, 1 and 2, etc. – of “particles” for a defined space (making use of a Fock space, a tensor product of Hilbert spaces), etc. [11] Any superposition of states can then be seen, in theory, as exempt from the principle of contradiction, and therefore from the successive nature of time – and vice versa.
But any superposition of states still undergoes time-dependent changes as well, in a successive and unitary fashion, obeying Schrödinger’s equation mentioned earlier. Therefore, any quantum system constitutes one or several states (themselves superposed) of superposed variation (what mathematicians call a spectrum) that are partially subject to successive or temporal states of variation – which is implicitly indicative of a twofold relationship to time where the contradiction is not yet blatant.
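The twofold relationship just described can be summarized in standard bra–ket notation: superposition and unitary evolution on one side, exclusive projection on the other:

```latex
% Superposition principle: any linear combination of proper states (eigenstates)
% of an observable is itself a possible state of the system
|\psi\rangle \;=\; \sum_i c_i \, |a_i\rangle , \qquad c_i \in \mathbb{C}

% Continuous, successive, unitary evolution in time (Schrödinger's equation)
i \hbar \, \frac{d}{dt} \, |\psi(t)\rangle \;=\; \hat{H} \, |\psi(t)\rangle

% Projection (measurement): a single value a_i is retained, excluding all others,
% with probability given by the Born rule
P(a_i) \;=\; |c_i|^2 \;=\; |\langle a_i | \psi \rangle|^2
```

The first two equations describe a variation exempt from the condition of succession yet still indexed to an external time parameter t; the third marks the moment where the two regimes collide.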
11 However, it becomes blatant with the (objective) projection of the state vector onto a proper vector (taking place in particular during a measurement), which – as we have seen – instantaneously divests Schrödinger’s equation of its linearity as well as its unitarity, only retaining a measured value, in other words a unique value, ruling out any other. But this so-called “measurement problem” is no longer a problem if one understands that it reveals the interaction or encounter between a variation that is not subject to the constituent constraints of time and a time that is subject to those constraints, in particular the principle of contradiction entailed by the condition of succession. Each point of spacetime, which in a way carries with it the principle of contradiction, would thus manifest itself as a principle of exclusion. Measuring means putting a question to a system (as any system does in relation to any other with which it objectively interacts), thereby constraining it to respond at a given moment: the moment of the question asked of it. In other words, it means constraining the system to fit one response into a given moment, from a given spatiotemporal point of view that becomes, in a way, internal to the new system formed by the initial system and what measures it (or what it interacts with). While this response may, under some conditions (depending on whether the operators involved commute), take multiple forms, it will in all cases rule out any other that was superposed on it. Conversely, in the case of Schrödinger’s equation, no exclusive question is asked of the system: the (consequently unitary) evolution of its state of superposition is determined from an external point of view (that of linear time). But the system is not constrained to determine itself in and for a given time (or point of view): it is not assigned a spatiotemporal determination, “a local beable,” [12] that becomes internal to it in a way and divests it, by necessity, of its unitarity. [13]
12 Other observations resulting from the formalism that is used (Hilbert and Fock spaces) confirm that quantum systems have no spatiotemporal localization in themselves: this is the “quantum entanglement” that simply translates the non-localizability and non-separability of the states of a state vector. These states remain non-localizable and non-separable even if they are possessed by two distinct “elements” (known for the sake of convenience, though somewhat awkwardly, as “particles”), which may be both spatially separate (in the case of “EPR” particles) and temporally separate (in “delayed-choice” experiments), with no possibility of material interaction. Alain Connes, using this quantum formalism that he sets on new mathematical foundations, has thus concluded that physicists (as well as philosophers of science) “are mistaken when they try to situate quantum variability within the flow of time.” [14] This is another way of saying that quantum variation, which is intrinsically neither local nor successive, must not be confused with the local and successive spacetime to which it belongs when it is led to interact with the environment and therefore to produce measurable effects. We may add that the loop theorists go one step further in this logic by imagining the theoretical entanglement or non-separability of points of space (or spacetime) themselves and by then attempting to generate on that basis their effective separability (in other words the spacetime in which we live and in which, in turn, quantum systems partially unfold).
13 We have therefore shown that each of the three supposedly restrictive conditions on any variation can and even must be lifted – by relying on general relativity for the first condition and on quantum mechanics for the two others – in order to take into account the profound lessons from these two theories on the way in which things are produced, and how their variables and constituent states vary. Each time, the theory requires the abandonment of one of the constituent properties of any time variable: a variation may first be deprived of an invariant development; second, of a continuous development; third, of a successive development. Now when these three elements are taken away, the notion of time in the minimal but fundamental sense of a time variable loses all meaning: almost nothing remains. On the other hand, the notion of variation is still capable of maintaining its coherence.
14 So when the loop theorists try to set out the metaphysics corresponding to their theoretical breakthrough, which obliges them in particular to rethink time as it exists in the equations, sometimes they imagine a time that is at once dispensable / not measurable, discontinuous / fluctuating and superposed / decoherent. Quantum gravity thus leads to attributing properties to time that contradict its very definition, i.e. what it defines and that in turn defines it as such. The first of these properties is measurement, in contrast to a variable speed of variation or what should be called an indeterminate rhythm. The second is continuity, in contrast to discontinuity leading to the sudden emergence of an absolute contingency. The last is succession, in contrast to simultaneity, which therefore always tends to amount to a contradiction on a practical level. To escape this contradictory use of the notion of time, it would therefore be preferable and even necessary to abandon any reference, even an indirect one, to the notion of time and to directly conceptualize what is really at stake in this case: a variation without universally determined or determinable intervals, without necessarily continuous or deducible transitions, without successive or noncontradictory states, or states that are identical in themselves.
15 What conditions are required so that varying without invariance, or continuity, or succession can define not only the partial aspects of reality to which the best existing theories in physics attest, but also its overall aspect, for which the conception of its unity is the task of metaphysics? In order to make these first three properties compatible and thereby obtain a coherent vision of variation, two major theoretical obstacles must be removed: (1) believing that variation always takes place in something, (2) believing that it is always the variation of something. The whole purpose of our approach has been to remove the first obstacle: the idea that any variation takes place in a time (variation as change) and therefore also a spacetime (variation as translation) that underlies it. We have shown that it was not just possible but actually absolutely necessary to reverse the relation and to think that any spacetime produced and deployed itself within deeper variations that underlay it. But what of the second obstacle? By studying the first condition of any temporality, i.e. the subsistence of an invariant (the measurement or substratum of any variation), we have seen not just that the subsistence of a measurement was a Newtonian paradigm that was surpassed by Einstein’s two-part relativity, but moreover that the subsistence of a substratum was an Aristotelian paradigm that still had something to say to contemporary physics and metaphysics and was much harder to transcend. How in fact could a variation come into being and subsist without a substratum?
How could it avoid always being the variation of something that it does not itself vary (a particle, a field, a void, a symmetry group, etc.)?
16 A path of this kind is exactly what Bergson attempted to open, metaphysically speaking, in his text “The Perception of Change,” [15] by affirming the idea of a pure becoming with no substratum, of change as the one and only existing substance – which he calls duration. This concept of duration, however, does not allow for an appreciation of the radically new aspect of his perspective.
17 Let us examine two ways of approaching duration. In the first, it is seen as a new metaphysical version of Newtonian and Kantian time. In this case, the Bergsonian gesture is not only reduced to the fact of affirming the objective absoluteness of this same time, following Newton’s example and rejecting Kant’s, but also, contrary to the development of the revolution of relativity, it is reduced to the fact of affirming the radical preeminence of time over space. Here, time is not space, nor is it the time of anything: it is an absolute duration in itself.
18 In the second case, we understand – and Bergson himself invites us to do so by emphasizing this point repeatedly – that duration differs from abstract physico-mathematical time by the fact that it is always the duration of something, a material becoming, a time that is experienced concretely (although not necessarily subjectively) each time. Here, however, the originality of the perspective refusing any unchanging substratum beneath the change is lost, and duration remains assigned to the entity that it is the duration of. Actually, this perspective can only appear in all its originality if one takes care, as we have done, to distinguish time from variation. (If we were to use the Bergsonian method against his own theory, we could say that duration is only a “bad mix” of the two.) Conceiving of a variation that would be unconditional as such, that would therefore not initially be a variation of something (but would be so only secondarily), leads to inverting the usual image according to which time does not belong to anything (as all things belong to it) whereas variation would always belong to something. What is true is precisely the opposite: variation does not belong to anything (pure or unconditional variation), whereas time is always the time of something (proper time, material duration).
19 It remains to be determined, however, if this step toward a pure or unconditional variation, which would underlie all of its necessarily conditioned concrete manifestations, would not just be metaphysical. Can a variation of nothing, which would not involve any underlying subsistence or initial conditioning, really receive a physical meaning, i.e. a theoretical if not an experimental translation? Doesn’t that conflict with the very logic of any physical theory, for which any particular variation always constitutes the explicandum, while the set of general invariants, on which the variation relies for that explanation, is the explicans? We have seen two specific forms that these invariants take (ontological subsistence and universal measurement), but other general invariants are involved, such as the fundamental constants discovered by physics (c, h, G and perhaps N_A as well, which serves as a basis for k_B) or the laws that determine the relation between these constants and variable quantities, thus consisting of (differential) equations whose resolution then presupposes the choice of initial conditions, i.e. of initial values for the variables. But by wanting to extract and liberate variation from all of these forms of invariants, as fundamental as they may be, don’t we ultimately risk placing ourselves beyond the frontiers of any physics and even any possible science? And yet in our view, it seems as though the theories of quantum gravity will eventually be forced to discover and explore such a horizon. Something will be sought there that is both more fundamental than the smooth spatiotemporal container (which would only be a boundary case) and the quantum material as content (which would only be a secondary manifestation).
Some theories still under development tend in various ways to identify this twofold “something” as a “fundamental degree of freedom,” in other words as a degree of freedom that is neither “of” something nor “in” something, but that manifests a “freedom in itself” of which both the container and the content of our world could and should be a product, while not leaving any testable trace itself. [16] Doesn’t this mean that contemporary physics, taking a cue from contemporary metaphysics, increasingly tends to take as its sole starting point a pure or unconditional variation on the basis of which any invariant of any kind (spacetime, constants, laws, symmetry groups, fields, particles, etc.) must be explained, i.e. must be both produced and understood?
Notes

[1]
See in particular Bergson [1889], 82-86 [110-115].

[2]
Deleuze [1968], 223 [172].

[3]
Konopka et al. [2006].

[4]
Meillassoux [2006].

[5]
“Where would the film be housed? By hypothesis, each of the images, covering the screen by itself, fills up all of a perhaps infinite space, that of the universe. These images therefore really have no alternative but to exist successively; they cannot be given globally.” Bergson [1922], 157 [141].

[6]
McTaggart [1908].

[7]
“It is impossible for the same thing at the same time both to be-in and not to be-in the same thing in the same respect.” Aristotle, “Book Gamma,” The Metaphysics, trans. Hugh Lawson-Tancred (London / New York: Penguin, 1998), 88.

[8]
An excerpt from the “Gödel Papers” translated and quoted by Cassou-Noguès [2007], 47.

[9]
This superposition is in no sense a mathematical artifice describing the probability of possible but non-existent states prior to any measurement, i.e. any supposedly concrete realization of a given variable. It is in fact a perfectly real superposition of states producing effects that can be tested. One example is the fringes in Thomas Young’s interference experiment when carried out at the quantum level – by emitting one quantum at a time – which does not in any way mean recognizing wave-particle duality: it means understanding the superposition of states of the variable or the position operator of the quantum in question. These effects from the superposed states will soon be usable for carrying out calculations: there is for example the case of qubits, or superposed states of 0 and 1, which may serve as a basis for future quantum computers.

[10]
It has in fact become possible to show that the state of superposition appeared or disappeared depending on whether a (non-subjective) point of view – but one that was to some degree “external” or “internal” – had been adopted toward the system (and even within it, on the variable under consideration, whether it was conjugate or not, i.e. whether or not it commuted with other variables, the non-commuting variables defining the Heisenberg uncertainty relations mentioned earlier): see Bartlett et al. [2005]. This ties in with Carlo Rovelli’s generalized relationist interpretation of quantum mechanics [1997], which he recently reworked and expanded [2021]. This relationist interpretation has the great merit of having no subjectivist elements, of considering every measurement as just an objective interrelationship between two systems. But actually, as Rovelli rightly notes, this interrelationship always takes place according to a third point of view. In our opinion, this point of view seems capable of being internal or external to the first relation, thereby conferring as much reality to quantum superposition (external point of view) as to its projection onto an exclusive value (internal point of view, discussed in more detail below).

[11]
Thus the famous image of Schrödinger’s cat that is both dead and alive (or awake and asleep as Carlo Rovelli suggests, refusing to accept that an experiment, even just a thought experiment, would put a cat to death...) simply aims to demonstrate this violation of the principle of contradiction on a macroscopic level.

[12]
“A local beable”: in English in the text [translator’s note].

[13]
Between these two poles (unitary evolution or exclusive measurement) there is the process of decoherence, which has been experimentally proven and carefully measured and which describes the progressive loss of superposition, i.e. of cross terms, and therefore of genuine superpositions of results (among the possible superposed results), under the effect of the interaction of a system with its environment.

[14]
Connes et al. [2013], 70. The author-character adds this assertion, which perfectly summarizes everything that we are trying to conceptually demonstrate here: “I think that quantum variability is more primary, more fundamental, than its temporal inscription and that we must reverse the hierarchy with the variability that originates in the flow of time” (Connes et al. [2013], 70). [The author appears as an “author-character” as the book is written in the form of a crime novel – translator’s note]

[15]
Bergson [1934], 143-176 [153-186].

[16]
See Oriti [2021], in particular 23.