If we were to allow for the possibility of there being a time when the second law of thermodynamics did not hold, this might allow for Maxwell's Demon-like behaviour to have been possible, and hence for new energy to have been created out of nothing, at an exponentially increasing rate. This, of course, is exactly what appears to have happened in the first fraction of a second after the Big Bang. So, it is tempting to investigate if there could be a connection between the two ideas.
Unfortunately, this means that this exotic epoch must have occurred during a timeframe (the first 5.39x10^-44 seconds, the Planck time) when the currently known laws of physics do not hold, and where there is no prospect of the Maxwell's Demon-like behaviour being manifested in solid machines or devices (perpetual, or otherwise). On the plus side, this is precisely the timeframe for which a theory that unifies General Relativity and Quantum Mechanics is sought (NS, 24-Sep-2016, p28).
There is a problem, of course: if the presently known laws of physics cannot be used, what can? We do, at least, have one boundary condition as a guide: whatever unknown laws of physics were applicable at the time, they must eventually collapse down to the known laws of physics in our epoch of the universe. Reassuringly, this further implies that the exotic regime was not perpetual at all, since the Maxwell's Demon-like behaviour would be using up some finite resource while it was creating the extra energy. This would serve to confirm the assertion at the start of this web page, provided that the first and second laws of thermodynamics are generalised to include the finite resources of the initial exotic regime.
This table is explored, later, in the section on temperature and mass of the universe.
To fit in with what we observe, all the freshly generated matter/energy would have to have been generated in a highly ordered, low entropy state, with the balance between matter and energy at around 1:10^9, and hence with marginally more matter than antimatter (NS, 23-May-2015, p28; NS, 12-Apr-2008, p26), in the ratio 1000000001:1000000000. Paradoxically, this implies that there were nuclear reactions taking place (those that favour the production of slightly more matter than antimatter) which, in turn, implies a direction of increasing entropy.
There are several other very fine balances embedded in current theories. Several of these balances might collectively be explained by axions (NS, 14-Nov-2015, p36), whose hypothesised existence in the present epoch of the universe might be tested by searching for a particular type of Bose-Einstein Condensate star (NS, 12-Dec-2015, p11). Some astronomical objects, presently classified as black-holes, and some day analysable from their emissions (NS, 06-Oct-2007, p36), might turn out to be BEC stars (NS, 15-Jul-2017, p28). Alternatively, for the marginal imbalance of matter and antimatter, there are many experiments attempting to observe instances of neutrinoless double-beta decay (NS, 13-Feb-2016, p30), and the possibility of CP-violation having been observed in baryon reactions (NS, 11-Feb-2017, p14) with implications as to why time has a forward-flowing bias (NS, 22-Nov-2008, p32). Meanwhile, if particles with negative mass had been possible, they would have represented negative energy, allowing the overall sum for the entire universe to remain at zero. For the force of gravity, though, like charges attract and unlike charges repel. So, this might explain why we find no examples of particles with negative mass: they were all repelled over the de Sitter horizon, to another part of the universe (and likewise us, from their perspective).
The initial, low-entropy starting point for the universe is, at first, a puzzle (NS, 08-Oct-2011, p39; NS, 15-Oct-2005, p30). When the cosmic microwave background (CMB) radiation came into being, the universe was extremely uniform, which represents a low-entropy state since the now dominant force (gravity) works towards clumping matter. There must have been a point at which the universe switched from another force being dominant to gravity being dominant (NS, 12-Nov-2005, p27).
Up to 10^-36 s after the Big Bang, the strong, weak and electromagnetic forces were indistinguishable (NS, 25-Nov-2017, p9), but the strong nuclear force then became distinct. At temperatures still over 100 GeV, which persisted until some 10^-12 s later, the electromagnetic and the weak nuclear interaction were still unified, and the charged fermions massless. At lower temperatures, though, a distinct property of the Higgs field became apparent. Slightly paraphrasing an answer that was found on Quora: normally, a field has the lowest energy when it is free of excitations; but when the Higgs field is in its lowest energy state, some excitations are still present, leading to the so-called Mexican hat shaped curve (NS, 14-Nov-2015, p36). So even when starting from a vacuum with no excitations, it very quickly decays into this lower energy state, with some excitations present, characterised by the vacuum expectation value (with echoes of why there is inevitably something, rather than nothing). The universe is balanced in a state of unstable equilibrium between false vacuum and true vacuum (NS, 29-Oct-2016, p32). As a result, the symmetry is broken, and electromagnetism and the weak interaction become distinct: the particles mediating the weak interaction become very massive (making the interaction very short range, which we perceive as being weak), and charged fermions now interact with the vacuum expectation value, which we perceive as mass.
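The shape of that lowest-energy state can be sketched numerically. The following is an illustrative toy (the potential's form is the standard textbook one, but the parameter values are arbitrary, chosen only to show the shape, not physical Higgs parameters):

```python
import numpy as np

# Illustrative "Mexican hat" potential V(phi) = -mu^2*phi^2 + lambda*phi^4.
# Parameter values are arbitrary, for illustration only.
mu2, lam = 1.0, 0.25   # mu^2 and lambda

def V(phi):
    return -mu2 * phi**2 + lam * phi**4

# Scan phi >= 0 only (the potential is symmetric about phi = 0):
phi = np.linspace(0.0, 3.0, 3001)
phi_min = phi[np.argmin(V(phi))]

# The lowest-energy state is NOT at phi = 0: the analytic minimum sits
# at phi = sqrt(mu^2 / (2*lambda)), the vacuum expectation value.
vev = np.sqrt(mu2 / (2 * lam))
print(phi_min, vev)      # both close to 1.414
print(V(0.0), V(vev))    # V(0) = 0.0 is higher than V(vev) = -1.0
```

The point of the sketch is simply that the excitation-free state phi = 0 is a local maximum, so the field rolls down to a non-zero value even when it starts from "nothing".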
The symmetry-breaking event that turned on the Higgs field could also explain inflation, if the Mexican hat curve has outlying alps (NS, 10-Jun-2017, p30). In any case, though, the Higgs field is another example of the early fine balances of the universe, in the way that particle masses, and in particular that of the Higgs boson, have almost cancelled themselves out, in the ratio 125:10^19 (in GeV), in the so-called hierarchy problem (NS, 19-Jul-2008, p36).
The amount of normal matter seems to be dwarfed by the amount of dark matter, in a ratio of 1:4, and is another example of a fine balance that is needed to explain the orbital motions of galaxies, and their maintenance of a spiral structure for a significant number of cycles.
There is a suggestion that dark matter might be associated with a fifth fundamental force, a dark force, and it is this, rather than the Higgs field, that gives neutrinos their mass, and hence that this mass would be variable with the amount of dark matter in the region (NS, 24-Mar-2018, p16).
Modified Newtonian Dynamics (MOND) was originally proposed as an explanation that avoids needing to propose the existence of dark matter. A mechanism that switches from a Bose-Einstein Condensate state to an ordinary dark-matter state, depending on the strength of the gravity field, would allow a switching between MOND and normal inverse square law behaviour (NS, 02-Apr-2016, p30).
Several groups suggest that the speed of causality, c, is not a constant, but has only settled asymptotically at the value that we observe today. At the intense energy densities of the first split second after the Big Bang, the speed of causality could have been much higher (NS, 26-Nov-2016, p8), with a specific testable prediction about a measure called the spectral index for the CMB, which should be 0.96478.
The split between a quantum and a classical macroscopic universe could be thought of as a 'quantum death' of the universe (NS, 29-Mar-2014, p32), prior to which there was a lack of the speed-of-light limit on information transfer via entanglement.
The switch from exotic regime to normal regime might happen in response to the creation of so much new material and energy (from the proposed Maxwell's Demon mechanism) that the universe bloats out (what we presently attribute to inflation or expansion) and is pushed into a new regime of operation: no longer sub-Planck scale, so no longer able to support faster-than-light transmission or Maxwell's Demon behaviour.
Perhaps, even today, at sub-Planck scales, faster-than-light communications between non-quantum particles still take place (NS, 29-Jun-2002, p30). If only a minority of the particles generated in the Big Bang subsequently became quantum in nature, dark matter might be made up of the remaining particles; being non-quantum, and still capable of superluminal communication, these particles might be expected only weakly to interact with the quantum particles of ordinary matter.
There are still many theories as to what dark energy is (NS, 17-Feb-2007, p28), and of how the fine balance of the cosmological constant almost cancels itself out, but not quite, being lower than that predicted by quantum mechanics by a factor of 1:10^120. Interestingly, this means that this "anti-gravity" is weaker than expected, just as gravity is also weaker than the other three fundamental forces; some have suggested that this might be because gravity is the one that 'leaks' most readily into the extra dimensions of string theory (NS, 14-Mar-2009, p38).
Inflation was originally proposed as a solution to why the universe is now so uniform and flat (NS, 03-Mar-2007, p33; NS, 19-Oct-1993, p30) though it is not universally accepted (NS, 07-Jan-2008, p30). It is believed to have started at 10^-35 s, when the universe had a size of 10^-27 m, and ended at 10^-32 s, when it had a size of 10^3 m. (According to this, vacuum energy can be considered as a sort of latent heat of phase change of some scalar field in a bubble universe.)
Even now, expansion causes energy to be created, and also its own acceleration, as a result of the second law of thermodynamics and the increasing of entropy (NS, 15-Apr-2017, p8) though there are some who question the inevitable increase (NS, 11-Apr-2009, p6). Inflation and expansion occur in regions where gravity is weak; consequently, a positive feedback loop causes expansion to accelerate exponentially as soon as it starts to weaken the gravity fields that were already present. Conversely, regions of space in galaxies, solar systems and planet surfaces experience no expansion.
If expansion had acted more uniformly, and been randomly distributed, we could have contemplated there being a sort of continuous out-going wind of space-time coordinates, as dark energy causes extra ones to be inserted. Atoms, molecules, solar systems, galaxies and local groups would be able to resist this wind, since much stronger forces (electromagnetic, strong nuclear, weak nuclear and gravity) would be present. If a new voxel of space-time happened to pop up in our galaxy (between the orbit of Mars and the Sun, for example, or between an electron and a proton in a hydrogen atom), it would be manifest as an injection of extra energy. The planets are already in a stable configuration, with stable orbits. If any space were to be inserted into any of these, their first reaction would be to collapse back down to their stable configuration. Thus, the expansion of space would be experienced as a minuscule increase in the amount of energy (albeit, so minuscule that it would be an undetectably slight increase in the probability of the emission of an extra photon). Expansion and inflation would have manifested themselves as the injection of new energy into the system, and over a certain spread of distance, it would appear as a force. Of the four forces, gravity is the longest ranging, but still becomes very weak at a distance: so unrelated clusters of galaxies are pushed apart by the expansion. This would further suggest that there would be a point at which G.m1.m2/r^2 balances with the force of the coordinate wind.
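As a rough back-of-envelope sketch of where that balance point would lie (not from the source: the cluster mass is a hypothetical example, and modelling the wind's outward acceleration on a test mass as H^2.r is an assumption):

```python
# Sketch: the distance at which the Newtonian pull of a mass M balances
# an outward "coordinate wind" whose acceleration is modelled as H^2 * r.
# Setting G*M/r^2 = H^2*r gives r = (G*M / H^2)^(1/3).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
H = 2.25e-18         # expansion rate in s^-1 (~69.3 km/s per megaparsec)
M_sun = 1.989e30     # solar mass, kg
Mpc = 3.086e22       # metres per megaparsec

def balance_radius(M):
    return (G * M / H**2) ** (1.0 / 3.0)

# For a hypothetical galaxy cluster of ~1e15 solar masses:
r = balance_radius(1e15 * M_sun)
print(r / Mpc, "Mpc")   # roughly 10 Mpc, a plausible cluster scale
```

Inside that radius, gravity wins and the bound structure resists the wind; outside it, the expansion dominates, consistent with the observation above that only unrelated clusters are pushed apart.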
Consistent with expansion (or inflation) leading to energy creation, Noether's theorem indicates that a lack of symmetry in time implies we cannot assume a conservation of energy. The first law of thermodynamics seemed to cease to be valid, so physicists had to root around for a refinement of the context that could be made to work. The fact that they succeeded is remarkable; it does not usually work this way. Usually, no workable refinement can be found, and the concept is recognised as being dead in the water. It was a great new insight that the cosmic expansion could be considered, in the first law of thermodynamics, as being tantamount to a form of energy. Similarly, half a century earlier, special relativity had done the same thing (energy could suddenly be created from matter, or destroyed by reverting to matter), leading to the great new insight that matter could be considered, in the first law of thermodynamics, as being a form of energy. One way out, then, is to declare that the first law of thermodynamics remains valid, and conclude that expansion must be a type of energy; but, being completely unknown for the present, it acquires the title of dark energy. Similarly, there is a need for laws of quantum thermodynamics, since the statistical mechanics view of the second law of thermodynamics runs into problems in the quantum realm (NS, 07-Apr-2018, p32).
Algorithmic information theory (NS, 11-Nov-2017, p28), and its view of a participatory universe of interactive collaboration, might become a sort of quantum relativity, similar to the way that special relativity describes ruler- and stopwatch-wielding twins who are passing each other at close to the speed of light: both Wigner and his friend can be right, in their respective contexts. This suggests that there is no objective reality out there for us to observe (hence the problems we are encountering with quantum superposition and entanglement). Instead, what we think of as objective reality is just an emergent behaviour, and one that is so much more probable than the alternatives that we all converge on agreeing about it. By analogy, they point out that the molecules in a gas can, in theory, take up any configuration that they like, but in practice tend to converge on a well distributed arrangement that can be summarised by its pressure, volume and temperature. Objective reality could be an emergent property of the mathematics that tends to converge on the simplest laws of physics. Questioning the existence of objective reality as a fundamental property impacts Bell's inequality, as does whether the numbers in his comparison are non-commutative (NS, 01-Nov-2007, p36), or should be handled as octonions and quaternions (NS, 09-Nov-2002, p30).
Omnes (1990) notes that if the universe is to be treated as an information system, we must first establish its basis in logic. Heisenberg's uncertainty principle then drops out from Gödel's incompleteness theorem, or some equivalent (NS, 14-Aug-2010, p34): the logic that leads to a measurement of the momentum of a particle contains no statements to describe its position, so any statements about its position can neither be proved nor disproved. Dirac proposed non-commuting quantities, or q-numbers, not all of which can be simultaneously number-valued (such as the eigenstate for an electron's position, and one for its momentum). As noted earlier, Heisenberg's uncertainty principle follows as a consequence of information theory and the second law of thermodynamics (NS, 13-Oct-2012, p32; NS, 23-Jun-2012, p8).
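Dirac's non-commuting q-numbers can be illustrated with finite matrices (a sketch, not a full quantum-mechanical treatment: the grid size and periodic finite-difference scheme are arbitrary choices):

```python
import numpy as np

# Sketch of non-commuting q-numbers: a position operator X (diagonal)
# and a finite-difference momentum operator P on a periodic grid.
# For ordinary numbers x*p - p*x = 0; for these q-numbers the
# commutator X@P - P@X is not zero.
n = 64
L = 10.0
dx = L / n
x = np.linspace(-L / 2, L / 2 - dx, n)

X = np.diag(x).astype(complex)

# Central-difference momentum, P = -i * d/dx (with hbar = 1), wrapping
# periodically at the ends of the grid:
P = np.zeros((n, n), dtype=complex)
for j in range(n):
    P[j, (j + 1) % n] = -1j / (2 * dx)
    P[j, (j - 1) % n] = +1j / (2 * dx)

commutator = X @ P - P @ X
print(np.linalg.norm(commutator))   # clearly non-zero
```

The non-zero commutator is exactly the property that prevents position and momentum being simultaneously number-valued, which is the matrix-level face of the uncertainty principle described above.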
Similarly, fractals might also be used to explain how the opposing views of quantum mechanics and relativity might both be correct (NS, 28-Mar-2009, p37) and how some Gödel-like questions about the universe (such as, "what if the experiment had measured the momentum of the particle instead of its position") might have no answer because they do not lie on the same fractal coastline of some sort of scale-relativity universe (NS, 10-Mar-2007, p30).
M-theory is an attempt at a unification of many of the flavours of string theory (NS, 19-Apr-2014, p47; NS, 28-Sep-2013, p34), perhaps complete with super-symmetry, known as SUSY (NS, 14-Nov-2009, p36), but with it all notoriously unfalsifiable (NS, 14-Jul-2007, p30). Bars proposes that adding an extra space and an extra time dimension to this, but constrained by the gauge symmetries that give Heisenberg's uncertainty principle, leads to holographic principles that explain the connection between electron orbits round an atom with the expansion of the universe, and between quantum chromodynamics and the lack of evidence of anyons (NS, 13-Oct-2007, p36). There are others who are beginning to wonder whether it is a valid aim to find an all-embracing elegant theory, when perhaps there is none (NS, 03-Mar-2018, p30).
With string theory, the extra dimensions are assumed to be stunted, or curled up, into less than a Planck length. With Braneworld, they are assumed to be fully-fledged dimensions, of which our 3+1 dimensions form just a membrane (NS, 29-Sep-2001, p26).
There are many completely different types of parallel worlds theory, multiverse theory, and many worlds theory (NS, 21-Jan-2017, p28). Most versions would have implications for free will (NS, 27-Sep-2014, p32). Smolin argues, though, that they are just devices for handling our lack of knowledge about the universe (NS, 17-Jan-2015, p24).
Tegmark presents a four-level classification (NS, 26-Nov-2011, p42).
Wiseman proposes a "Many Interacting Worlds" model (NS, 08-Nov-2014, p6), in which the behaviour of the quantum mechanical system is the blurred average behaviour from several universes (of the order of 41) that interact fairly strongly with each other.
Rovelli and Smolin proposed Loop Quantum Gravity (LQG) based on spin-networks (NS, 22-Jan-2005, p33). Subatomic particles could be caused by vibrations, at various modes, in the granules of space-time.
Experimental tests are proposed that might determine whether space-time is quantised, and at what granularity (NS, 07-Mar-2015, p12; NS, 15-Aug-2009, p26). Others are proposed, to look for astronomical evidence of black-holes that collapse to the quantum loop size, and then rebound as a white hole at a characteristic frequency (NS, 02-Jan-2016, p32).
In effect, LQG replaces the notion of a top-down, external, all-encompassing framework of space-time coordinates with a bottom-up, nearest-neighbour, local interface between atomic granules of space-time. This implies that it is working something like a cellular automaton (NS, 21-Jun-2003, p32; NS, 06-Jul-2002, p46), with nearest-neighbour communications, working on simple, local rules whose amassed behaviour (summed over huge assemblages) would approximate to our familiar laws of physics. For example, a photon entering one granule on one side and exiting on another might be described by some operation, newcell=photonpassage(oldcell), and so the beam of light, and our Euclidean geometry, would end up as some averaged value obtained by integrating over lots of instances of:
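A toy caricature of such a local rule might look as follows (the one-dimensional grid, the photonpassage rule and its names are illustrative inventions for this page, not actual loop-quantum-gravity dynamics):

```python
# Toy cellular-automaton caricature of light crossing granules of
# space-time: a 1-D row of cells, each either empty (0) or holding a
# photon (1). The local rule, newcell = photonpassage(oldcell, left),
# simply hands the photon on to the next cell each tick.
def photonpassage(oldcell, left_neighbour):
    # A cell's next state is whatever its left-hand neighbour held:
    return left_neighbour

def tick(cells):
    # Apply the local rule to every cell, with periodic wrap-around:
    return [photonpassage(c, cells[i - 1]) for i, c in enumerate(cells)]

cells = [1, 0, 0, 0, 0, 0, 0, 0]   # one photon at the left edge
for _ in range(3):
    cells = tick(cells)
print(cells)   # the photon has advanced three granules
```

Summed over huge assemblages of such cells, the straight-line beam of light, and our Euclidean geometry, would be the averaged, emergent behaviour.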
There are indications (NS, 11-Mar-2017, p28) that LQG and string theory could be compatible, not least at the two-dimensional boundary of a holographic projection. It could be that the string length is somewhat bigger than the granule size, with string theory being supported on a fine mesh of these granules, thereby explaining why string theory sees space-time coordinates as an external background (Smolin 2001).
Along with Causal Dynamical Triangulation (CDT), in which the granules of space-time are tetrahedrally, nearest-neighbour connected in an (n-1)-dimensional topology (NS, 14-Jun-2014, p34), Quantum Einstein Gravity, Quantum Graphity and Internal Relativity (NS, 03-May-2008, p28) allow a connectivity that can vary from infinite (thereby doing away with the need for Inflation as an explanation), to 3+1 (thereby creating normal space-time), reducing to 2 on small scales (perhaps hinting at a holographic principle).
Perhaps the unification of quantised, natural number algebra with continuous geometry could be a key (NS, 28-Apr-2018, p30). Markopoulou suggests that all of the fundamental particles might consist simply of qubit-like braids of space-time (NS, 12-Aug-2006, p28), which might then explain why the universe appears quantised (NS, 10-Nov-2001, p40), and hence the significance of knot-invariant properties (NS, 18-Oct-2008, p32). Entanglement, too, might be explained by topological properties of the fundamental particles (NS, 08-Jan-2011, p10). There is even a suggestion that the long-sought proof of the Riemann hypothesis, in mathematics, with the zeros of the zeta function for prime numbers all lying on the vertical line 0.5+n.i (NS, 22-Mar-2008, p40), might be found first in the states of a suitably chosen quantum system, such as an atom or molecule (NS, 11-Nov-2000, p32), with a possible connection, too, with the Schanuel conjecture (NS, 21-Jul-2007, p38). Random-matrix theory might be used as a tool (NS, 10-Apr-2010, p28), as might the same sort of deep-learning program that was applied to playing Go (NS, 28-Oct-2017, p36; NS, 18-Feb-2017, p12).
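The claim that the non-trivial zeros lie on the line 0.5+n.i can be checked numerically for the first few zeros, for example with the mpmath library (assumed available; `zetazero(k)` returns the k-th zero):

```python
from mpmath import mp, zetazero

mp.dps = 15   # working precision in decimal places

# The first few non-trivial zeros of the Riemann zeta function are all
# of the form 0.5 + t*i; the Riemann hypothesis asserts this holds for
# ALL of them.
for k in range(1, 4):
    print(zetazero(k))
# 1st: 0.5 + 14.1347...i, 2nd: 0.5 + 21.0220...i, 3rd: 0.5 + 25.0108...i
```

Of course, no finite amount of checking amounts to a proof, which is why the search for a physical system whose energy states mirror these zeros is so tantalising.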
Amplituhedra can be used as a sort of multidimensional version of Feynman diagrams (NS, 29-Jul-2017, p28), the results of calculations in quantum chromodynamics being equal to the multidimensional polyhedron's volume. Not only does the method generate tractable and correct results, but it suggests that 'locality' is an emergent feature. Unfortunately, at present, the tool only works for super-symmetric quantum mechanics.
Both relativity and quantum uncertainty agree that there is no such thing as simultaneous. However, they tend to split into two camps: the block view of eternalism versus presentism (NS, 03-Jun-2017, p44).
One possibility as to why the universe started off in such a low entropy state, for example, is that, to us, looking from inside the universe, the moment of low entropy state simply appears to be the start of the universe, and of local time (even if it wasn't) as a direct consequence of our own definition. Just like the characters in a movie film, we cannot tell if the film is being run backwards in the projector. Moreover, the actors can rehearse, and even do the takes of each of the scenes, in any arbitrary order, stitching the characters' chains of thought together mentally, in their minds, and physically, in the cutting room.
Arguing in the static view can be done, for example, using symplectic integration (NS, 19-Mar-1994), or for-all array-operators (APL, cluster states).
It is suggested that time must be relative, rather than absolute (NS, 08-Oct-2011, p37). Indeed, there is a question as to whether time or space, or both, are derived, or emergent properties of something more fundamental, such as is offered by LQG (NS, 15-Jun-2013, p34), and as to why there are three dimensions for space (NS, 28-Sep-2013, p34).
Any theory that attempts to explain the manner of the arrow of time 'arising', entropy 'growing', and the 'period' 'when' all this did so (NS, 16-Jan-2016, p8) is, on the face of it, using self-contradictory time-dependent terminology. It implies some hypothetical absolute time, outside our universe, that is distinct from, albeit parent to, the local time that we experience within our universe. All such concepts therefore make some direct or indirect reference to dt_local/dt_abs.
Smolin points out (NS, 23-Sep-2006, p30) that, with relativity, by abstracting out time as just being another dimension, it means that all the physical laws become constant, invariant, and outside of time (NS, 20-Apr-2013, p30), which is strange for a universe that has existed for only a finite time (NS, 22-Nov-2008, p32). Moreover, such a view leaves us with no way of explaining why we have the concept of 'now', and indeed of past, present and future (NS, 21-Apr-2018, p28). Rovelli argues for the emergence to have occurred via thermal-time, in which it is not reality that has a time flow, but our approximate, lumped-parameter statistical knowledge of it that has the effect of a time flow (NS, 19-Jan-2008, p26).
It seems that time is brought into existence in some concocted, internal view (NS, 08-Oct-2011, p41).
Entropy puts a direction on the transfer of energy, and Fourier's, Newton's and Stefan's laws of cooling start to put numbers on the throughput (and on how quickly a system out of equilibrium can arrive at a new state of equilibrium). Meanwhile, the velocity-of-light limitation of special relativity puts numbers on the latency of that energy transfer.
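Newton's law of cooling is the simplest of these throughput laws, and can be sketched directly (the temperatures and rate constant below are illustrative values, not from the source):

```python
import math

# Newton's law of cooling, dT/dt = -k * (T - T_env), puts a number on
# how quickly a system out of equilibrium approaches equilibrium:
#   T(t) = T_env + (T0 - T_env) * exp(-k * t)
# T0, T_env and k below are arbitrary illustrative values.
def temperature(t, T0=90.0, T_env=20.0, k=0.1):
    return T_env + (T0 - T_env) * math.exp(-k * t)

print(temperature(0))    # 90.0: the initial temperature
print(temperature(10))   # ~45.8: part-way to equilibrium
print(temperature(100))  # ~20.0: effectively at equilibrium
```

The exponential approach, never quite reaching equilibrium, is the same asymptotic behaviour that reappears in the block-universe discussion of entropy later on this page.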
According to the "electromagnetic arrow of time" view, it is the speed of light (the speed of causality) that is nature's way of making sure that things do not all happen at once (NS, 04-Mar-2017, p28). With a movie film of a child passively sitting on a swinging swing, there is the finite speed at which cause is followed by effect, and at which various waves travel (as, for example, when the visual cues come before the audio ones).
As a result of Laplace's aberration, stars orbiting round the centres of their galaxies, and galaxies orbiting round their local groups, do not have the same view of gravity as spacecraft going up to service the ISS.
Even the Newtonian laws of motion encounter problems when taking causality into account. When a decision is made to throw a ball, or to pull a car away from a set of traffic lights, the whole set of functions (x(t), dx/dt, d^2x/dt^2, d^3x/dt^3, ..., d^nx/dt^n) all have to be continuous, smooth and zero at t=0; a property held by flat functions, as opposed to the more conventional analytic functions. Normally, the universe avoids this by allowing all the changes to be blurred, allowing a continuous ramping up of each of them, with no step changes. Interplanetary transport networks (NS, 25-Mar-2006), but applied to all fields, not just gravity, might be related, inasmuch as they allow situations where all the derivatives of each of the alternative paths can become equal (indeed, zero) at the saddle points.
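The classic example of such a flat function is f(t) = exp(-1/t^2) for t > 0, and 0 otherwise: smooth everywhere, with every derivative vanishing at t = 0, yet still able to ramp up from complete rest. A quick numerical sketch:

```python
import math

# The classic "flat" (smooth but non-analytic) function:
#   f(t) = exp(-1/t^2) for t > 0, and 0 otherwise.
# Every derivative is zero at t = 0, yet f takes off for t > 0 --
# exactly the behaviour needed for a motion that starts from rest.
def f(t):
    return math.exp(-1.0 / t**2) if t > 0 else 0.0

h = 1e-3   # step for a crude forward-difference derivative estimate
print(f(0.0))              # 0.0
print((f(h) - f(0.0)) / h) # ~0: the derivative at t = 0 vanishes
print(f(1.0))              # ~0.368: but the function does take off
```

A Taylor series built at t = 0 would be identically zero, which is why no analytic function can describe a throw that genuinely begins from rest in all derivatives.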
Under the "thermodynamics arrow of time" view, it is the increase in entropy that gives our real sense of the direction for the flow of time.
If the child on the swing spills a box of matches, each match is completely unchanged, and could have its trajectory reversed. Similarly for gas molecules in a poorly confined volume.
The eternalism (block universe) view is tantamount to considering time to be just another spatial dimension, w, and hence as one of four dimensions (w,x,y,z), like a reel of celluloid movie film that has been cut up into its individual frames, with all the frames piled up in sequence. The frames represent just two spatial dimensions, x and z, with the position in the pile, w, used to represent time (with the third spatial dimension, y, implied by the use of perspective within the frame). We can contemplate how the positions and shapes of objects change in this pile of cine frames. We could trace the movement of a sugar cube as it first enters into the scene in a sugar bowl, then is lifted and dropped into a cup of tea. Ultimately, though, the beginnings and ends of any object only make sense if we are tracing the paths of the constituent fundamental particles (which are not only atomic, but also dimensionless). Macro objects, such as sugar cubes, are then just akin to patterns in shoals of fish or murmurations of birds.
There are constraints on the shapes that can exist in the static space of (w,x,y,z). Stable patterns within (x,y,z) tend to extend further in w than unstable ones. When computing the entropy within any closed region in (x,y,z), it will be a function that is monotonically dependent on its extent in w. For instance, molecules of gas confined to one of two adjacent chambers on the x-axis, when the partition is suddenly removed, must occupy greater ranges of x values for increasing values of w. However, it is not a linear relationship, and instead must reach the x values asymptotically. Ultimately, all particles in the universe have asymptotic values back towards the Big Bang, forward to the current value of w, forward to the end of the universe, and back again towards the current value of w. This gives two distinct periods: the past from the Big Bang to the current asymptotic value, and the future from the current asymptotic value to the end of the universe, and hence a concept of 'now' at the interface.
A human would appear as a sort of "worm" in the block universe, with the baby at one end, and the corpse at the other (NS, 02-Nov-2013, p34), whose thoughts would be manifest as positional relationships in the static structure (NS, 22-Nov-2008, p32), in a way analogous to structures within a painting.
Perhaps an extra dimension could be involved, perhaps a stunted one that only has two possible states: 0 or 1 (past or future); or, perhaps this Boolean information could be encoded in the states of the six curled up extra dimensions that string theory hypothesises (thus once again making the model a static one in ten dimensions). The concept of 'now' could be simply the contents of our three spatial dimensions (complete with their measure of entropy) at their point along the time dimension where these extra dimensions undergo the sort of phase change between past and future that Ellis' proposal seems to imply.
Perhaps the past and future are different phases of the universe, and that 'now' is a wave front of phase change, as it traverses the spatial dimensions of the universe, and that we, by definition, live out our experiences entirely on this wave-front. Even in digital electronic computing, the program counter of the conventional processor is centred on the concept of 'now'.
Dowker proposes, using causal set theory, that the flow of time is caused by the expansion of the universe, and a measure of the new qubits that are thereby being added (NS, 04-Oct-2003, p36). This new space is presently being created at the rate of 69.3 km/s for every 3.26 million light years, which equates to 2.25x10^-18 m/s per m, which is 1/800th of the diameter of a proton per second per metre, or 1.39x10^17 Planck lengths per second per metre.
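Those unit conversions can be checked directly (the proton diameter and Planck length used below are standard approximate values):

```python
# Checking the expansion-rate conversions quoted above:
# H0 = 69.3 km/s per 3.26 million light years (i.e. per megaparsec).
ly = 9.461e15                # metres in one light year
Mpc = 3.26e6 * ly            # metres in one megaparsec
H0 = 69.3e3 / Mpc            # expansion rate in (m/s) per m, i.e. s^-1
print(H0)                    # ~2.25e-18 per second

proton_diameter = 1.7e-15    # m, approximate
planck_length = 1.616e-35    # m
print(H0 / proton_diameter)  # ~1/760 of a proton diameter per s per m
print(H0 / planck_length)    # ~1.39e17 Planck lengths per s per m
```

All three figures come out consistent with the values quoted in the paragraph above (the proton-diameter figure depending, of course, on exactly which value one takes for the diameter of a proton).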
Muller (2016) proposes that as space-time expands, it is the new coordinates of time that feel like 'now', and give rise to the feeling of the flow of time. 'Now' is simply the raw face at which new instances of time are being created as space-time expands, rather than some notion of the increase in microstates (caused by the expansion of space-time) for entropy to expand into. There is then just the problem of defining what it means for new instances of time to be added; to be added implies that there is change, but change with respect to what? Moreover, none of this would be applicable if expansion is not happening in our part of space-time.
From our perspective, time appears to be influenced by intense gravitational fields (and ceases to have its familiar properties in their presence), though it is more accurate to say that it is the other way round: affected time is what we perceive as gravity.
In loop quantum gravity, when the universe is compressed, the speed of light takes on an imaginary value, and time becomes a fourth space dimension; this suggests that the emergence of time was caused by the breaking of this symmetry.
In any case, it can be noted that crossing event-horizons is a monotonic process.
Gurzadyan suggests that the arrow of time might be a simple consequence of the curvature of space, and that this would avoid the need to postulate a period of inflation; unfortunately, though, this would only work if the curvature of space-time were negative (NS, 15-Oct-2005, p30). Susskind also suggests that a universe that had to tunnel through the string-theory landscape would also lead to it having a negative curvature (NS, 02-May-2009, p35).
A local negative curvature could indeed be what has resulted from the large voids that have formed (NS, 15-Nov-2008, p32), due to clumping of the galaxies over the most recent 5 billion years (NS, 24-Nov-2007, p34) as a back-reaction to space telling matter how to move, and matter telling space how to curve, and that we live within such a void (NS, 18-Jun-2016, p28; NS, 08-Mar-2008, p32). Importantly, this might explain the results that are currently attributed to dark energy, thereby avoiding the need to propose the existence of such a thing. Indeed, the problem with ascertaining the actual curvature is that it becomes a chicken-and-egg problem with ascertaining the expansion (NS, 01-Aug-2009, p40). There is also a possibility that mass could distort time and space differently (NS, 24-Oct-2009, p8).
Not just the curvature of space-time, but also the acceleration of expansion, and even the maximum speed of causality, should all be possible to derive mathematically, complete with their own non-commutative properties, without reference to light, mass or energy (NS, 01-Nov-2008, p28).
Maldacena's Anti-de-Sitter/conformal field theory (AdS/CFT) correspondence, which permits a hologram-like conversion between a five-dimensional string-theory space with gravity and a simpler four-dimensional space (NS, 12-Oct-2013, p36), also would only work if the curvature of space-time were negative (NS, 30-May-2009, p34).
Bekenstein showed that the surface area of the event-horizon of a black-hole corresponds to an entropy, in line with the second law of thermodynamics. Taking such a view, though, leads to a problem with the disappearance of information when matter enters the black-hole, and its subsequent reappearance as Hawking radiation (NS, 04-Feb-2017, p16; NS, 27-Jul-2013, p10), and hence with it still being available to forensic-science investigators at the event-horizon (NS, 19-Sep-2015, p11). So, the event-horizon would be too big to hold all the information, since the information is disappearing from the view of the outside observer, leading to implications of there being a so-called firewall at the event-horizon (in contravention of relativity, which holds that there should be no detectable landmarks at that part of the curvature of space-time), or of an increased speed of light inside the black-hole (NS, 06-Apr-2013, p38). The many-worlds interpretation could resolve the firewall paradox, since it is in the many worlds that the information content is conserved, not in the individual branches (NS, 06-Jan-2018, p14). Even without this, though, the answer might lie in the entropy of the Hawking radiation (NS, 10-Feb-2018, p15) according to a Generalised Uncertainty Principle (GUP).
Another attempt to resolve the firewall paradox is to propose that the whole universe is a hologram (NS, 17-Jan-2009, p24; NS, 27-Apr-2002, p22), and that reality is somehow lived out, in time, distributed round a two-dimensional surface. Such possibilities as the universe being holographic might be detectable, via the consequent violation of Lorentz symmetry, if space-time turns out to have a preferred 'weave' in specific directions (NS, 16-Aug-2003, p22).
According to the Copenhagen interpretation, the interaction with an observer causes the wave-function to collapse, and that therefore the observer is an integral part of the experiment. This leads to many disconcerting possibilities, not least the existentialist one that reality is not objectively present. Furthermore, the use of the word 'observer' raises the possibility that conscious beings have to be involved (NS, 02-May-2015, p33), which is not only disconcerting, but problematic, since we still do not have a definition of consciousness, even now, after several millennia of investigation (NS, 04-May-1996, p20).
In asking what reality is, either we define it in terms of objective things, stuff that can be sensed and known to be out there, or we can try a reductionist approach of trying to build everything on some lowest level (NS, 29-Sep-2012, p34); but both approaches have their problems. Both approaches end up going round in a circle, with macroscopic objects (not least human brains) at the end: in the former approach, consciousness is the property that causes wave-functions to collapse one way or the other, and not remain in superposition; while in the latter approach, reality is based on a substrate of mathematics (NS, 15-Sep-2007, p38; NS, 21-Nov-1992, p36) and Tegmark's 4th level of multiverse, and the debate as to whether mathematics is invented or discovered (NS, 02-Sep-2017, p30). It seems there is a known, bounded periodic table of all the symmetries that are possible in mathematics (NS, 14-Jun-2008, p38). Within this, Lisi suggests how the sub-atomic particles can be mapped on to a 248-vertex E8 pattern in 8D space (NS, 17-Nov-2007, p8).
Wootters proposes a physical manifestation of what we presently handle in mathematical models of quantum mechanics using the square-root of minus one, to account for the loss of information at wave-function collapse (NS, 25-Jan-2014, p32).
Dark energy might be explained by the creation of information by wave-function collapse (Sudarsky, Josset and Perez). Decreasing information implies increasing entropy. The arrow of time might then be due to the particles of the universe becoming ever more entangled (NS, 04-Feb-2017, p31).
Entanglement in time (NS, 27-Aug-2016, p12) consists of a particle becoming entangled with an earlier version of itself (NS, 27-Mar-2004, p32).
Experiments have shown the so-called quantum pigeon-hole effect (NS, 02-Aug-2014, p8). Experiments have also shown the so-called quantum Cheshire-cat phenomenon: that a particle can follow one path, and its properties (such as spin) can be split off to follow a different path (NS, 26-Jul-2014, p32).
Decoherence is the overwhelming of the wave interferences that had been in place, while wave-function collapse is the increase of one of the probabilities to one and the dropping of all the others to zero. Causality ceases to apply when there is entanglement (NS, 03-Aug-2013, p32), and decoherence occurs when some of the information leaks out from a system that is in superposition. Fields (and hence particles) are in states, and it is states that can be entangled. Decoherence is the gradual blurring, with time, of a quantum system's boundaries, as its imperfect isolation from its environment causes new superposition states to form with the environment's states. Wave-function collapse is the conversion of the several eigenstates of a quantum system in superposition to a single eigenstate, with the triggering event said to constitute an observation, or measurement.
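As a toy illustration of the distinction (the per-interaction damping factor here is an assumption for illustration, not a physical model), consider a 2x2 density matrix for the superposition (|0> + |1>)/sqrt(2): decoherence erodes the off-diagonal interference terms while the diagonal probabilities survive, whereas collapse replaces the diagonal with a single certain outcome.

```python
# Toy 2x2 density matrix for the superposition (|0> + |1>)/sqrt(2):
# rho = [[0.5, 0.5], [0.5, 0.5]]
# Decoherence: each environment interaction damps the off-diagonal
# 'coherence' terms; the diagonal probabilities are untouched.
# Collapse: one diagonal entry jumps to 1, the other to 0.

def decohere(rho, damping=0.8, interactions=50):
    (a, b), (c, d) = rho
    for _ in range(interactions):
        b *= damping        # off-diagonal terms decay ...
        c *= damping
    return [[a, b], [c, d]] # ... but never quite reach zero

rho = [[0.5, 0.5], [0.5, 0.5]]
mixed = decohere(rho)
print(mixed[0][1])                  # ~7e-6: interference effectively gone
print(mixed[0][0] + mixed[1][1])    # 1.0: the probabilities (trace) are preserved

collapsed = [[1.0, 0.0], [0.0, 0.0]]  # a 'measurement' outcome: pure |0>
```

The nearly-but-never-completely-diagonal end state is the point: decoherence suppresses interference without ever selecting an outcome; only collapse does that.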
Gell-Mann and Hartle (1990) talk of "information gathering and utilising systems" (IGUS), with possible implications for free will (p454; NS, 01-May-2014, p34) and for how it is not the observer (or the act of observation) that creates reality (p453). Multiplication by the off-diagonal zeros, implicit too in the And and Or operations of manipulating probabilities (p441), is an irreversible operation (p452). It comes down to how (a+b)^2 ≠ a^2 + b^2 (p428), and how, under superposition, the probabilities might not add up to unity. All of the universe is interconnected, by the off-diagonal elements (p431), and all laws in physics are approximations (p445), but the authors seem to be against the notion of loop quantum gravity (p430). History requires knowledge of both present data and the initial condition of the universe (p439), and results in the notion of time (p437) via its ordering. This leads to the mechanism of Heisenberg's uncertainty principle (p455) and to how the many-worlds view should really be thought of, instead, as many histories. There is a chain of logic leading from wave-function collapse, to decoherence, to resonance, to Dalton chemistry, and to a mechanism of survival of the fittest (p449). Patterns will crystallise out but, as with evolution, because of the randomness in the process of getting there, we cannot predict, in advance, which ones.
Each collision nudges the matrices to be in nearly diagonal form; nearly, but never completely. Paul Davies notes the non-locality of quantum mechanics (with all of the universe having some effect on all the rest). He also notes that an accelerating or rotating body should experience the glow of the quantum vacuum (NS, 03-Nov-2001, p30), perhaps contributing to an explanation for the apparent correspondence between inertial mass, gravitational mass, and the other three fundamental forces (NS, 03-Feb-2001, p22).
Electrons, Up quarks and Down quarks have no internal structure, and can be substituted for one another without any observer noticing. Although protons and neutrons are made of a mixture of Up quarks and Down quarks, with various permutations of red, green and blue possible, confinement means these internal data are transparent to our experiments (NS, 04-Dec-1993, p28). Next, with Bucky balls, it seems that C60 or C70 molecules still have no discernible landmarks (if one carbon atom were to be substituted by another, for example). For strands of DNA, though, it is not so transparent: there are discernible landmarks, and hence an entropy to the structure, and hence a memory. Strands of DNA could be configured to remember which slit of the experiment they had passed through, so should always behave like particles, and never like waves. This must somehow be made manifest to the apparatus of the double-slit experiment, as some non-linearity or lack of symmetry (through some sort of Feynman-diagram integration operation). The difference between sending a Buckminster fullerene molecule or a small virus through a double-slit experiment (NS, 17-Mar-2007, p36; NS, 15-May-2004, p30) is that the latter has more of an odometer, or local clock, with increasing local entropy as its DNA sequence becomes degraded (by methylation, telomere damage, or simple non-error-corrected base-pair swapping). No-one has yet obtained interference fringes by firing small virus DNA molecules through a double-slit experiment (NS, 09-Mar-2002, p26).
Objective collapse occurs on its own, with no observer required.
A black-hole's swallowing of information can be explained, with the collapsing of wave-functions inside the event-horizon needing no observer. Perhaps the ripples of passing gravitational waves affect macroscale-sized objects, but have little effect on subatomic-scale ones (NS, 21-Nov-2009, p12). When a massive particle is in a superposition between two places, it must have implications for the curvature of space at those two locations (NS, 03-Jan-2015, p26). Gravity might be the effect of objective wave-function collapse (NS, 23-Sep-2017, p8). Penrose suggests (NS, 09-Mar-2002, p26) that it is not a coincidence, firstly, that it is when the effects of gravity start to become noticeable that the Standard Model starts to break down as a workable approximation of how the universe works, nor, secondly, that gravity is the one force that resists being unified with the other three. Indeed, it could even be because of gravity that quantum entanglement experiments are so difficult to perform on Earth (NS, 20-Jun-2015, p8). The quantum mechanical uncertainty of the energy of a particular region of space, with a given uncertainty of time over which it persists, would translate, relativistically, into an uncertainty of the curvature of that region of space, and it is perhaps this non-linearity that collapses any wave-functions that are in a state of superposition.
The ability to add a dimension is what allows the AdS/CFT duality to add gravity to the standard model (NS, 11-Feb-2017, p24). Tegmark argues that gravity plays no part in wave-function collapse, since gravity is only an optional interpretation on one side of the AdS/CFT duality, and can be taken out of the equation (albeit not in our universe but in a hypothetical one with negative curvature). It might still resolve the EPR ('spooky action at a distance') paradox. The paradox of quantum monogamy (no more than two particles can be in the same entangled state at the same time) between three particles (one on each side of a black-hole's event-horizon, and the third at the other end of a wormhole) can be resolved by noting that the one at the other end of the wormhole is in the future, or past, of the other two. The particle at the other end of the wormhole is never at the same time as either of the other two, though it might be at the same place as one of them.
The resolution of the search for the 3D Ising model, via a bootstrapping approach, could solve many problems concerned with phase transitions, many-body strongly coupled systems, superconductivity, and quantum mechanisms of the strong nuclear force and the AdS/CFT duality (NS, 18-Feb-2017, p28).
Spontaneous wave-function collapse is an even stronger form of objective collapse (NS, 16-Jul-2016, p30), occurring unprovoked, perhaps with some characteristic half-life. If the spontaneous collapse of the wave-function of one particle then cascades to the collapse of any others nearby, this would explain why macro objects, like cats and orbiting planets, are not in a superposition of states. Perhaps the property that stops particles that possess memory from showing quantum behaviour in the double-slit experiment could be connected. Spontaneous collapse was first proposed by Pearle, Ghirardi, Weber and Rimini in the 1970s, and made compatible with general relativity by Bedingham and Tumulka. Indeed, it could be used to form the bridge between quantum behaviour and relativity (NS, 14-Apr-2018, p34).
There is a distinction between things that have memory (like human brains and biological genomes, but also including objects that merely persist, like rocks and rivers) and those that do not (such as momentum, force, energy and power, which are functions of the present state of the objects concerned, not of their pasts). Memory is key to an object's identity, whether it be a particle passing through a double-slit experiment, a human who thinks (and therefore who is), or the genome of a species. Also, it is the act of erasing a memory bit that incurs the thermodynamic cost.
Back in the 1990s, it was found necessary, for security reasons, to lock all university PC hard-drives to be read-only. Students would carry their work around with them on floppy disks. The floppy disk could also carry other, meta, information, like the user profile. The students were able to work on one PC, advance their work a bit, save it to floppy disk, go off to a couple of lectures, then sit down at a completely different PC and resume their work, with their own personalised profile, as if nothing had happened in between. This demonstrates how memory is the key to identity. The students' computing identity did not reside on the PC (worth several hundreds of pounds) but on the floppy disk that they carried around with them (worth a few pence). Similarly, my identity with the person who ate breakfast this morning is firm because of the memory of eating that breakfast. Conversely, it does not matter if someone claims that I am the reincarnation of some medieval knight; if I cannot remember anything of those past experiences (NS, 22-Apr-2017, p28), it is not much of a reincarnation.
Time-travel would only be possible if eternalism is right, and presentism is wrong (NS, 03-Jun-2017, p44). There are many paradoxes surrounding time-travel, and many attempts at proposing resolutions to them (NS, 28-Mar-1992, p23) assuming time-travel to be possible (NS, 20-Sep-2003, p28) and noting that relativity is somewhat ambivalent on the possibility (NS, 20-May-2006, p34). Hawking proposes a chronological protection conjecture (NS, 08-Oct-2011, p50), and some argue that the second law of thermodynamics could give the answer (time-travel is possible, but must cost the time machine at least as much energy to run as that needed to achieve the entropy change that the time-travel brings about).
Consider two hypothetical time-travellers arguing over whose time machine can go faster than the other, measured in centuries per second. Whose seconds would they be talking about? Intuitively, one would expect those seconds to be some sort of perceived seconds, within the passenger compartment of the respective machines, perhaps as indicated on their respective wrist-watches, or rather, by their various body clocks, and thence by some measure of local entropy increase, numbers of bits being erased, and hence of memories being recorded, and of histories being laid down. A time-traveller, going to the future and back would be gaining knowledge and experience, and clocking up seconds on the odometer, just as a car's kilometerage keeps clocking upwards despite never straying from the daily commute.
There are questions as to what it is that gives the feeling of the passage of time (NS, 04-Feb-2017, p31). Ellis notes that none of the present physical models capture our feeling that time flows (NS, 02-Nov-2013, p34), and Gisin adds that none of them capture our feeling of having free will (NS, 21-May-2016, p32).
The second law of thermodynamics acts as a ratchet: this is the realm of monotonicity and monotonic functions. Things align with df/dw > 0, which, we think, implies dS/dw > 0.
The notion of time flowing does still beg the question, of course: flowing with respect to what? In the case of thought experiments involving astronauts passing their twins at close to the speed of light, or involving a trip close to the event-horizon of a black-hole, the rate of flow is comparative, between two human observers. However, we also use the term about our own, everyday sensation of the passage of time. One argument is that it is our sense of identity, that the person who started reading this sentence is the same as the person who ate breakfast this morning: one thing that we could be measuring the flow of time against is our laying down of a permanent and ever-growing trail of past memories. It is not unlike the illusion that our image in the mirror is flipped left-right, but not up-down. It is further supported by a proposal (NS, 15-Aug-2015, p26) that consciousness is just an illusion that the subconscious brain concocts (NS, 07-Jul-2007, p36), compatibly with Libet's experimental observations, as an aid to the survival of the species (thereby also explaining phenomena such as pantheism, phantom limbs and the post-event rationalisation of otherwise odd behaviour). Perhaps free will just means deterministic behaviour, but dominated by internal interactions within the system (along the lines of Tononi's integrated information theory).
The table that was introduced in the section on the first split second represents the merged data from several articles (NS, 03-Aug-2002, p28; NS, 05-Jul-2008, p28) plus a few other data points (NS, 25-Nov-2017, p9; NS, 03-Mar-2018, p9), but does not take into account a different way of estimating the lifetime of the universe based on the mass of the Higgs boson (NS, 24-Mar-2018, p8). Using a bit of reverse engineering, this seems to indicate that:
which can be rearranged to give values of Tn as a function of tn, or more generally T(t):
where the suffices stand for 'beginning', 'end' and 'now', and:
tb = √(G·ℏ/c^5) = 5.39×10^-44 s
Tb = √(ℏ·c^5/(G·k^2)) = 1.42×10^32 K
Te = ℏ·c^3/(8π·G·k·mU)
te = 1/Te^2
tn = 1/KHubble = 13.8×10^9 years
Tn = 2.7281 K
The mass of the observable universe has been determined at mU = 10^53 kg (NS, 16-Dec-2000, p26). Since its present temperature, Tn, is greater than its Hawking temperature, Te, the universe is out of equilibrium, and will continue to cool. The question addressed in the 03-Aug-2002 New Scientist article is, "at what time will this equilibrium state be reached?" The answer reported seems to have assumed, perhaps only implicitly, mU = 7×10^51 kg. The figures above, though, indicate mU = 1.58×10^53 kg, and hence te = 5.3×10^52 years.
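These figures can be reproduced from the formulas above with a few lines of Python (a sketch; the constants are rounded SI values, and the mass is the value inferred in the text):

```python
import math

# Physical constants (rounded SI values)
G    = 6.674e-11       # gravitational constant
hbar = 1.055e-34       # reduced Planck constant
c    = 2.998e8         # speed of light
k    = 1.381e-23       # Boltzmann constant
YEAR = 3.156e7         # seconds per year

# Planck time and Planck temperature: the 'beginning' entries
t_b = math.sqrt(G * hbar / c**5)
T_b = math.sqrt(hbar * c**5 / (G * k**2))

def hawking_temperature(m):
    """Hawking temperature (K) of a mass m (kg) -- here, the whole universe."""
    return hbar * c**3 / (8 * math.pi * G * k * m)

m_U = 1.58e53                   # mass of the observable universe inferred in the text
T_e = hawking_temperature(m_U)
t_e = 1 / T_e**2                # 'end' time in seconds, via the te = 1/Te^2 relation

print(f"t_b = {t_b:.3g} s")             # ~5.39e-44 s
print(f"T_b = {T_b:.3g} K")             # ~1.42e32 K
print(f"t_e = {t_e / YEAR:.2g} years")  # ~5.3e52 years
```

Running this confirms that the te = 5.3×10^52 years figure follows from mU = 1.58×10^53 kg, and not from the 7×10^51 kg apparently assumed in the 2002 article.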
Following on from a 1979 paper by Freeman Dyson (NS, 03-Aug-2002, p28), it would at first appear that machines in general (and living cells, and thinking brains, in particular) can continue to eke out a slower and slower existence, forever, right into the heat death of the universe, as the remaining temperature variation gets smoothed out. However, the universe is expanding, with potential supplies of energy constantly going over the de Sitter horizon, out of reach of future generations (NS, 20-Oct-2001, p36). The monotonic disappearance of potential resources over the de Sitter horizon is shown to be a manifestation of the second law of thermodynamics (NS, 15-Apr-2017, p8). However, expansion equates to the injection of new matter. Meanwhile, the machine of Freeman Dyson will get warm, and will need to dissipate heat as it works, so the Hawking temperature imposes new constraints on the duty cycle of the machine.
So, the measure is tied up with that of measuring divergence (or convergence) from the given point, or conservation of the given quantity, and with the '∇⋅' operator. It is suggested that the conservation of charge corresponds to the fact that there is no such thing as an absolute 'ground' voltage, so all voltages are relative, and hence constitute a symmetry. Thus Kirchhoff's two laws, and their continuous equivalents, constitute the usual pair of symmetry and conserved quantity. Kirchhoff's current law is a discrete-case example of '∇⋅'. When generalised to a block of metal conductor, as opposed to discrete circuit connections, the voltage law involves the '∮' of the gradient. In geography, this would be tracking the height above sea level as the walker walks a closed path that crosses many contours. The symmetric quantity is the sea level that is being used as the reference point all the way round, the height relative to it being calculated from the gradient at each step. This appears to be approaching a representation in Noether's theorem; in the case of entropy, though, this would leave us with ΔS·Δ(T·t) >= ℏ/2 (where T^2·t of the universe started as a constant, but now is falling). Indeed, Rovelli suggests that time, like temperature and entropy, might be an artefact of our use of bulk parameters to economise on the description of a group of particles (NS, 29-Jan-2008, p28). Certainly, the "(ln(t)-ln(tb))/(ln(te)-ln(tb))" expression, earlier in this section, is suggestive of a type of third law of thermodynamics. Moreover, it is interesting that this brings into close juxtaposition two laws (Charles' Law and Hubble's Law) that have similar extrapolative power.
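The geography analogy can be made concrete: summing the height differences at every step of a closed walk telescopes to exactly zero, which is the discrete analogue of the closed-loop integral of a gradient being zero (and of Kirchhoff's voltage law). A minimal sketch, using an arbitrary made-up height field:

```python
# Walk a closed loop on a height field h(x, y); the sum of the height
# *differences* at each step telescopes to exactly zero -- the discrete
# analogue of the '∮' of a gradient, and of Kirchhoff's voltage law.

def h(x, y):
    # any single-valued height field will do
    return x * x - 3 * x * y + 2 * y * y + 5

# a closed rectangular path on a grid, returning to its start
path = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1), (0, 0)]

total = sum(h(x2, y2) - h(x1, y1)
            for (x1, y1), (x2, y2) in zip(path, path[1:]))
print(total)  # 0 -- exactly, by telescoping
```

The result is zero for any closed path and any single-valued height field, which is what makes 'height above sea level' (like voltage) a well-defined potential in the first place.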
If IU(t) ∝ mU^2·t^2, and if T^2·t was roughly constant in the early universe, this suggests that IU(t) ∝ mU^2/T^4 in the early universe, suggestive perhaps of a connection to Stefan's law.
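Making the intermediate step explicit (under the stated assumption that T^2·t is constant through the epoch in question):

```latex
I_U(t) \propto m_U^2\, t^2,
\qquad
T^2 t \approx \text{const} \;\Rightarrow\; t \propto T^{-2}
\;\Rightarrow\;
I_U \propto m_U^2\, t^2 \propto \frac{m_U^2}{T^4}
```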
The first step when encountering a new phenomenon is to start taking measurements; the second is to look for underlying relationships between the parameters; and the third is to propose physical mechanisms that might generate those relationships. Despite a century of intense study, quantum mechanics appears to be still stuck at the second stage, with phenomena characterised by probability functions, extremely successfully, but without any convincing explanation as to the origin of the values that are so produced (NS, 28-Jul-2012, p28), and with a bewildering zoo of potential interpretations (NS, 22-Jan-2011, p30). The most obvious Achilles heel of quantum mechanics is our lack of explanation for why the Born rule for probability amplitudes is valid (NS, 05-Nov-2016, p8) and the need to use renormalisation to remove the infinities. Wave-functions are either a manifestation of some (ontological) underlying mechanism (possibly right down to the It-from-Bit view), or they are merely human tools or (epistemic) abstractions that guide us to the correct answers (NS, 01-Apr-2017, p41). Recent experiments rule out many classes of the latter view (NS, 07-Feb-2015, p14), and many people believe that there must be a deeper, underlying mechanism that has yet to be discovered (NS, 23-Jun-2007, p30), even though the current proposals are mutually conflicting (NS, 14-Nov-2015, p14). With Quantum Bayesianism (QBism), the wave-function is merely a summary, constructed by the human observer, of all the observer's knowledge of the system, and hence that it is just in the observer's mind and not a property of the quantum particle itself (NS, 10-May-2014, p32). A more fundamental principle, underlying quantum mechanics, might involve abandoning the principles of causality and/or conservation of information (NS, 16-Jun-2018, p28).
Various attempts have been made to explain how the probabilistic behaviour we observe in quantum mechanics could arise from an underlying deterministic behaviour, including a refinement of the pilot-wave idea (NS, 22-Mar-2008, p28) and a new way of viewing the Bohm interpretation (NS, 27-Feb-2016, p8), and how it might even be supported by an experiment on classical waves in an oil bath (NS, 08-Apr-2017, p28).
Bell's theorem tells us that, if the results of quantum mechanics experiments are valid, our problems and objections are either with the 'at a distance' (relativity), the 'action' (objective reality) or the 'spooky' (ghost in the machine, free will, consciousness) implications of entanglement (NS, 26-Feb-2011, p36; NS, 03-Aug-2013, p32). For the third of these, it might not necessarily be the observer's free will that is in question, but an inherent limit on our ability to close all the loopholes in our freedom to perform independent experiments (NS, 18-Jun-2005, p32), though experiments have subsequently been performed that significantly tighten up on these loopholes (NS, 05-Sep-2015, p8), including ruling out the possibility that the measurement of the state of one of the particles could tamper with the mechanism of the random number generator (NS, 11-Feb-2017, p7). Experiments have been run to see if the Bell Inequality is affected by whether conscious human minds are involved in the decision as to which parameter to measure in each entangled pair of particles (NS, 27-May-2017, p7). Others question the validity of Bell's theorem (NS, 03-Nov-2007, p36), and propose deterministic pre-quantum-mechanics theories, for example using hidden variables, or non-commutative parameters.
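The tension can be made numerical via the standard CHSH form of Bell's inequality: quantum mechanics predicts singlet-state correlations E(a,b) = -cos(a-b), which at the canonical angle choices combine to 2√2, above the bound of 2 that any local-realist model must respect. A quick check:

```python
import math

def E(a, b):
    # quantum-mechanical correlation for spin measurements at
    # analyser angles a and b on a singlet (entangled) pair
    return -math.cos(a - b)

# the canonical CHSH angle choices (radians)
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
print(S)  # 2*sqrt(2) ~ 2.828, exceeding the local-realist bound of 2
```

Any theory keeping all three of locality, objective reality and experimental freedom predicts S <= 2, so the measured value near 2√2 forces at least one of them to be given up.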
The temporal version of the Bell inequality confirms that, given a particle's initial state and final state, it is not clear that it had any definite history of intermediate states (NS, 04-Dec-1993, p14), as borne out in Feynman's diagrams and Gell-Mann's many histories.
Maybe the non-locality aspect of entanglement can be resolved by retro-causality (NS, 17-Feb-2018, p28) since even if a future choice affects an unknowable measurement now (such as a particle's position or momentum) it cannot be used to communicate from the future to the past.
Experiments consistently confirm that a particle cannot be pin-pointed, at sub-atomic scales, in 6-dimensional phase-space (x,y,z,ẋ,ẏ,ż) because the particle does not have a specific location in that space, with any attempt to locate it, resulting in its position being a blurred one. Unfortunately, all our current methods for solving the equations of motion, from Newton, Lagrange, Hamilton and Jacobi, have, at their roots, the concept of this phase-space, so our ability to use these tools to work out the movements of particles at the sub-atomic scale is somewhat hampered. (In effect, any attempt at using dead-reckoning to predict a particle's future or intermediate position is doomed to failure.)
Using phase-space as our tool of preference involves imagining everything in terms of momentum-energy, instead of space-time, with curvature in that space leading to the concept of relative locality (NS, 06-Aug-2011, p34).
Noether's theorem agrees that the two parameters in Heisenberg's Uncertainty Principle are connected (the conservation law of one follows as a consequence of the symmetry exhibited by the other), and hence that the two parameters are just two sides of the same coin, described by one shared set of information, not two. Heisenberg, Noether, Fourier and Bell all point in the same direction: it is not just that we do not have access to all the information, but that the information simply does not exist in the first place (of a particle's position and of its momentum, for example), and our observations are doomed forever to being probabilistic (NS, 14-Mar-2015, p28). Quantum nature is perhaps just a manifestation of economy of information (NS, 15-Nov-2014, p28); if a given effect follows from a given cause 80% of the time, it would be inefficient for the system to encode the given cause with 100% coverage.
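The Fourier half of this claim can be checked numerically: for a Gaussian pulse, the product of its time-spread and its frequency-spread sits at the lower bound of 1/2, however much the pulse is squeezed in time. A pure-Python sketch (a naive O(N^2) DFT, so N is kept deliberately small):

```python
import cmath, math

def dft(x):
    """Naive discrete Fourier transform (O(N^2), pure Python)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def rms_spread(values, coords):
    """Root-mean-square width of |values|^2 about its centroid."""
    p = [abs(v) ** 2 for v in values]
    tot = sum(p)
    mean = sum(c * w for c, w in zip(coords, p)) / tot
    return math.sqrt(sum((c - mean) ** 2 * w for c, w in zip(coords, p)) / tot)

N, T = 256, 30.0                      # samples and time-window length
dt = T / N
t = [(n - N // 2) * dt for n in range(N)]

def uncertainty_product(s):
    """sigma_t * sigma_omega for a Gaussian pulse of width s."""
    g = [math.exp(-ti ** 2 / (2 * s * s)) for ti in t]
    G = dft(g)
    # angular frequencies, with the negative half mapped below zero
    w = [2 * math.pi * ((k if k < N // 2 else k - N) / (N * dt)) for k in range(N)]
    return rms_spread(g, t) * rms_spread(G, w)

print(uncertainty_product(1.0))   # ~0.5: the Fourier lower bound
print(uncertainty_product(0.5))   # still ~0.5: squeeze time, frequency spreads
```

Narrowing the pulse in time broadens its spectrum in exact compensation; there is one coin, viewed from two sides, not two independent quantities.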
Perhaps this shared information of "two sides of the same coin" arises simply whenever we try to find partial information about an entangled system, and that quantum weirdness simply emerges from more logical central principles (NS, 11-Apr-2015, p34).
Perhaps the mathematical model of the low entropy start of space-time can be transformed, for example using Noether's theorem, to a symmetrical model in some other mathematical space (Derek Potter on Quora).
Wheeler's "It from Bit" (Zurek 1990) proposes that all the matter of the universe is made of information. Analysis of Maxwell's Demon devices suggests that E=S.k.T, where S is measured in nats, or E=S.k.T.ln(2), where S is measured in bits. Indeed, it is possible to consider Moore's law continuing up to the Planck limit (NS, 02-Sep-2000, p26).
If all the matter of the universe is made of information (NS, 17-Feb-2001, p26), the universe itself can be considered to be a vast quantum computer (just as people in the previous industrial revolutions have considered the universe to be like a giant system of wheels, a giant heat engine, or a giant conventional computer). Conversely, advances in quantum computing will usefully feed back into our formulations for summaries of how the universe works.
Some suggest that quantum computing could beat the Turing Halting Problem (NS, 06-Apr-2002, p24; NS, 19-Jul-2014, p34), though for the present, the travelling salesman problem is, of course, known to be computable. Chaitin indicates that there is a hierarchy of Omega numbers that would remain forever non-computable (NS, 10-Mar-2001, p28), though this is later questioned by a demonstration of the computation of the first 64 bits of Omega (NS, 06-Apr-2002, p27). It appears that the polynomial hierarchy (PH) can be beaten by bounded-error quantum polynomial (BQP) algorithms (NS, 09-Jun-2018, p7). Many feel that this will eventually lead to a more general form of the Turing Halting Problem (the uncertainty in the result information, times the uncertainty of the execution time, times the temperature, being greater than ℏ/2). Whether any new restrictions count as different, or as a mere rewording of the original halting problem, might just be a matter of taste.
There is even a proposal to consider a Quantum Gravity Computer (NS, 31-Mar-2007, p30). In such a machine, output does not necessarily need to follow input. "GR says that the causal structure can vary (since there is no such thing as 'simultaneous'), and QM says that anything that can vary can be in a superposition."
Intriguingly, quantum mechanics is positioned at the fine boundary of self-organised criticality, between classical physical behaviour and weird interconnectedness (NS, 26-Feb-2011, p36). This boundary appears to be a pre-condition of interesting (non-intuitive) chaotic behaviour. The coexistence of symmetry and chaos (NS, 09-Jan-1993, p32) is well studied.
Conway's Game of Life is mathematically interesting because it is poised on the thin boundary between boring crystalline stasis, and unruly gaseous randomness, in an interesting region of chaotic behaviour. If we were to relax or tighten any one of the game's simple rules, the game would revert to boring uninteresting behaviour. All the interesting things in this universe, such as consciousness, society's dynamics (NS, 06-Jul-2013, p26), evolutionary life, physical properties of water, seem to be those that teeter on this chaotic behaviour boundary between cold stasis and hot randomness. The second law of thermodynamics, and the speed of light limitation, are just two such rules in the universe. Relax or tighten any of these, and the universe would not work any more, in the way that it currently does; and atoms, stars, planets, and sentient beings would cease to be possible.
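The rules in question are compact enough to state in a few lines: a dead cell with exactly 3 live neighbours is born, and a live cell with 2 or 3 live neighbours survives (the so-called B3/S23 rule). A minimal sketch, using the classic 'blinker' to show the non-trivial dynamics; tampering with either threshold collapses the game into the stasis or randomness described above:

```python
from itertools import product

def step(live):
    """One generation of Conway's Game of Life (B3/S23) on a set of live cells."""
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                counts[cell] = counts.get(cell, 0) + 1
    # birth on exactly 3 neighbours; survival on 2 or 3
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

blinker = {(0, 0), (1, 0), (2, 0)}      # a horizontal bar of three cells
print(step(blinker))                     # flips to a vertical bar
print(step(step(blinker)) == blinker)    # True: a period-2 oscillator
```

That such simple rules support oscillators, gliders and (famously) universal computation, while any slackening of them does not, is exactly the edge-of-chaos point being made.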
Gell-Mann and Hartle (1990) describe a mechanism for decoherence, involving matrices that become successively dominated, at each particle-particle interaction, by the terms in the leading diagonals. Thus, it is the repeated particle-particle interactions that lead to the decoherence. Observations and measurements are independent of this, but (as it happens) first require there to have been plenty of particle-particle interactions. Buffeted by the solar wind, Mars is at one specific position in its orbit round the sun, not smeared out probabilistically in superposition all the way round; and even specks in the depths of space are being bombarded by photons of the 3K cosmic microwave background. To have the values of each particle's matrix nudged to new values, at each collision, sounds reminiscent of how the Logistic Equation works (x(n+1) = r·x(n)·(1−x(n))). Even this simple deterministic equation can go chaotic and, to all intents and purposes, unpredictable (at r = 3.7, for example), with no implications of spookiness, and with an associated temperature comparable to that of black-body radiation (NS, 02-Feb-1992, p29).
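The Logistic Equation's behaviour is easy to reproduce: at r = 3.7, two orbits whose starting points differ by one part in a billion soon have completely different histories, while at a non-chaotic setting (r = 2.8 is chosen here purely for contrast) the same perturbation dies away:

```python
def logistic_orbit(r, x0, n):
    """Iterate x -> r*x*(1-x) n times, returning the whole orbit."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# chaotic regime: initial conditions differing by one part in a billion
a = logistic_orbit(3.7, 0.400000000, 200)
b = logistic_orbit(3.7, 0.400000001, 200)
print(max(abs(x - y) for x, y in zip(a[50:], b[50:])))  # order 0.1-1: histories diverge

# non-chaotic regime: the map settles to a fixed point, the difference dies away
c = logistic_orbit(2.8, 0.400000000, 200)
d = logistic_orbit(2.8, 0.400000001, 200)
print(abs(c[-1] - d[-1]))                               # ~0: deterministic AND predictable
```

Both runs are perfectly deterministic; the difference is that in the chaotic regime any finite-precision knowledge of the starting point is rapidly worthless, which is the 'unpredictable with no implications of spookiness' point above.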
The scientific method aspires to complete objectivity, but still suffers from two sources of non-objectivity. Firstly, what gets investigated is biased towards what gets funded, which is strongly influenced by fashion, public opinion, and politics. Secondly, there is a fundamental systematic error in the process: every question that we have ever asked of the universe is one that has been framed by a human mind. Relativity and quantum mechanics are evidence that we are able to stretch that thinking away from what is intuitively comfortable to us, but the process becomes very long and drawn out. The method certainly eliminates theories that are inconsistent with the workings of the universe, but it still might be that, because we design our experiments to investigate the properties of photons, we appear to get answers that look like the properties of photons (NS, 24-Jul-2004, p30). The adage, "if a hammer is the only tool you have in your tool-box, every problem looks like a nail," could cause us to wonder if we should be finding some new tools. Perhaps, at the very least, the quantum mechanics experiments are telling us that something is wrong with our understanding, like an experimental version of Reductio ad Absurdum in mathematics. It might be that one (or more) of our axioms is wrong (such as the existence of quarks and electrons, or forces like gravity). Toffoli (1990) makes a case for why a lot of what we observe in our physics experiments is an artefact of those experiments, or rather of our model of what the universe is, rather than intrinsic to the universe itself; he shows that special relativity and general relativity might even be such artefacts.
The experimental results concerning Bell's Inequality are telling us that something is wrong with our understanding, like a Reductio ad Absurdum proof in mathematics. One hundred and thirty (or so) years ago, chemists were experimenting with electricity, and gas discharge tubes. Their results seemed to be understandable if the gases consist of molecules and atoms, and if atoms consist of electrons and nucleons. So, that's where the particle notion got introduced. What, though, if there is another explanation that consistently explains all those results (I have no idea what, by the way) and Bell's inequality is telling us to drop the particle assumption?
Rather than our brains analysing incoming signals, finding patterns of ever-increasing complexity, and making sense of them by matching them against the internal representations, it is the other way round (NS, 09-Apr-2016, p42; and also p20): our brains generate the sensory data to match the incoming signals, using internal models of the world (and body), thereby giving rise to multiple hypotheses, with the most probable one becoming tagged, Dennett-like, as being our perception, using a type of Bayesian analysis (NS, 31-May-2008, p30). This could be the mechanism whereby hallucinations are signs of a correctly working brain in the absence of sufficient sensory input (NS, 05-Nov-2016, p28); it also has parallels to the scientific method (coming up with models to explain the observed data, actively setting out to observe new data, and keeping the model no more complicated than necessary) being not so much an invention of past philosophers, as just the way that the human mind has been working, naturally, anyway. Each hypothesis can then be refined in the light of the error signals that are generated. The mere repeated occurrence of this process might also give the brain a continuous assurance of its identity (NS, 03-Sep-2016, p33) and feed into other, what were previously considered metaphysical, questions (NS, 03-Sep-2016, p29). Consciousness might be just a shortcut that has evolved for handling data compression (NS, 25-Nov-2017, p44).
The model of thinking, including deductive logic, that we have maintained since the ancient Greeks, might need to be superseded (NS, 27-Feb-2016, p34). Scientific method might be too stringent, and might need to entertain theories that will always be beyond experimental testing (NS, 27-Feb-2016, p38). Applying Bayesian statistics to Popper's criterion leads to the idea that scientists actually spend their time building up the weight of confirming evidence, rather than looking for a single example of contradictory evidence (NS, 10-May-2008, p44).
Boltzmann brains (NS, 18-Aug-2007, p26; NS, 28-Apr-2007, p33) are self-aware entities in the form of disembodied spikes in space-time (more common in regions of high entropy than low entropy). Sean Carroll presents a counter-argument against their possibility (NS, 18-Feb-2017, p9).
Davies suggests that the laws of the universe are evolving (NS, 30-Jun-2007, p30), and hence that the answer to a question like, 'Why these laws and not any others?' is like asking, 'Why these species and not any others?' (NS, 23-Sep-2006, p30). Hartle considers this the other way round (NS, 01-May-2004, p34), that we and the other species on the planet have evolved to treat 'now' as a special concept because the universe works that way (for example, a force is a function of the present state, with no memory of past or future states). Similarly, free will is also rooted in the present (the past has no flexibility when it comes to shaping the future). However, memory is always involved: as soon as a system consists of more than one component (such as atoms in a crystal, or stars in a galaxy) any probing (such as by a hot body, for example) of the state of the whole system will obtain an almost immediate result from the nearest component, but a time-delayed result from the components further away. The speed-of-light limitation builds delay-line memory into the system, and capacitive laws into the behaviour of the overall system.
The speed of information can be faster than the speed of light in the given medium (faster than the group velocity), albeit not greater than the speed of light in a vacuum (NS, 18-Oct-2003, p42).
Deacon (2012) seems to indicate that emergentist analysis is to reductionist analysis as bottom-up design is to top-down design. In each case, analysis or design, the two approaches are complementary, and there is a place for both, since there are things that one does that the other does not.
Work is the quantity of energy transferred from one sub-system to another without an accompanying change of entropy (so must, therefore, be accompanied by energy transfer that does change the entropy). Work is just a form of energy in the equations of thermodynamics, at one level, but takes centre stage in those equations at the emergent level above (where the phrase "takes centre stage" is some sort of reference to information and relevance). Information is required to turn energy into work (NS, 23-Jun-2012, p8). It is constraints that are required to turn energy into work, where a piece of apparatus, such as a cylinder and piston arrangement, can be viewed as a catalyst (NS, 24-May-2014, p30), and constraints are measured by Shannon entropy (NS, 12-Aug-2017, p40). Ellis similarly argues for top-down causation (NS, 17-Aug-2013, p28), and Malafouris argues the case for the human cognitive prosthesis (NS, 07-Sep-2013, p28).
There is a chaotic balance between repulsion and attraction, frustration and funnelling (NS, 09-Jun-2001, p32), and constraint and energy gradient. Deacon has a name (morphodynamic) for systems that build up structure to flow away the available heat difference as fast as possible, like convection currents, Bénard cells, and braid plains in the sand (NS, 02-Sep-2000, p97); and another (teleodynamic) for the next level of complexity up from that, of systems that try to conserve the reservoir of energy difference for their own perpetuation (like living cells).
The Big Bang can be viewed as being a phase change, from random behaviour to a more clustered state (NS, 17-Mar-2018, p30). Consciousness and other emergent behaviours can each be treated as a region of phase change (NS, 12-Apr-2014, p29; NS, 26-Nov-2011, p34), which appears related to how the holes in network topology make us smart (NS, 25-Mar-2017, p28), and to the use of Algebraic Topology to investigate the 7-dimensional (or higher) sandcastles that build up and collapse down (NS, 30-Sep-2017, p28) in self-organised critical neural systems (NS, 27-Jun-2009, p34). Emergent behaviour is partly the use of abstraction (for example when passing from physics to chemistry to biology to psychology to sociology) to allow us to handle the dimension of increasing complexity (NS, 14-Feb-2009, p36), but it genuinely describes systems with different laws, so that the reverse process, of reductionism, is impossible (NS, 10-May-2008, p52).
In the example of the piston moving in a cylinder of an internal combustion engine, all the text books take it for granted that the piston only moves in one dimension (positively or negatively), and then proceed to do their calculations on P.V=n.R.T, and W=½.m.v², et al, even to the point of simplifying the algebra to dealing with scalars. Whereas the significant thing about a piston moving in a cylinder is that the exploding gas molecules really do want to diverge in all directions in three-dimensional space, and the really inventive thing about the design is that most of them are prevented from doing so. This leads on to the fuzzy borderline between droplets in a volume of gas, or bubbles in a volume of liquid; or a half-land/half-ocean planet that can either be considered as having an island or a lake; or whether the pattern of the prime numbers is foreground or background; and also to network theory, where, in richly connected networks (including the internet, 6-degrees of separation in human society, and neural nets in the human brain), the gaps in the network are just as significant as the connections. This, in turn, leads to the chaos-theory borderline between stasis and randomness, created by strategically placed gaps as well as strategically placed material, along which interesting behaviour emerges: the knife-edge that is trodden by Mandelbrot sets, Conway's Life, consciousness, and biological life.
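For reference, the one-dimensional textbook calculation that the passage alludes to is easily sketched; the figures used here (1 mol of ideal gas at 300 K, doubling its volume isothermally) are purely illustrative:

```python
import math

# Isothermal expansion work of an ideal gas, W = n.R.T.ln(V2/V1),
# i.e. the integral of P dV with P = n.R.T/V held at constant T.
# The numbers (1 mol, 300 K, doubling volume) are illustrative.
R = 8.314            # gas constant, J/(mol.K)
n, T = 1.0, 300.0    # amount of gas and temperature
V1, V2 = 1.0e-3, 2.0e-3  # initial and final volumes, m^3

W = n * R * T * math.log(V2 / V1)
print(f"work extracted: {W:.0f} J")  # about 1729 J
```

The scalar result hides exactly what the passage highlights: the molecules were pushing in all three dimensions, and the cylinder walls discarded all but one of them.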
Examples of hidden symmetry (NS, 03-May-2014, p36) include a pencil that had been balancing on its point, that could have fallen in any direction, but in fact fell in this particular direction; or a pack of cards that could have ended up shuffled in any order, but whose symmetry is even broken by onlookers who single out a particular sequence as being remarkable (the same role that is played by survival-of-the-fittest when the pack of cards is a randomly evolving genome). A message only contains information if an alternative (counterfactual) message had been possible (NS, 24-May-2014, p30); and the process of wave-function collapse (wave-function collapse of a particle that was in a superposition of states) is another example.
This is borne out by the different types of engineering being based on their respective forms of energy (within the context of the first law of thermodynamics) that can be harnessed for energy transfer or for information transfer. According to Davies, using the Hawking-Bekenstein formula for the entropy of a black hole, the information content of the universe, in nats, is given by IU=G.(MU)^2/(ℏ.c). Further, since this would be expected to have increased to its present value, starting at unity at one planck time after the big bang, this would lead to IU(t)=(t/tpl)^2.G.(MU)^2/(ℏ.c). Since the universe is now 10^60 planck times old, this gives an information content of about 2×10^121 nats, consistent with it having a mass of about 10^53 kg (NS, 16-Dec-2000, p26; NS, 19-Oct-1996, p30).
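As a sanity check on the figures quoted above (a sketch taking the standard values of G, ℏ and c, and the quoted mass of 10^53 kg):

```python
# Hawking-Bekenstein-style estimate quoted in the text:
# I_U = G * M_U^2 / (hbar * c), in nats.
G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34   # reduced Planck constant, J.s
c    = 2.998e8     # speed of light, m/s
M_U  = 1e53        # mass of the universe quoted in the text, kg

I_U = G * M_U**2 / (hbar * c)
print(f"I_U = {I_U:.1e} nats")  # about 2e121, as the text states
```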
Perhaps we need an extra law of thermodynamics (NS, 29-Oct-2005, p51). The new law would need to indicate the rate at which structure is built up (the rate at which sand grains are moved around by river flows, and the rate at which complexity builds up in DNA-based genomes, perhaps as considered by Lloyd 1990) in the presence of a given amount of energy flow under the second law of thermodynamics (like a sort of Strouhal number for energy flow).
The 4th law is really concerned with the values of the three constants of proportionality, in Fourier's, Newton's and Stefan's laws of cooling. From experience, such as from microprocessor chips on PC motherboards, we know that the constant for radiation, despite the very aggressive dependence on the difference of the fourth powers of the temperatures, is extremely small, and dwarfed by any conduction routes that are available to the heat, and that the effect of convection is bigger still. For the first two, the constant of proportionality, C, is a function of the ambient temperature, C(T). The indication is that the constant of proportionality is also punctuatedly, but monotonically, dependent on time, C(T,t), where C(T,t2)≥C(T,t1) for t2>t1.
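A rough comparison of the three channels, for a small chip-like hot surface, illustrates the point; the surface area, convection coefficient, and heatsink conductance below are assumed for illustration, not measured:

```python
# Compare the three heat-transfer channels for a small hot surface.
# Only the Stefan-Boltzmann constant is a physical constant here;
# the area, h, and k.A/L values are illustrative assumptions.
sigma = 5.67e-8              # Stefan-Boltzmann constant, W.m^-2.K^-4
A     = 1e-3                 # surface area, m^2 (assumed)
T_hot, T_amb = 350.0, 300.0  # surface and ambient temperatures, K

q_rad  = sigma * A * (T_hot**4 - T_amb**4)  # Stefan: radiation
q_conv = 10.0 * A * (T_hot - T_amb)         # Newton: h ~ 10 W/m^2K (assumed)
q_cond = 5.0 * (T_hot - T_amb)              # Fourier: k.A/L ~ 5 W/K via heatsink (assumed)

print(q_rad, q_conv, q_cond)  # radiation is by far the smallest term
```

Despite the fourth-power law, the radiative term comes out at a fraction of a watt, while a modest conduction route carries hundreds of watts.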
The simplest single-celled microbes, their pre-cellular precursors, and perhaps even non-biological matter, would find themselves being structured to make use of this source of energy (NS, 18-Mar-2017, p11). The universe is far from its equilibrium state, thereby allowing structures to form that aid the flow of energy towards the universe's equilibrium state. In its rush to use up the available energy as quickly as possible, temporary, local structures form (NS, 21-Jan-2012, p35). Examples of these spontaneously forming structures include convection currents, planetary weather systems (NS, 06-Oct-2001, p38), auto-catalytic BZ reactions (NS, 21-Jan-2012, p32), protein-based life (NS, 09-Jun-2001, p32), and convergent evolution (NS, 21-Jan-2012, p35). Viewed in this way, humans are just the latest inadvertently evolved structures that help the universe use up its surplus energy supplies (NS, 05-Oct-2002, p30), which is a role that human beings seem to be taking on with great enthusiasm.
Zurek describes how S=H+K, where H is the Shannon entropy of the given sequence (statistical uncertainty), and K is the algorithmic complexity (algorithmic randomness), with a gradual decrease in the former balanced by a corresponding increase in the latter (but only made possible in a far-from-equilibrium universe, such as ours). Kondepudi proposes that natural selection, and the processes of evolution, act on K, to keep it as low as possible, but that it is only H that allows machines to do work, and hence that, for Maxwell's Demon, E=H.k.T.ln(2). Bennett showed that it is the act of erasing a memory bit that incurs the cost. Blank memory is like a cold sink at absolute zero, with the lowest Shannon entropy, H, even though writing useful memory subsequently increases the algorithmic information content, and reduces the algorithmic entropy, K.
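The E=H.k.T.ln(2) relation above reduces, for a single bit (H=1), to the familiar Landauer/Bennett erasure cost; a quick sketch at an illustrative room temperature:

```python
import math

# Landauer/Bennett cost of erasing one bit of memory:
# E = k_B * T * ln(2), evaluated at room temperature (illustrative).
k_B = 1.381e-23  # Boltzmann constant, J/K
T   = 300.0      # temperature, K

E_bit = k_B * T * math.log(2)
print(f"{E_bit:.2e} J per bit erased")  # about 2.9e-21 J
```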
Sagawa and Ueda took this further, and proposed (borne out by experimental results) that an extra term needs to be added for mutual information, to account for the way that the act of measurement leads to a correlation between the system and the apparatus or its memory (NS, 14-May-2016, p28).
Tsallis proposes a formula for computing the entropy of an out-of-equilibrium system that happens to give the correct power-law answers (NS, 27-Aug-2005, p34). In this, probabilities are expressed as p^q, which generates Boltzmann statistics for systems that are close to equilibrium, with q close to 1, but also seems to work, at higher values of q, for systems that have external energy sources and that are far from equilibrium.
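A sketch of the standard form of the Tsallis entropy, S_q = (1 − Σp^q)/(q − 1), on an arbitrary illustrative distribution, showing that it reduces to the Boltzmann-Gibbs/Shannon value as q approaches 1:

```python
import math

# Tsallis entropy S_q = (1 - sum p_i^q) / (q - 1).
# As q -> 1 it reduces to the Shannon / Boltzmann-Gibbs entropy.
def tsallis(p, q):
    return (1.0 - sum(x**q for x in p)) / (q - 1.0)

def shannon(p):
    return -sum(x * math.log(x) for x in p if x > 0)

p = [0.5, 0.25, 0.125, 0.125]   # an illustrative distribution
print(tsallis(p, 1.001))  # close to the Shannon value below
print(shannon(p))         # about 1.213 nats
print(tsallis(p, 2.0))    # a genuinely different value for q = 2
```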
The proposed fourth law, for the open, out-of-equilibrium system, along with its emergent behaviour, might take the form of a 'principle of increasing complexity' that explains how quickly life evolved, and consciousness too, over and above Maynard Smith's observation (NS, 05-Feb-1994, p37) as to why organisms tend to become more complex, all as agents to use up the available energy faster (NS, 05-Oct-2002, p30). In any haphazard system, unstable structures will quickly die away (by definition), leaving the more stable ones to persist (also by definition), established like the nodes in a standing-wave, giving the impression of a 'non-random eliminator' (NS, 07-Jan-2006, p36). One strong candidate would be the Principle of Maximum Entropy Production (NS, 06-Oct-2001, p38), not too unlike the electronic engineering concept of 'matching', that allows maximum power transfer (energy flow) from an input to an output.
Human activities can be viewed as convection currents in the biosphere, spontaneously setting themselves up to release energy that had been blocked or locked away in fossil reserves, to smooth the flow of ever greater amounts of energy from the hot end (the sun in our case, beating down on our planet's surface) to the cold end (ultimately the CMB). Similarly, the biosphere is a convection current in the inorganic atmosphere, which, in turn, has its own meteorology. It appears that all these various levels of convection currents take on a fractal structure. It is also possible to identify intermediate, finer-grain levels: within human activity, there are the moments that the Agricultural, Industrial and Information Revolutions were sparked; and within the biosphere, there are successive nested levels caused by the evolution of amphibians from fish, reptiles from amphibians, and, specifically, the emergence of eukaryotes in the first place. Compare this with body-part complexity (NS, 05-Feb-1994, p37) and with Deacon's morphodynamics (for energy channelling) versus teleodynamics (for energy sequestering).
Crutchfield and Young propose a way to measure complexity. Chaisson suggests using energy flow density, measured in ergs per gram per second (NS, 21-Jan-2012, p35).
Many argue that life in general, and RNA life in particular, are extremely easy to get started (NS, 20-Aug-2016). Revised versions of Drake's equation exist (NS, 25-May-2013, p6), complete with revised definitions of the habitable zone (NS, 08-Jun-2013, p40). However, though life got started extremely early on the newly-formed planet, it got stuck at the single-cell organism stage for 3 billion years, and the jump to multi-cell life seems to be the main hurdle: the re-emergence of multicellular life is extremely unlikely (NS, 23-Jun-2012, p32), precisely on the grounds of the laws of thermodynamics, and of the need for a eukaryote-like serendipitous discovery of how to farm mitochondrial energy packs (NS, 23-Jun-2012, p32). Others, though, suggest that the step from single-cell to multi-cell organisms was not such an unlikely event after all (NS, 27-Jun-2015, p11), with a similar internalisation of bacteria leading to chloroplasts, and perhaps even to cilia and flagella (NS, 10-Mar-2018, p54).
For over a century, the academic world has concentrated on the role of the specialist: experts who know more and more about a narrower and narrower field. This is inevitable; the capacity of each human mind is finite, so the depth of the knowledge can only be increased if the width is restricted. However, the world still needs generalists, whose knowledge is not so deep, but integrated over a far wider span of subjects (NS, 30-Mar-2002, p54). Such people have, at their disposal, a very powerful method of problem-solving, and are able to find solutions to problems by analogy and cross-pollination across otherwise unrelated subjects. In addition, they are, indeed, also the integrators, bringing the work of the specialists together to the benefit of society.
Although not a formal academic source, the New Scientist magazine is a particularly appropriate resource for this. As a weekly popular-science magazine, it provides an overview across a wide spectrum of the newest developments in science and engineering (and its reports can, in any case, be followed up later in the more formal academic references). The aim is to find a "whole that is greater than the sum of the parts", while keeping aware that this will sometimes be confounded by Gödel-like inconsistencies.
This page takes its inputs from reports of experiments, and it attempts, by juxtaposition of otherwise unrelated reports, to infer, or to mine, conclusions that had not been apparent to any of the individual experimenters (NS, 30-Mar-2002, p54).
Quantum interaction is a field of study that uses the techniques of quantum mechanics, and recognises their applicability in more general fields (NS, 03-Sep-2011, p34; NS, 11-Dec-2010, p10).
We should not be surprised to see stable patterns persisting, since that is a tautology: by definition, stable patterns are the ones that do persist, while the millions of unstable ones disappear quite quickly. This is connected with Dawkins, and the selfish gene, but the point is much more general, and not restricted to living organisms.
Consider forced resonance: for example, a washing line hung with a series of pendulums, each of a different length. When you sway the line at a given frequency, one pendulum happens to be tuned to that frequency, and sways energetically in time; but many of the others respond too, in a complex way that extracts energy from the incoming swaying, at its given frequency, superimposed on their attempts to swing at their own natural frequencies. This sounds analogous to open systems (water falling down a channel under gravity, heat being applied to the bottom of a pan of water, single-celled organisms having nutrients flowing in at one end, and waste products out at the other), forced to live to the beat of the energy flow, despite really wanting to apply the second law of thermodynamics, and to die down in peace.
Given any random or chaotic environment, patterns are bound to form from time to time. The stable patterns will always out-live, and in that sense dominate, the unstable ones, by definition. There is a particular special case of this where those stable patterns have the property of inheritance (and hence of evolution). Evolution follows whenever there is also a mechanism for inheritance, in spawned copies of a given pattern, or for persistence, in the chronologically later manifestation of the given pattern.
This leads to the idea of self-writing narratives (Dennett 1991): the idea of starting to write a story, or a non-fiction description, and then realising that there are anomalies and inconsistencies in the narrative (on an earlier page, character-A said this, so why, on this page, would character-B believe this, or behave like that?). Gradually, the story starts to write itself. I see the annotated draft pages of past revisions of a document as being like a static snapshot of a brain in the process of thinking something through. That is the same 'machine' that this web page is attempting to bring to bear on these New Scientist articles (plus paraphrases of quotes found elsewhere, notably on the Internet).
There is a fine line to be trodden, between the controller giving the direction for the story, and stifling the message that is starting to emerge naturally.
This is just a brief section to note that religion cannot be dismissed as easily as Hawking indicates. Apart from the idea that the mammals and insects have been tamed by the flowering plants since the demise of the dinosaurs (and the Homo sapiens species by the cereals in particular) another view is that human beings could be merely the latest layer of convection currents to be set up, as a result of the second law of thermodynamics. In the same way that structures, such as convection currents in pans of fluid being heated over a flame, establish themselves precisely so that the hotter particles can flow unimpeded past the colder ones, in the process of the heat flowing from the hotter parts to the cooler ones, so humans might be merely agents for digging up any sequestered fuel reserves they can find.
Hawking was fond of pointing out that asking what was before (or will be after) the universe is as meaningless a question as asking what is north of the north pole, or south of the south pole. However, even though we cannot expect the questions, "where is the planet earth situated? what is either side of it?" to be answered with reference to degrees of latitude that are greater than 90°N or 90°S, we can still expect the questions to be answered (indeed, astronauts can always have a location specified in polar coordinates). Hawking continues that defining God as the embodiment of the laws of nature is merely substituting one mystery for another, but this claim works both ways. To give a label to what is beyond the beginning and end, Alpha-Omega (ΑΩ) for example, is all part of normal human practice, as a tag that can be used in normal human reasoning (NS, 08-Oct-2016, p52). Admittedly, this argument does not claim to attribute any type of will or personality to Alpha-Omega, and does not address Hawking's concerns over miracles (infractions of the laws of physics).
Tegmark notes how symmetry is the one property that a body of mathematics can exhibit, based solely on its internal construction (without the need for a third party vantage point outside), and that, bootstrapping up from there, this leads to all the properties of the universe that we observe with science. Certainly, Noether's Theorem might suggest that this is not an unreasonable hope (since it does, indeed, show that symmetries lead to the conservation laws that we observe.)
There is also the idea of ours being a participatory universe. Since the universe's information content is limited by its size, the early universe might have been more amenable to the retro-causality of quantum post-selection (which has been demonstrated in the lab and over a satellite transmission); the universe could thus have fine-tuned itself for the conscious beings that would eventually evolve to observe it. Quantum post-selection already gives a variant of the anthropic principle, but might this extend even further, to some sort of quantum Darwinism, acting so that the fittest wave-function collapses predominate, and give the impression of there being an objective reality to be measured by independent observers?
Quantum post-selection (NS, 30-Jun-2007, p18; NS, 30-Sep-2006, p36) has now been demonstrated over the 3500 km distance of a transmission to a satellite and back (NS, 04-Nov-2017, p12). Moreover, wave-function collapse has been shown to be not an instantaneous event, but one that takes time (NS, 10-May-2003, p28), and, by performing only weak measurements each time, can therefore be interrupted or even reversed (NS, 12-May-2007, p32).
From our perspective, there is no definite history of the universe. Using the present state of the universe as the input to infer its origins is more valid, and certainly more practical, than attempting to obtain the present state of the universe as the output (NS, 22-Apr-2006, p28). Indeed, the post-inflationary state of the universe might be derived from a Feynman diagram analysis of the 10^500 initial states that string theory implies (NS, 28-Jun-2008, p10).
Quantum post-selection and quantum Darwinism might lead to some idea of future humanity being part of that Alpha-Omega, with us, now, as part of its creation.
Talking about time runs up against the impossibility of our attempting to think what anything might be, outside of time or space (since, following Descartes, to think involves both time and material matter). Ultimately, it becomes tied up with the question of why there is something, rather than nothing, and indeed what it would even mean for there to be nothing (NS, 08-Oct-2016, p52). One encouraging possibility, though, is that paintings and sculptures are human artefacts (and therefore manifestations of thoughts and story-telling) that are static structures, outside of time (as opposed to dynamic structures like music, literature, dance).
An idea of it being both amongst us and surrounding us would be consistent with there being more than the usual 3+1 dimensions. The idea of it being outside of time, though, is more problematic, since that automatically precludes the idea of it creating, observing, thinking (though noting the exception of paintings showing how even man-made static structures can still tell a dynamic story).
Ever since the universe was more than 5.39×10^−44 seconds old, the universe appears to have adhered to all our current laws of physics. However, since the laws of physics of the universe must reside inside the universe, not as some abstract concept outside, they can only be as precise as can be calculated from the total information content of the universe. Allowing our theories to work with an infinite number of numbers, quotable to an infinite number of decimal places, shows that mathematics is an approximation of the universe rather than the other way round (NS, 17-Aug-2013, p32).
Deutsch and Marletto propose a way in which the laws of physics arise as 'constructors' that work on information (NS, 24-May-2014, p30). Not only would this offer an explanation for how the laws of physics arise, but it could also give an explanation of what constitutes knowledge (NS, 01-Apr-2017, p30), and the role of a knowledge creator (like a conscious mind).
Power is the rate of change of energy in time, and, by the same token, force is the rate of change of energy in distance; but force is also rate of change of momentum in time.
The lattice has been arranged so that by starting at any given node, travelling southwest involves taking the d/dt, travelling southeast involves taking the d/dx (and vice versa with integrals for travelling northeast and northwest).
In the lattice: P=power, E=energy, F=force, p=momentum, m=mass, L=action, mx=mass*distance, X=dp/dx=dm/dt, Y=dm/dx and Z=∫L.dt=∫m.x.dx.
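A quick numerical check of some of these lattice relations, for the illustrative case of a uniformly accelerated mass (the values of m, a and t are assumed), confirms that P=dE/dt=F.v and F=dp/dt=dE/dx:

```python
# Numerical check of the lattice relations, for a uniformly
# accelerated mass: P = dE/dt, F = dp/dt, and F = dE/dx.
m, a = 2.0, 9.81   # mass (kg) and acceleration (m/s^2), illustrative
t, dt = 3.0, 1e-6  # evaluation time and finite-difference step

def v(t): return a * t               # velocity
def x(t): return 0.5 * a * t * t     # distance
def E(t): return 0.5 * m * v(t)**2   # kinetic energy
def p(t): return m * v(t)            # momentum

P = (E(t + dt) - E(t)) / dt                       # power = dE/dt
F = (p(t + dt) - p(t)) / dt                       # force = dp/dt
dE_dx = (E(t + dt) - E(t)) / (x(t + dt) - x(t))   # force = dE/dx

print(P, F * v(t), F, dE_dx)  # P == F.v, and F == dE/dx == m.a
```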
It is surprising how much of physics can be inferred from simple thought experiments, based of course on the corresponding experimental observations.
Galileo noted that if you think, as most people did, that a heavy body, m1, will fall faster, at v1, than a light body, m2 falling at v2, there is a logical inconsistency if you next tie the two of them together with a short length of cord or chain, and drop the two of them together again. By the above logic, m1 should be trying to fall faster than m2, and be pulled back by it, while m2 will be trying to fall slower than m1, and be pulled downward by it. So, the two will fall together at some sort of average velocity, such as (v1+v2)/2 or √(v1.v2). But, if they are falling together with a taut cord or chain, they are acting as a single body with mass m1+m2. By the above logic, that should mean that the two of them tied together should fall even faster than v1 or v2 alone. Similarly, the same logical inconsistency arises if you assume, somewhat bizarrely, that the heavier object falls slower than the lighter one. The only consistent conclusion is that v1=v2=(v1+v2)/2=√(v1.v2)=v12, the velocity of the combined body. All bodies, dropped simultaneously, irrespective of their mass, accelerate under gravity by the same amount, and hence end up travelling at the same velocity as each other, at each moment in time.
Next, assume that the KE of an object is a function of mass and its velocity: KE=F(m,v). Suppose that the object is a ball of putty that is heading for a wall. When it hits the wall, the first law of thermodynamics suggests that the KE is turned to heat (the wall gets a bit warmer). If you repeat the experiment, the same amount of energy will be delivered to the wall, again. Therefore, if you take two balls of putty, and throw them towards the wall, simultaneously, and at the same velocity, they will deliver twice as much energy as when you threw just one ball of putty at the wall. Similarly, the same double amount is delivered if the two balls of putty are lightly stuck to each other, and travel as a composite ball with mass 2m. So, KE appears to be linear in m, and we can say that KE=m.f(v).
Now, we change the experiment, as sketched out in Coopersmith (2017), and have two balls of putty, each of mass m, and velocity v, heading for each other. (We can worry, later, about one of them having velocity v, and the other -v.) When they collide, they come to a halt, and both heat up from their combined KE, KE=2.m.f(v). Suppose, though, that instead you had been in a drone above one of the balls of putty, travelling at the same velocity as it. To you, that ball of putty would be stationary, and the other one would be approaching it at 2v. At the collision, the two of them suddenly end up travelling at velocity v away from your still speeding drone. So, the heating of the two balls is m.f(2v) less the kinetic energy that the new conglomeration still has afterwards, 2.m.f(v). Given that the experiment is the same in both cases, just observed from a different frame of reference (from the roadside, or from the drone), the amount of heating must be the same: 2.m.f(v)=m.f(2.v)-2.m.f(v). So, f(2.v)=4.f(v). We can conclude that f(v)=k.v².
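The frame-independence argument can be checked numerically; this sketch (with illustrative values of m and v) confirms that a quadratic f makes the heating agree between the two frames, whereas, for instance, a linear f does not:

```python
# Frame-independence check from the putty-ball thought experiment.
# Heat released must be the same from the roadside (2*m*f(v)) and
# from the co-moving drone (m*f(2v) - 2*m*f(v)), which forces
# f(2v) = 4*f(v), i.e. f(v) = k*v^2.
m, v = 1.0, 10.0  # illustrative mass and speed

def heat_mismatch(f):
    """Difference between the two frames' accounts of the heating."""
    roadside = 2 * m * f(v)
    drone = m * f(2 * v) - 2 * m * f(v)
    return drone - roadside

print(heat_mismatch(lambda u: 0.5 * u * u))  # zero: quadratic f works
print(heat_mismatch(lambda u: abs(u)))       # nonzero: linear f fails
```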
The statement, earlier, that "all bodies, dropped simultaneously, irrespective of their mass, accelerate under gravity by the same amount" leads to v=g.t, and more generally to v=u+a.t. It follows from this that k has the value ½: the distance fallen is the area of the triangular region under the sloping straight-line v-t graph (½.b.h, that is, ½.v.t), and equating the work done by gravity with the KE gained gives KE=½.m.v².
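Spelled out, the area argument runs as follows:

```latex
v = g t, \qquad h = \tfrac{1}{2} g t^2 \quad (\text{the area under the } v\text{-}t \text{ graph}),\\
KE = \text{work done} = m g h = m g \cdot \tfrac{1}{2} g t^2 = \tfrac{1}{2} m (g t)^2 = \tfrac{1}{2} m v^2.
```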
Fourier analysis gives Δk.Δx≥1/2 as an inherent limitation (the more fine and precise a pulse is in time, the wider the band of frequencies needed to define it), where k is the wavenumber, or spatial frequency, equal to 2π/λ. Then, with the simple substitutions E=hf and p=ℏk, Heisenberg's Uncertainty Principle drops out: ΔE.Δt≥ℏ/2, Δp.Δx≥ℏ/2 and ΔL.Δθ≥ℏ/2.
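The limit can be demonstrated numerically. The sketch below (grid sizes are illustrative) samples a Gaussian pulse, takes its Fourier transform, and checks that the intensity-weighted spreads saturate Δk.Δx=1/2, the case of equality:

```python
# Numeric check that a Gaussian pulse saturates the Fourier limit dx.dk = 1/2.
import numpy as np

N, L = 4096, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
psi = np.exp(-x ** 2)                     # Gaussian envelope, e^(-a.x^2) with a = 1

def stdev(u, w):
    # Standard deviation of u, weighted by (unnormalised) intensities w.
    w = w / w.sum()
    mu = (u * w).sum()
    return np.sqrt(((u - mu) ** 2 * w).sum())

dx = stdev(x, np.abs(psi) ** 2)

k = 2 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])   # wavenumber grid, k = 2.pi/lambda
psi_k = np.fft.fft(psi)
dk = stdev(k, np.abs(psi_k) ** 2)

assert abs(dx * dk - 0.5) < 1e-4          # the Fourier limit, dx.dk = 1/2
```

Any non-Gaussian envelope gives a product strictly greater than 1/2.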
Since the wave-function of any fundamental particle, such as a photon or an electron, can be considered to consist of a carrier wave (e^(i.k.x)) multiplied by an envelope (e^(−a.x²)), Heisenberg's uncertainty principle is, in effect, talking about the bandwidth of the particle due to the side-bands of its amplitude modulation.
It seems that measuring one of a separated but entangled pair of particles can be used to reduce the uncertainty on one of the measures, down to zero, but still not on the other; Heisenberg's uncertainty principle and entanglement are just two sides of the same coin (NS, 30-Apr-2011, p28), with either one applying, or the other, depending on the nature of the experiment. Indeed, it appears that it is entanglement that keeps physics from being even weirder than it actually is (NS, 21-Aug-2010, p33). There are calls for a quantum thermodynamics (NS, 07-Apr-2018, p32).
Waves can take all paths, Feynman-like, from one place (in phase space) to another. Each one, though, tends to interfere destructively with those taking slightly longer or shorter routes. Only the path of least action, with no shorter routes available, ends up not being completely cancelled out.
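This cancellation can be seen in a toy stationary-phase sum. In the sketch below (the action S(x)=x² and the window sizes are illustrative choices), each path is labelled by x and contributes an amplitude e^(i.S(x)); contributions near the least-action path (x=0) reinforce, while those in an equal-width window far from it almost completely cancel:

```python
# Toy stationary-phase sum: paths near the least-action path survive.
import numpy as np

x = np.linspace(-20, 20, 200001)          # labels for the family of paths
dx = x[1] - x[0]
amp = np.exp(1j * x ** 2) * dx            # amplitude e^(i.S) with S(x) = x^2

near = np.abs(amp[np.abs(x) < 1].sum())        # window around least action (x = 0)
far  = np.abs(amp[np.abs(x - 10) < 1].sum())   # same-width window far away

assert near > 10 * far                    # the far contributions self-cancel
```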
This then implies Heisenberg's uncertainty principle, as noted earlier, by expressing it in terms of information theory (NS, 23-Jun-2012, p8). Indeed, the thought experiment of throwing a stone into a pond already hints at this connection. The kinetic energy of the stone arriving at one point on the pond's surface means that that energy is initially concentrated, and will tend to diverge from there, either by the statistical mechanics of the second law of thermodynamics, or by the uncertainty in duration for the concentration of energy at a restricted point. This centres on an E=kT or an E=hf relationship, respectively. After that, once launched divergently, the energy continues to flow in a state of motion in a radial straight line, unless acted on by another force (Newton's first law of motion, but as modified by Einstein and by Schrodinger's equation).
In the case of 22 snooker balls on a completely frictionless table, in which one ball is suddenly given E units of energy, the first law of thermodynamics says that one invariant property, from that point on, is mean(energy)=E/22, the second law says that stdev(energy) starts large and gradually reduces, and the third law, endorsed by Heisenberg's uncertainty principle, says that stdev(energy) will never reach zero.
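The three statements can be illustrated with a toy redistribution model. The sketch below is not a full collision simulation: collisions are modelled (an assumption for illustration) as random pairwise exchanges that repartition each pair's energy while conserving the total. The mean stays pinned at E/22, the spread falls from its initially large value, but never reaches zero:

```python
# Toy model: 22 balls, one given all the energy, random energy-conserving exchanges.
import random

random.seed(1)
E, n = 22.0, 22
energy = [E] + [0.0] * (n - 1)          # one ball starts with all E units

def stats(es):
    mean = sum(es) / len(es)
    var = sum((e - mean) ** 2 for e in es) / len(es)
    return mean, var ** 0.5

mean0, sd0 = stats(energy)
for _ in range(10000):
    i, j = random.sample(range(n), 2)   # a random pair "collides"
    pool = energy[i] + energy[j]
    split = random.random()
    energy[i], energy[j] = split * pool, (1 - split) * pool

mean1, sd1 = stats(energy)
assert abs(mean1 - E / n) < 1e-9        # first law: mean(energy) is invariant
assert sd1 < sd0                        # second law: stdev(energy) reduces
assert sd1 > 0.0                        # third law: but it never reaches zero
```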
We are fortunate in the way that inductive logic works in our universe: the chemical experiment or the clinical trial that we do today, in the controlled environment of the laboratory, is a valid indication of how it will work tomorrow in the shopping mall. Not only are these symmetries already very useful in their own right, but Noether showed that they lead to the conservation laws (NS, 25-Apr-2015, p33; NS, 27-Jul-2013, p50): the Copernican principle leads to the law of conservation of momentum; the temporal equivalent leads to the law of conservation of energy (the first law of thermodynamics). Taken together, the simultaneous conservation of energy and of momentum lead to the second law of thermodynamics (two balls into a Newton's cradle leads to two balls out, rather than one ball twice as high, or three balls two-thirds as high, for example).
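The Newton's cradle example can be checked by brute force. In the sketch below (unit masses and speeds are illustrative), two balls arrive at speed v, and we look for n equal-speed balls leaving at speed w; momentum fixes w for each candidate n, and only n=2 also conserves energy:

```python
# Why "two balls in -> two balls out" in a Newton's cradle.
v = 1.0
solutions = []
for n in range(1, 6):
    w = 2 * v / n                       # momentum conservation: 2.v = n.w
    if abs(2 * v ** 2 - n * w ** 2) < 1e-12:
        solutions.append(n)             # energy conservation: 2.v^2 = n.w^2

assert solutions == [2]                 # only n = 2 satisfies both laws
```

One ball leaving twice as fast (n=1) would carry the right momentum but double the energy; three balls at two-thirds the speed would carry the right momentum but too little energy.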
Heisenberg's uncertainty principle implies vacuum energy (the uncertainty of the energy, and hence the energy itself, can never reach zero) and also the third law of thermodynamics (the uncertainty of the temperature, and hence the temperature itself, can never reach zero (NS, 18-Mar-2017, p10)). The existence of the vacuum energy might then have implied dark energy, had it not been a massive 120 orders of magnitude out in its experimental predictions (NS, 01-Nov-2003, p34), which, it has been suggested, might be due to a missing leakage term (NS, 27-May-2017, p28).
Special relativity follows from the Principle of Relativity (constant motion of the system cannot be determined by observations made completely inside the system), and the Relativity of Simultaneity (simultaneous events in one context are not simultaneous in another). General relativity follows from generalising the Principle of Relativity still further (acceleration cannot be distinguished from a gravitational field).
Peres showed that the axioms of quantum mechanics necessarily lead to the second law of thermodynamics, and to Schrodinger's equation being linear.
In line with, but slightly contrary to the article in the 13-Oct-2012 issue of New Scientist magazine, Noether's Theorem (or perhaps the principle of least action) might, instead, constitute the long-sought theory of everything.