[Figure: flow qi entering a module]
This applies to anything, including electric charge, matter (such as road traffic, or water flowing in pipes), energy, or information. Whenever there is less coming out of a system (throughput) than there is going in, the difference must be delayed in internal storage (latency) and must be growing in magnitude. As matter accumulates, the pipeline becomes heavier; as energy accumulates, the energy store not only has more energy to release again, but it becomes hotter, scaled by the heat capacity of the system; as information accumulates, the internal memory becomes more occupied.
In an attempt to combine the models of Shannon and Carnot, this leads to a possible generalisation of the diagram, in which something flows through the module (qi to qo), with some of it being tapped off (notionally intentional work, W, and unintentional heat, θ), and with the possibility of injecting extra material at the input (intentional G or H transform functions, and unintentional noise).
Depending on context, several of these can be zero. When considering the water flowing under a water wheel, or electrons flowing through a component, qo=qi, and all the other terms are zero (unless there are leaks in the channel). If the context is energy flow, then Eo<Ei since mgho<mghi and work (W) and internal heating (θ) are being tapped out of the system. For a Shannon communication channel, no work is tapped off, but the noise input (N) is significant. (It is noted that this description presently mixes the notions of the input and output of energy and signal, as in the distinction between electrical engineering and electronic engineering, and hence in the mind of the human engineer.)
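As a concrete illustration of this generalised module, the following is a minimal sketch (the FlowModule name, its field names and the numeric values are illustrative assumptions, not notation from the text beyond qi, qo, W and θ) showing how the special cases above fall out of one balance equation.

```python
# Minimal sketch (names and values are illustrative assumptions): a module
# through which a quantity flows, with some of it tapped off as work and heat,
# and with extra material possibly injected at the input.

from dataclasses import dataclass

@dataclass
class FlowModule:
    q_in: float       # qi: quantity entering the module
    injected: float   # intentional (G/H transform) or unintentional (noise) additions
    work_out: float   # W: intentionally tapped off
    heat_out: float   # theta: unintentionally tapped off

    def q_out(self) -> float:
        """qo: whatever is left to flow out of the module."""
        return self.q_in + self.injected - self.work_out - self.heat_out

# Water under a water wheel (or electrons through a component): qo = qi.
channel = FlowModule(q_in=10.0, injected=0.0, work_out=0.0, heat_out=0.0)
assert channel.q_out() == channel.q_in

# Energy flow: Eo < Ei because work and internal heating are tapped out.
energy = FlowModule(q_in=10.0, injected=0.0, work_out=3.0, heat_out=1.0)
assert energy.q_out() < energy.q_in

# Shannon channel: no work tapped off, but noise injected at the input.
shannon = FlowModule(q_in=10.0, injected=0.5, work_out=0.0, heat_out=0.0)
print(channel.q_out(), energy.q_out(), shannon.q_out())
```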
Conversely, whenever there is more of anything coming out of a system than there is going in, the excess must be coming out of internal storage, and must be diminishing in magnitude. It can only continue, though, until the initial capacity has been exhausted. Capacitor discharge, and Newton's and Fourier's laws of cooling, suggest an exponential tailing off of such an excess, a half-life, since it cannot continue into negative values.
The thermodynamic entropy is increasing during this process, though the Shannon entropy of the given syntactic sequence (statistical uncertainty) can either be increasing or decreasing. Tidying a bedroom can result in the lowering of the Shannon entropy, but involves a large amount of metabolism and respiration, so an overall increase in thermodynamic entropy. Likewise, the ability of the living cell to reduce the entropy of its innards comes at the expense of increasing it externally (Zurek (1990)). Kondepudi (1990) notes that natural selection, and the processes of evolution, act on the Shannon entropy, to keep it as low as possible, but that it is only S that allows machines to do work, and hence that for Maxwell's demon, E=S.k.T, with S measured in nats, and T the temperature of the heat-sink. Thus, this has a connection with the proposed fourth law of thermodynamics.
Blank memory is like a cold sink at absolute zero, with the lowest Shannon entropy, even though writing useful memory subsequently increases the algorithmic information content, and reduces the semantic, algorithmic complexity (algorithmic chaotic unpredictability). Sagawa and Ueda (NS, 14-May-2016, p28) took this further and proposed (borne out by experimental results) that an extra term needs to be added to account for mutual information, and the way that the act of measurement leads to correlation between the system and the apparatus of its memory. It can be further noted that, as chip-testers on microelectronics fab lines can testify, sequential logic hardware (with memory behaviour) exhibits much more complex behaviour than does combinatorial logic.
We live in a universe in which symmetries are apparent, leading to their consequential conservation laws. It makes no difference to the results that we obtain from a physics experiment if we run it one minute earlier than we had originally planned, or one minute later, or with the apparatus displaced one metre to the right, or one metre to the left, or orientated on the bench one degree round to the west, or to the east. That is only true of time-, position- and orientation-independent results. If the physics experiment involved making an astronomical observation, then it is not directly independent of these. To say that the orbit of the planet Mars is characterised by x²/a²+y²/b²=1 seems to abstract away, and to allow for Mars to be anywhere, and everywhere, round its orbit, while to say x=a.cos(ωt+α) and y=b.sin(ωt+α) introduces the parametric notion of time (where ω is a constant, at a frequency that is a characteristic of the internal construction of the system, and α represents the broken symmetry).
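A minimal sketch of this point (the values of a, b and ω are illustrative assumptions, not real orbital elements): starting the parametric description one time-step later only shifts the phase α, while every point still satisfies the time-free equation x²/a² + y²/b² = 1.

```python
# Sketch (a, b and omega are illustrative assumptions): shifting the start
# time t -> t + dt changes only the phase alpha, while every point still
# satisfies the time-free orbit equation x^2/a^2 + y^2/b^2 = 1.

import math

a, b = 2.28, 2.27          # semi-axes (arbitrary units)
omega = 2 * math.pi / 687  # characteristic frequency (one cycle per 687 steps)
alpha = 0.0                # the broken symmetry: where on the orbit we started

def position(t, phase):
    return a * math.cos(omega * t + phase), b * math.sin(omega * t + phase)

for t in range(0, 700, 100):
    x, y = position(t, alpha)                 # original run
    x2, y2 = position(t - 1, alpha + omega)   # run started one step later
    # Both runs trace the same ellipse...
    assert abs(x**2 / a**2 + y**2 / b**2 - 1) < 1e-9
    # ...and the delayed run is indistinguishable once the phase absorbs the shift.
    assert abs(x - x2) < 1e-9 and abs(y - y2) < 1e-9
print("Time-shifted runs trace the same orbit; only alpha differs.")
```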
There can be a divergent element to the process: which way down the hill, which direction of fall for a pencil balanced on its point, which pattern of fragments for a previously intact porcelain tea-cup falling to the floor; milk diffusing in a cup of tea, ink diffusing in a glass of water, possessions jumbling in a bedroom.
Only the symmetry-breaking step is truly divergent; all the subsequent steps are just the usual process of conversion of PE to KE, and then to heat and noise.
Similarly for the nucleus of a radioactive element that is teetering between decaying now, or not just yet, depending on whether the residual strong force's attraction beats the electrostatic repulsion within the nucleus; this takes it into the world of chaos theory. Meanwhile, a microprocessor, or a long-case clock, leaves out the symmetry-breaking transition, and is always falling convergently.
Given that all the components in such a system are moving randomly, the only constants that can be noted in our gas laws are those that are derived from statistics, such as the mean behaviour, and the standard deviation.
We do not measure absolute energies, but the difference in the energy of interest from the base case. The potential energy of a 1 kg lump of coal held 1 m from the floor is 9.81 J, unless the intention had been to let it fall to the ground rather than to the floor of a top-floor physics laboratory, or down to the sea level of one built near the ria of Southampton Water, or if we had intended the coal to be burned on its way down, or to undergo nuclear fusion.
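A quick worked check of the 9.81 J figure, and of how the choice of reference level changes the answer (the heights below the laboratory floor are illustrative assumptions):

```python
# Worked check of E = m*g*h for a 1 kg lump of coal (the heights below the
# laboratory floor are illustrative assumptions, not measurements from the text).

g = 9.81   # m/s^2
m = 1.0    # kg

def potential_energy(height_above_reference_m):
    return m * g * height_above_reference_m

print(potential_energy(1.0))          # 9.81 J relative to the laboratory floor
print(potential_energy(1.0 + 30.0))   # relative to the ground, 30 m below a top-floor lab
print(potential_energy(1.0 + 45.0))   # relative to sea level, 45 m below the lab
```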
The whole balloon of gas, colliding against a piston or turbine blade, can be considered to be a single Lagrangian particle, with a Strouhal-like ratio of nRT/(½mv²) (where n is the number of moles of atoms of gas, and m and v are the mass and velocity of the whole assembly).
A computer processing unit will always fall down a highly controlled convergent path in the course of program execution, to the extent that program verification tools (or the Newtonian laws of motion, in a mechanical example) can predict how it will execute every time. However, this behaviour only emerges from that of order-from-disorder, as discussed next.
With the second law of thermodynamics, energy naturally, and spontaneously, flows from regions of surplus to neighbouring regions of dearth. This is a convergent process: a ball at the top of a hill will inevitably roll down, at some time or other. What diverges is the proportion of energy dissipated: the ball at the bottom of the hill has lost some potential energy from when it was at the top, and similarly the falling pencil, the fragments of porcelain, the particles of milk or ink, and the microprocessor drawing energy out of the battery. Each of these can be reversed, but only by a system that is, in turn, divergently dissipating energy. So, the whole system (the ball being pushed up the hill again by a bull-dozer, the oscillating pendulum being kept in motion by the descending weight via the escapement mechanism, the computer being forced to execute backwards), when viewed from far enough away, simply looks like a convergent system divergently dissipating energy.
As soon as the behaviour becomes quantised, with platforms of punctuated evolution, any overall law will be grainy, but with a smoothing out that increases with the square-root of the component incidences. The binomial curve for the results of coin-tossing illustrates this well. For example, in the case of the decay rate in a lump of radioactive metal, supposing that the lump starts off with one mole of atoms, after waiting for a period of one half-life, the lump will be down to containing a mean of 3×10²³ atoms of the original metal, plus or minus a standard deviation of 4×10¹¹, namely ±0.00000000013%. Similarly for the diffusion equation, and similarly in the double-slit experiment, when one particle is fired in at a time, and contributes to the diffraction pattern formed by later particles.
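A quick check of those figures, treating the survival of each atom after one half-life as a coin toss, so that the number of survivors is binomial with mean Np and standard deviation √(Np(1−p)):

```python
# Quick check of the figures in the text: after one half-life, each of the
# N ~ 6x10^23 atoms has survived with probability p = 0.5, so the count of
# survivors is binomial with mean N*p and standard deviation sqrt(N*p*(1-p)).

import math

N = 6.022e23      # one mole of atoms
p = 0.5           # survival probability after exactly one half-life

mean = N * p
sigma = math.sqrt(N * p * (1 - p))

print(f"mean survivors      ~ {mean:.1e}")            # ~ 3.0e23
print(f"standard deviation  ~ {sigma:.1e}")           # ~ 3.9e11
print(f"relative spread     ~ {sigma / mean:.1e}")    # ~ 1.3e-12, i.e. ~0.00000000013%
```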
The presence of noise can indeed be a constructive component. For example, when using a digital odometer for measuring accurate distances, or when settling powders into an equilibrium position, or a garden hose into a straight line.
Convergent processes are creators of hidden symmetry, while divergent processes are symmetry-breaking (needing extra bits to specify the initial conditions). The pencil balanced on its point is in a position of unstable equilibrium, and can fall in any direction with equal probability. Once it starts to fall, though, and is not restored by local perturbations, it converges ever more strongly on continuing to fall in the same direction. When viewed in reverse, in the cine projector, the pencil will converge on the unique upright position; the broken porcelain teacup (or scrambled egg) will assemble to the only one correct jigsaw puzzle solution; and the block of uranium and lead will converge towards being a block of uranium, albeit perhaps not necessarily retracing the decayed atoms in the precise reverse order. Indeed, playing the cine film backwards in the projector, the system converges on a symmetric state (bits are not required to remember the initial conditions, such as which uranium nucleus had decayed next, or which direction the pencil fell in), but there is still the question of when the event happened (and is now unhappening in the projector).
Chaos can emerge in the simplest of situations, even if only three particles interact with each other, notably through the three-body problem (NS, 04-Apr-2020, p19). All macro objects, including protons, are just clusters, with a characteristic half-life (as discussed in the next section), and will eventually decay.
Digital systems are chaotic systems that are kept from going chaotic, other than rarely. Increasing the number of atoms increases the probability of some of them veering several standard deviations astray, but equally increases the body of atoms that stay close to the mean.
It is possible to generalise the application of half-life as a statistical parameter, and to assign meanings to negative and imaginary half-life, and to laws of cooling. Discharging of a capacitor (or, indeed, any decay-law type of memory), discharging of the heat from a hot region (law of cooling), discharging of a radioactive source (radioactive decay): each one obeys the same sort of negative-exponential law. The half-life of an event is the period of time over which, statistically, the chances of the individual events happening are 50%. By the same token, then, it can be used to characterise the arrival of individual photons at any given pixel of the back screen of the double-slit experiment. Traditionally, it is applied to the decay event of a radioactive atomic nucleus. However, it can equally be applied to any of the examples given earlier, as well as to the chances of a molecule encountering another and undergoing a chemical reaction, or simply of one vibrating molecule passing on some of its vibrational energy to a neighbour (a thermal conduction law of cooling).
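A minimal sketch of this reading of half-life (the population size, half-life and time step are illustrative assumptions): the half-life is converted into a per-interval probability, 1 − 2^(−Δt/half-life), applied independently to each candidate event; the ensemble then traces out the familiar negative-exponential curve.

```python
# Sketch (population size, half-life and time step are illustrative
# assumptions): a half-life is just a way of stating a probability. Over an
# interval dt, the chance of any given event having happened is
# p = 1 - 2**(-dt / t_half); over t_half itself, p = 0.5 by definition.

import random

def simulate_survivors(n, t_half, dt, steps, seed=1):
    rng = random.Random(seed)
    p_event = 1 - 2 ** (-dt / t_half)   # per-step probability of the event happening
    survivors = n
    history = [survivors]
    for _ in range(steps):
        survivors -= sum(rng.random() < p_event for _ in range(survivors))
        history.append(survivors)
    return history

history = simulate_survivors(n=10_000, t_half=5.0, dt=1.0, steps=20)
for step, count in enumerate(history):
    expected = 10_000 * 2 ** (-step / 5.0)     # analytic negative-exponential law
    print(f"t={step:2d}  simulated={count:5d}  expected={expected:7.0f}")
```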
An antique porcelain teacup, or a Roman statue, has an inevitable eventual state of being smashed, but has a half-life of its survival before then, during which it becomes an ever more valuable, and rare, survivor. The antique porcelain teacup also has a half-life of remaining in the shopkeeper's window, before it is eventually sold at its exorbitant ticket price. The half-life is not necessarily a simple constant, but a function of other parameters, such as the quality of the workmanship versus the ratio of the ticket-price over market-estimate.
Porcelain teacups are tending monotonically to the broken state, as is everything else, including: human lives (where the half-life is a function, as summed up in actuarial tables); functioning computers; a given law of physics being refuted; buying lottery tickets (or tossing assemblies of coins or dice) until a winning combination comes up; conversion of the universe's matter to iron. On observing a birth, we can confidently predict a death at some point in the future, but with no indication of what might happen in between (other than taxes).
With a black-hole, an external observer can never see material crossing the event horizon, since the signal becomes asymptotically red-shifted, and weak. If this material includes an intrepid astronaut, the external observer cannot know if the astronaut fired his rockets at the very last moment, just before crossing the point of no-return. So, just as Bekenstein and Hawking used black-holes to make a link to the laws of thermodynamics, so they can also be used to link to Popper's model of scientific method: we can come up with a theory that the astronaut eventually crossed the event horizon, and this theory can be refuted (by noticing the astronaut re-emerging, perhaps decades or centuries later) but no-one can definitively prove it.
The advancement of human knowledge is a monotonic path, provided there are not too many major set-backs like the burning of the library of Alexandria, or the fall of the Roman Empire. The number of attendees joining an on-line meeting close to the allotted hour shows how this is connected, too, to queuing theory. Just as the binomial distribution can be shown to merge with the Poisson and normal distributions, so too with the power-law of chaos theory, and the violation of Bell's theorem.
The monotonic arrow of the second law of thermodynamics can be of either type, increasing or decreasing, as in the water clock whose read-out can either be on the upper reservoir or on the lower one. Dirty things left out in a place where the dirtiness is not being replaced will gradually become clean. Any monotonic function can be looked on as a resource, capable of driving a Crooks-like engine. Similarly, unstructured things can start to become structured, as in a well-formed computer program converging, perhaps by divide-and-conquer, on its solution, and as in Maynard Smith's observation on the evolution of complexity (NS, 05-Feb-1994, p37): so, the second law of thermodynamics can give rise to localised order, as well as to generalised disorder.
There is a monotonic tendency for all structure to be temporary, forever drawn to the position of least energy. Everything in the pipeline of the fourth law of thermodynamics will eventually come out of that pipeline under the second law of thermodynamics. It will all unravel in the end.
The first law of thermodynamics follows from Noether's theorem. The others are captured by the laws of cooling (Fourier's, Newton's and Stefan's, respectively for conduction, convection and radiation): the net flow of energy is zero if there is no temperature difference between the parts; the rate at which energy flows from the hotter part to the colder part is positive, and from the colder part to the hotter part negative; and hence there can never be a place that can cool a region down to absolute zero. Finally, the laws of cooling are a type of measure of half-life: the greater the temperature difference between the parts, the greater the number of collision events as vibrating molecules pass vibration energy on to each other, with the half-life defined as the period at which the probability is 50%.
The rate at which energy flows from the hotter region to the cooler one is proportional to the temperature difference between the two regions. Rate of energy flow is power, and is automatically positive in the direction of flow from the hotter region to the cooler one. Similarly, Crooks' fluctuation theorem: if the dynamics of the system satisfies microscopic reversibility, then the forward time trajectory is exponentially more likely than the reverse, given that it produces entropy. In space, the energy flow from the hotter region to the cooler one is quantified as a force.
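A discrete sketch of this flow law (the conductance, heat capacities and starting temperatures are illustrative assumptions): the power flowing between two regions is proportional to their temperature difference, so it is automatically positive from hot to cold, zero when they are equal, and the difference itself decays with a characteristic half-life.

```python
# Sketch (heat capacities, conductance and temperatures are illustrative
# assumptions): two regions exchange energy at a rate proportional to the
# temperature difference, so the flow is positive from hot to cold, zero when
# the temperatures are equal, and the gap decays exponentially.

k = 0.05                     # conductance between the regions (per time step)
C_hot, C_cold = 1.0, 1.0     # heat capacities
T_hot, T_cold = 370.0, 290.0

for step in range(100):
    power = k * (T_hot - T_cold)   # rate of energy flow, hot -> cold
    T_hot -= power / C_hot         # dt = 1 per step
    T_cold += power / C_cold
    if step % 20 == 0:
        print(f"step={step:3d}  T_hot={T_hot:7.2f}  T_cold={T_cold:7.2f}  flow={power:6.3f}")

# The difference roughly halves every ln(2)/(2*k) ~ 6.9 steps: a law of
# cooling is a kind of half-life on the temperature difference.
```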
Regions of abundance of electrons discharge towards regions of dearth, just as energy flows from hot regions towards cooler ones. Similarly, with particles undergoing diffusion.
Cascades of overlapping and interleaved half-lives (short along the conduction routes, and long across the channel walls) give rise to equilibrium, and even to punctuated evolution, since the system is balanced between the half-life for decay and the half-life for increasing structure. That structure is a type of entropy (Shannon complexity), monotonically building as the system converges towards building it, despite the fact that that structure will be constantly in a process of decay as the second law of thermodynamics ultimately wins out.
We could imagine writing a process-interaction simulator based on modelling all the half-lives. A loose pile of leaves, swept one step down from a terrace, will have a shorter half-life in the direction of later being blown one step further down, than the half-life of being blown back up to a higher step. Similarly, molecules of water in a pipe might be more compressed at one end than the other (due to the pressure gradient being generated by a pump) and so the half-life on those molecules subsequently moving further down the pipe is shorter than that of their moving up the pipe, and infinitely shorter than that of their passing radially out through the material of the pipe walls. Half-life is just a parameter that is being used to measure probability, and is serving the same sort of purpose here as the probabilities that are applied in Feynman diagrams. Indeed, it is interesting that probabilities play a similar role in classical physics (in the realm of large-number, bulk behaviour) as they do in quantum physics, despite having a different ontological origin.
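A minimal sketch of such a process-interaction simulator, using the pile-of-leaves example (the terrace geometry and the two half-lives are illustrative assumptions): each possible transition carries its own half-life, converted each tick into a probability 1 − 2^(−Δt/half-life); the shorter downhill half-life gives the leaves a net drift down the terrace.

```python
# Minimal sketch of the imagined process-interaction simulator (the terrace
# geometry and the half-lives are illustrative assumptions): every possible
# transition is characterised by a half-life, converted each tick into a
# probability 1 - 2**(-dt/t_half). A shorter half-life downhill than uphill
# gives the leaves a net drift down the terrace.

import random

T_DOWN = 5.0      # half-life (ticks) of a leaf being blown one step down
T_UP = 50.0       # half-life of being blown one step back up
DT = 1.0

def p(t_half):
    return 1 - 2 ** (-DT / t_half)

rng = random.Random(0)
leaves = [10] * 100            # 100 leaves, all starting on step 10 of the terrace

for tick in range(200):
    for i, step in enumerate(leaves):
        r = rng.random()
        if r < p(T_DOWN) and step > 0:          # blown one step further down
            leaves[i] = step - 1
        elif r > 1 - p(T_UP) and step < 10:     # blown back up a step
            leaves[i] = step + 1

print("mean step after 200 ticks:", sum(leaves) / len(leaves))  # well below 10
```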
Kirchhoff current. Newton-Raphson, with the half-life lower in the direction of convergence (and infinite where there is no initial guess). Resonance and standing-waves. Parallax, and comparative experiments. Q/Q0 = 1 − 2^(−t/ΤHL), hence Q/Q0 = 1 − e^(−t.ln2/ΤHL), where Q is the charge, so dQ/dt is the current.
One useful demonstration would be to use this to show that Ohm's law emerges consistently from it. (It is a bit of a circular argument, since Ohm's law was used in the derivation of the laws for capacitor discharge, but showing internal consistency of the equations is still a reassuring aim.) We could consider a ladder network, like a chain of resistive Christmas-tree lights (connected to a d.c. power supply, for simplicity) and allow for there being parasitic capacitance across each resistor. At any one moment, charge is more likely to jump down a step than to jump up a step, thereby changing the voltage of that node via the usual Q=C.V formula. Statistical mechanics should then show that the most stable arrangement is for the charges to be evenly distributed, leading to equal voltage drops across each resistor, thereby confirming the usual potentiometer voltages at each node, and the implication therefore that V=I.R.
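A sketch of that ladder-network demonstration (the values of R, C, the number of nodes and the supply voltage are illustrative assumptions). It uses the mean-field version of the charge-hopping picture: the net effect of charge being more likely to jump towards the lower-voltage neighbour is a current (Vi − Vi+1)/R into each node's parasitic capacitance; relaxing to the steady state then gives equal voltage drops, the usual potentiometer division, consistent with V=I.R.

```python
# Sketch of the resistive ladder (R, C, N and the supply voltage are
# illustrative assumptions). Mean-field version of the charge-hopping picture:
# the net drift of charge towards the lower-voltage neighbour is a current
# (V[i] - V[i+1]) / R into each node's parasitic capacitance. Relaxing to the
# steady state gives equal voltage drops, i.e. potentiometer division, V = I*R.

N = 10           # number of resistors in the chain
R = 100.0        # ohms, each resistor
C = 1e-6         # farads, parasitic capacitance at each internal node
V_SUPPLY = 12.0  # d.c. supply across the whole chain
DT = 1e-6        # seconds per relaxation step

V = [V_SUPPLY] + [0.0] * (N - 1) + [0.0]   # node 0 held at supply, node N at 0 V

for _ in range(50_000):
    currents = [(V[i] - V[i + 1]) / R for i in range(N)]
    for node in range(1, N):               # end nodes are held fixed
        V[node] += (currents[node - 1] - currents[node]) * DT / C

drops = [V[i] - V[i + 1] for i in range(N)]
current = drops[0] / R
print("voltage drops:", [round(d, 3) for d in drops])   # all ~ V_SUPPLY / N
print("I =", round(current, 5), "A; I*R =", round(current * R, 3), "V per resistor")
```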
Since the second law of thermodynamics features so heavily in the above discussion, it is instructive to explore the implications of what would happen if it were possible to have a house inside which entropy and time run backwards. People could place things just inside the entrance (broken teacups, decaying bunches of flowers) and pick them up the next day fully restored. From our perspective, the house would need a power supply. Meanwhile, the neuronal machinery of the occupant of the house would be thinking forwards from the moment of taking in the restored objects, then watching them decay, to the moment of putting them out at the door. Looking out of the window, it would appear to him that it was the rest of the universe for which time was running backwards, and which necessarily needed to be connected to a power supply. Indeed, it would appear to be his house that is supplying at least some of that energy, via the power-supply connection that we outsiders think is supplying his house. Plus, he would see his energy source as being the heat coming in through his roof.
The breaking teacup, the rotting bunch of flowers, and the clockwork Turing machine executing BubbleSort on a list of integers, are all convergent processes, but all generate heat (whose dissipation is a divergent process); reversing the former can be framed as a different convergent process, but the reverse of the latter cannot be framed as a divergent process. To the occupant of the house, the power source is the heat coming in through the roof, and the dissipation is out through the mains cable, so convergent. Perhaps we could get round this simply by tracing that low-entropy origin back round, via the energy input through the roof, to the CMB or Big Bang of the same external universe as we use. After all, we do not perceive our heat dissipation as powering the rest of the universe.
Inside a living cell, the cell membrane stands for the walls and roof of the house, and the DNA and protein computing engines stand for the occupant of the house. Any damage to these is repaired by that machinery, keeping the entropy low, at the expense of having to take in low-entropy nutrients through the cell membrane, and to expel high-entropy waste products. To those proteins, it feels like the clocks are constantly being set back. Like the resetting of a water clock, or of a graduated candle, the unwanted DNA computation is constantly being re-initialised. This sounds like it could be the reversal process that is being sought in the steam locomotive Turing machine analogy. Except, for that to work, it would need to be a reversed computation period, not simply a reinitialised computation event. We could ask what would happen if we occasionally stopped the computer, mid execution, and forced it to run backwards for a while, before resuming its forward run from that new point. The forward-running computer is on a convergent process (there is only one correct list of those integers arranged in ascending order) so the backward process is divergent (there are many, many ways to unsort, or jumble, the list).

In art, this manifests as the creation of tension and resolution. In Western music, motifs of notes arranged in an ascending phrase are followed by motifs of notes arranged in a correspondingly descending phrase. It seems to mimic the tone of the human voice expressing a question, and then contemplating the consequences as a response. It also seems to echo the need for a circle of dancers to do an initial routine of steps, followed by their reversal, to bring everyone back to their initial positions, before finally moving on to the next stage of the dance. Just as Jane Austen's Pride and Prejudice, or Mozart's Turkish March, or a dance round a Maypole, are abstract concepts, compared to the details of their content, so perhaps a forward and backward running computer program might carve out a more abstract object in some design space.
This section has considered the utility of representing probabilities in terms of half-life. Through the generalised notions of Kirchhoff's voltage (symmetry) and current (conservation) laws, through latency and throughput of pipelines, and the devices of Shannon, Carnot and Turing, this ties in with the laws of cooling and capacitor discharge. We can even use this to attach meanings to values of half-life that are negative, imaginary, or mapped out in static space.
Comparative control experiments, and a generalised parallax effect, lead to a consideration of forced reversibility. The selective (symmetry-breaking) properties of resonance, filters, and generalised standing-waves are manifest in the various types of punctuated evolution.
Communication in time is memory, but in space is just communication, with a characteristic velocity, v, equal to dx/dt. With a servo-motor, that v can be suddenly brought to zero, or reversed, to put the read-head at the chosen position. With processing, the power of the computer, or work done per second, is measured in instructions per second (akin to distance per second) times the force of the average instruction (the language level of the instruction set). Like the servo-motor, though conventionally run at top speed in the forward (convergent) direction, it can in principle suddenly be brought to zero, or reversed in the backward (divergent) direction. The convergent direction is used to reduce raw data down to headline summaries on PowerPoint slides, or a list of integers into a sorted list (theoretically, a reversible process, since no integers are destroyed), or lumps of radioactive metal into decay products. (Nuclear reactions are easier to engineer to run backwards than chemical reactions, since it is not necessary to collect all the ingredients back again, where antiparticles can simply be emitted instead. However, it does still involve the convergent process of collecting in dissipated heat, and concentrating it back in the atomic nuclei.)
Biological tissue is composed of heat-engines, lifting loads from high-entropy points to low-entropy points. A biological organ is composed of multiple tissues, each building a separate pile. A biological organism is composed of multiple organs, each building a separate structure (along the lines of Deacon's teleodynamic systems).
| | Communications | Memory | Processing |
|---|---|---|---|
| Implemented by Communications | Communications | Since relativity and TD2 ensure that spatial communication is also a transfer in time, a delay-line loop can implement memory | Cannot be done, since passive filters, alone, cannot make an active one |
| Implemented by Memory | The information is first stored within a physical device (a sheet of paper, a memory chip or floppy disk) and then is carried to its destination | Only dynamic memory is pure memory (capacitive DRAM, EEPROM, Flash, inductive core or bubble); the others are but engineering emulations | Cannot be done, since passive filters, alone, cannot make an active one |
| Implemented by Processing | The side-effect of getting information from A to B, provided that it is configured as a passive filter, not changing the data that is passing through (the semantic input is set to 1 or ‘Id’) | Since forward and backward computation both have positive execution times, a level-restoring double NOR-gate loop, with parasitic capacitance, can implement memory | Processing |
This chapter has considered the implications of cycles within cycles, and the interplay between convergence and divergence. This leads on to half-life, the laws of cooling and capacitor discharge, and memory. At the core of all of the models of program execution is the notion that the hardware must include provisions for communication, memory and processing. The workings of the universe seem to be constrained by a number of limitation principles, as explored in the next chapter. Alternatively, the reader is referred back to the table of contents.