Looking for workable definitions of some familiar terms
Following on from the previous chapter,
and the chapter before that,
we can ask what work we might expect the next MC to do.
If we assume that it might have something to do with artificial consciousness,
we first need to establish what natural consciousness is.
Moreover, this term is not alone,
and there are several related terms that we talk about freely in conversation,
without having a formal definition of what they are:
we have only a vague feel for what constitutes them.
These include the distinction between life and non-life;
and the distinction between free-will
and non-free-will (either deterministic or totally random).
Life
We start from the observation that ours is a universe that started thermodynamically far from equilibrium,
and continues to be so.
Energy is forever flowing from the regions of abundance to the regions of dearth.
That flow inevitably creates temporary structures on its way.
This chapter consequently discusses canal side-pans,
and the emergence of structure.
There is a suggestion that what we perceive by the notion of Will is
the ability of one out-of-equilibrium system to push another to a new out-of-equilibrium state.
At the end of the chapter,
the question arises as to whether the rate of build-up of new structure
can be quantified,
and whether it is what many propose as a fourth law of thermodynamics.
Emergence of structure under the second law of thermodynamics
If the second law of thermodynamics is likened to a canal system,
then the proposed fourth law of thermodynamics is like one with side-pans.
If the side-pan has infinite surface area, it can be infinitesimally shallow,
set at any height (for the same reason that the read-out of a water clock
can be equivalently on the upper or the lower reservoir,
but normally set at the half-way level for the canal lock).
However, it needs to be deeper (both upwards and downwards) near the lock itself,
tailing off exponentially (in the simplest cases, quadratically) further away
(the spatial equivalent of half-life),
to allow for the finite speed of water waves entering or leaving the lock.
Changing the material in this thought-experiment,
the limit becomes related to the speed of sound in that material;
changing it again, to an electron fluid in a parallel-plate capacitor,
and the limit becomes related to the speed of light scaled by the constants of the dielectric.
Thus, even in the absence of convergent forces, memory behaviour is inevitable:
as soon as a system consists of more than one component
(such as atoms in a crystal, or stars in a galaxy)
any probing (such as by a hot body) of the state of the whole system
will obtain an almost immediate result from the nearest component,
but time-delayed results from the components further away.
As soon as there is latency, there is memory.
The ultimate latency in the universe is the one attributed to the speed of causality, c.
Delay-line memory is inevitable whenever communication is attempted in space,
as astronomers routinely note.
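The claim that latency implies memory can be sketched with a minimal delay-line model (a hypothetical illustration; the function names are mine, not the source's): a value written into one end of the line re-emerges a fixed number of steps later, so the medium itself holds the recent past.

```python
from collections import deque

def make_delay_line(n_stages):
    """A delay line of n_stages: a value written in re-emerges
    n_stages steps later, so the line 'remembers' n_stages values."""
    return deque([0] * n_stages, maxlen=n_stages)

def step(line, value_in):
    """Advance one time step: read the oldest value out, push a new one in."""
    value_out = line[0]          # the value written n_stages steps ago
    line.append(value_in)        # maxlen discards the value just read
    return value_out

line = make_delay_line(3)
outputs = [step(line, v) for v in [7, 8, 9, 0, 0, 0]]
# Each input re-emerges exactly three steps after it was written.
```

This is the principle behind the historic mercury delay-line memories and, scaled up, behind the light-travel delays that astronomers exploit when they look back in time.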
If only one medium is involved, it can be analysed like the side-pans on a canal system;
if a second medium is involved (such as sand being moved around by a water-current)
it can be analysed through the work that is being done on the second medium;
beyond that, the calculation is more complicated.
Looking for a possible definition for life
There are many proposed definitions for what distinguishes life from non-life,
many of which attach importance to genetics and inheritance.
However, inheritance is just one particularly effective mechanism
for implementing the convergent processes of memory.
Even without these, though, self-sustaining structures are formed, standing-waves in the design space,
such as Bénard cells (NS, 05-Oct-2002, p30)
and the braid plains that form
when a head of water is discharged down a sandy beach (NS, 02-Sep-2000, p97).
Where there is convergence, the patterns become established and stable,
including examples like
valleys and waterfalls (NS, 23-Mar-2019, p20),
planetary weather systems (NS, 06-Oct-2001, p38),
catalysed formation of planets (NS, 23-Mar-2019, p15),
auto-catalytic chemical reactions (NS, 21-Jan-2012, p32).
This leads on to protein-based life (NS, 09-Jun-2001, p32),
convergent evolution (NS, 21-Jan-2012, p35),
and stable ecosystems, and in the extreme,
controversially encapsulated in the Gaia hypothesis (NS, 23-Mar-2019, p34).
The second law of thermodynamics is often summarised as the tendency of energy to flow spontaneously
from the hotter regions of an out-of-equilibrium system to the cooler regions.
In its rush to get the energy flowing, temporary structure is often created.
Water flowing down a sandy beach as the tide ebbs will carve out gullies,
and heap sand up in meanders, deltas and braid plains.
Even a beaker of water held over a Bunsen flame starts by doing this with thermal conduction,
until symmetry is broken enough for convection currents to set up,
and to start transferring the energy several orders of magnitude more effectively.
Once symmetry is broken, and the convection currents are set up over here in these places and this direction,
rather than over there or in the other direction, the structure is quite stable.
Any attempt to perturb it results in it returning to the established structure;
rivers are, after all, just convection currents, too,
and notably, rivers create valleys, while valleys gather water to create rivers,
so demonstrating the feedback mechanism that establishes the stability of the structure.
This is where the argument about our treatment of rivers as identifiable objects comes in,
despite their never consisting of the same set of water molecules from one second to the next.
This seems to be where we start to think of rivers as having a self.
It is all in the human mind, of course, since rivers do not think.
But it is just a continuum (albeit a punctuated one)
from there to patterns in the shale, to single-celled organisms,
to invertebrates, to fish, to amphibians, to reptiles, to mammals (including cats and humans).
Meanwhile, of course, we are all part of the universe;
my water molecules are only on loan,
and will be exchanged with others in a flux just as continuous as that of the river.
However, there is more to inheritable reproduction than this.
There is the potential for an exponentially growing population of near-perfect copies of successful designs.
This leads to a digital behaviour,
stabilising on successful genotypes,
that are immune to chaotic switching in individuals.
Human artifacts (long-case clocks, microprocessors) are designed with such immunity, too.
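The digital stabilising effect of near-perfect copying can be illustrated with a toy simulation (a hypothetical sketch; the population size, error rate and selection scheme are all my assumptions): a population of copies of a successful genotype, replicated with a small per-bit error rate under mild selection, keeps its majority genotype unchanged even though individual copies occasionally flip.

```python
import random

def replicate(genotype, error_rate, rng):
    """Copy a bit-string genotype, flipping each bit with small probability."""
    return [bit ^ (rng.random() < error_rate) for bit in genotype]

def generation(population, target, error_rate, rng):
    """Selection favours matches to the successful design; the best half
    survive and each leaves two near-perfect copies."""
    scored = sorted(population,
                    key=lambda g: sum(a == b for a, b in zip(g, target)),
                    reverse=True)
    survivors = scored[: len(population) // 2]
    return [replicate(g, error_rate, rng) for g in survivors for _ in (0, 1)]

rng = random.Random(1)
target = [1, 0, 1, 1, 0, 0, 1, 0]   # the established, successful genotype
pop = [list(target) for _ in range(32)]
for _ in range(40):
    pop = generation(pop, target, error_rate=0.01, rng=rng)
majority = [int(sum(col) > len(pop) / 2) for col in zip(*pop)]
# The majority genotype stays locked on the successful design,
# immune to the chaotic switching of individual copies.
```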
Evolution
Biological evolution features heavily in the list
in the first paragraph in the previous section.
Life in general, and RNA life in particular,
does indeed seem to be easy to get started (NS, 20-Aug-2016; NS, 24-Apr-2010, p6)
early on the newly-formed planet (NS, 25-Sep-2021, p14).
Drake's equation continues to be revised (NS, 03-Oct-2020, p36; NS, 25-May-2013, p6)
complete with revised definitions of
the habitable zone (NS, 29-Aug-2020, p46; NS, 08-Jun-2013, p40)
and abiogenesis zone (NS, 30-Mar-2019, p14).
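Drake's equation itself is just a product of factors, N = R* * fp * ne * fl * fi * fc * L. A minimal sketch follows; every parameter value below is an illustrative assumption of mine, not a figure from the source or from the cited revisions.

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Drake's equation: the expected number of detectable
    civilisations in the galaxy, as a simple product of factors."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Illustrative values only (every figure here is an assumption):
N = drake(R_star=1.0,   # star-formation rate, stars per year
          f_p=0.5,      # fraction of stars with planets
          n_e=2.0,      # habitable planets per planetary system
          f_l=0.3,      # fraction of those on which life starts
          f_i=0.01,     # fraction of those evolving intelligence
          f_c=0.1,      # fraction of those that communicate
          L=10_000.0)   # lifetime of the communicating phase, years
# With these numbers, N works out at 3 civilisations.
```

The revisions cited above amount to arguing over these factors, particularly f_l (the habitable and abiogenesis zones) and L.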
Even though life predates the emergence of the first cell membranes (NS, 27-Jul-2019, p7)
it is not clear
how three subsequent major leaps (structural integrity, metabolism and reproduction)
were made at once (NS, 08-Aug-2020, p34).
Experiments have shown, though, how self-forming vesicles
can act as a rudimentary cell membrane (NS, 14-Aug-2021, p19).
Multicellular organisms did start to emerge, and on multiple occasions
within the past 2.1 billion years (NS, 08-May-2021, p13; NS, 27-Jun-2015, p11),
and cells with multiple nuclei containing different DNA
emerged at least one billion years ago (NS, 05-Jun-2021, p17).
The mitochondrial event (NS, 27-Feb-2021, p23)
was the one that was a major turning point (NS, 12-Jan-2019, p28),
allowing a way for subsequent multicellular organisms to be powered sufficiently,
despite the constraints of the second law of thermodynamics (NS, 23-Jun-2012, p32),
and thereby enabling the pre-Cambrian explosion.
Intelligence comes at an enormous energy cost (NS, 17-Jul-2004, p35),
and the majority of species have evolved to work more efficiently by doing without it,
relying instead on instinctive, hard-wired behaviours.
Extraterrestrial life
It seems that carbon-based life is relatively easy to get started, and quickly too
(as supported by the fossil record on this planet, and suggested by the Miller-Urey experiments).
It seems, too, that intelligent, complex life is relatively easy to evolve from unintelligent complex life
(as supported by the multiple appearances on this planet (NS, 14-May-2022, p42),
in the great apes, dolphins, crows/parrots and octopuses).
The really difficult step appears to be the evolution from simple life to complex multicellular life;
the mitochondrial event, or its equivalent,
might have been such a fluke as to make ours the only planet in the universe
to have succeeded past this step
(the half-life of it not happening is very long).
There could be evidence, though, that perhaps such events
of the internalisation of one organism by another (NS, 24-Oct-2020, p28; NS, 10-Mar-2018, p54)
have not been so uncommon in the past, after all.
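The half-life framing of a rare event can be made concrete: for a constant probability per unit time (a Poisson process), the half-life of the event not yet having happened is ln 2 divided by the rate. A minimal sketch, with a purely illustrative rate:

```python
import math

def half_life(rate_per_year):
    """Half-life of 'the event not yet having happened' for a
    constant per-year probability: t_half = ln(2) / rate."""
    return math.log(2) / rate_per_year

# Purely illustrative rate: a one-in-a-billion chance per year
# gives a half-life of roughly 0.69 billion years.
t = half_life(1e-9)
```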
Another fortuitous contributor to our emergence is the presence of the earth's moon.
It stabilises our orbit (making a more stable platform for evolution to work with);
it causes tides of just the right magnitude to wash nutrients into the sea;
it is about 400 times smaller in diameter than the sun, and coincidentally about 400 times less distant,
meaning that solar eclipses are particularly dramatic events (total eclipses)
that shocked our ancestors into developing skills such as mathematics,
and into building large structures such as stone circles (NS, 13-Dec-2003, p35).
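The 400-fold coincidence can be checked against round published figures (the values below are standard approximations that I am supplying, not numbers from the source):

```python
# Approximate astronomical figures (well-known round values).
sun_diameter_km   = 1_392_000
moon_diameter_km  = 3_474
sun_distance_km   = 149_600_000   # 1 AU
moon_distance_km  = 384_400

diameter_ratio = sun_diameter_km / moon_diameter_km   # roughly 400
distance_ratio = sun_distance_km / moon_distance_km   # roughly 390

# Because the two ratios almost coincide, the moon's disc almost
# exactly covers the sun's disc during a total eclipse.
angular_size_ratio = diameter_ratio / distance_ratio  # close to 1
```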
Even if the emergence of intelligent life has happened multiple times,
each instance can still feel alone in the universe,
whether because the communication distances are too great,
because they exist(ed) at different times,
or because one or other is too basic to be capable of the processing necessary to communicate
(communications, memory, and processing).
As two final thoughts (Cohen 2021),
interestingly both from story-tellers working on the same iconic film,
Arthur C. Clarke said, "Two possibilities exist:
either we are alone in the Universe or we are not; both are equally terrifying,"
and Stanley Kubrick said, "The most terrifying fact about the Universe
is not that it is hostile but that it is indifferent."
Top of the evolutionary tree
The second law of thermodynamics arises in the human mind,
when we decide to stop tracking components individually,
and only take the bulk, black-box average behaviour.
However, plants and animals are demonstrably able to anticipate the seasons and the diurnal changes of our planet.
Sieves, ratchets, and semi-permeable membranes
are the unthinking devices that do the statistical averaging
(smoothing over the false positives and negatives)
and effectively treat the environment in a statistical, bulked parameter sort of a way
(with no conscious observer required).
It is the interaction of the organism with the environment
that leads to the notion of there being a flow to thermal time (NS, 06-Jul-2019, p32).
Each living cell
manages to achieve a reduction of entropy within the cell membrane,
keeping it all well away from thermodynamic equilibrium,
through repair and growth (and cell division)
albeit at the cost of having to take in low-entropy nutrients from outside,
and to expel high-entropy waste products back out again.
(The downside for any organism being particularly good at doing this is that, ipso facto,
it looks like a source of low-entropy nutrients
for the next organisms up in the food chain.)
At a finer grain than this,
convection currents are organised structure that also gets established
(and self-maintained, via feedback mechanisms) in a body of fluid that is differentially heated.
Multicellular organisms do the same within each of their cells,
but also then arrange for the environment of those cells (the organism's body)
to be kept maintained and cleaned, ejecting the waste products even further out.
Carnivores obtain their low-entropy nutrients by usurping the successfully well-maintained bodies of herbivores;
herbivores do it by usurping the successfully well-maintained bodies of plants;
photosynthesising plants do it by taking in high-entropy nutrients (carbon dioxide and water)
and using their astounding ability to collect energy from an abundant energy source in the sky,
producing low-entropy sugars
via endothermic reactions that capture the energy in sunlight.
So, it is really the plants that are doing all the heavy lifting on our planet.
If, Hofstadter-like,
colonies of insects can be considered to be super-organisms
that have more intelligence than their component parts,
then so too, Attenborough-like, could networks of plants, or even bacteria
(and ultimately, on to the Gaia hypothesis).
Of all the living organisms on this planet,
the clade that has dominated since the demise of the dinosaurs
is the Angiospermae (the flowering plants).
They are so successful that they have enslaved two classes (insects and mammals)
into doing their work for them
(pollination and seed dispersion;
plus, as David Attenborough noted on the role of elephants, forest clearance).
One particular family (the grasses) has focused on enslaving one particular species (Homo sapiens)
into turning over vast swathes of land to its propagation (lawns and parks,
but mainly fields of wheat, maize and rice).
Punctuated evolution
Though each of the laws of cooling operates in a smooth way,
the establishment of a new nested layer of stable pattern occurs in an almost quantised way,
as a sudden jump to a higher plateau,
like a node in a standing-wave in some abstract design space.
Abrupt changes in human society include
those of the agricultural, industrial and information revolutions;
in human abstraction from Popper's world-1, to world-2, to world-3,
where world-1 includes the progression from physics, to chemistry, to biology,
and where physics, in turn (Gell-Mann and Hartle (1990), p449),
progresses from wave-function collapse, to decoherence, to resonance,
and the fundamental particles of the standard model,
before leading on to the periodic table and chemistry;
and similarly, this particular planet's biology has undergone the punctuated evolution
of mammals from reptiles, from amphibians, from fish, from invertebrates,
and from the emergence of the first eukaryotes.
The process of setting up a convection current
involves a breaking of symmetry of a previously unstable equilibrium
(with there being no prior preference for location or direction of each convection current).
Thus, the stable structures so formed involve the acquisition of new bits of initial-condition information.
Memory is thereby implemented,
in accordance with Landauer's principle via the second law of thermodynamics.
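Landauer's principle puts a number on this link between acquired bits and the second law: erasing (or irreversibly acquiring) one bit costs at least kT ln 2 of dissipated energy. A minimal sketch, with room temperature as an illustrative choice:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact by the 2019 SI definition)

def landauer_limit(temperature_kelvin):
    """Minimum energy dissipated to erase one bit: k_B * T * ln 2."""
    return k_B * temperature_kelvin * math.log(2)

# At room temperature (~300 K), erasing one bit costs
# at least about 2.9e-21 joules.
E = landauer_limit(300.0)
```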
Where the physical laws lead to funneling (NS, 05-Oct-2002, p30)
and niche-construction (NS, 26-Sep-2020, p44),
convergence on, and establishment of, structure will result,
and be maintained despite perturbing forces.
Any hypothetical universe, chosen at random,
is bound to have pockets of convergence and divergence,
where those of convergence will inevitably win out;
by definition, stable structures persist, and unstable ones do not
(and piles of powder at the nodes of a standing-wave stay in place,
while those at the antinodes do not).
As Coopersmith notes (2017, The Lazy Universe),
rivers create valleys, and valleys gather water to create rivers.
Knocking a system away from a previous equilibrium always seems to lead to it finding its way,
either back to previous equilibrium, or else on to a new one.
The stable structure becomes a platform on which further structures can be built
(and life on board a ship continues as normal).
Collapsing clouds of hydrogen stabilise as stars,
but then start collapsing again once the hydrogen supply is exhausted,
stabilising temporarily as different classes of star on the way.
Planet earth, experiencing climate warming by more than a few degrees,
will surely stabilise on a new set of ecosystems
(though not necessarily one that includes Homo sapiens).
Possible measures for the fourth law of thermodynamics
One question is how quickly the new equilibrium point is established,
and how quickly any consequent new structures build up (NS, 29-Oct-2005, p51).
Indeed, it is noted that evolution takes time (NS, 19-Dec-2020, p50)
and on two different time-scales (NS, 18-Jun-2022, p43).
Lloyd (1990) measures the amount of structure created
against the underlying energy flow
(like a sort of Strouhal number for energy flow).
Chaisson suggests using energy flow density,
measured in ergs per gram per second (NS, 21-Jan-2012, p35).
Prigogine et al. (1984) were particularly interested in systems
that were permanently held out of equilibrium,
and Lovelock noted that life is recognisable by
the persistence of such a condition.
Deacon (2012) distinguishes between
simple morphodynamic processes (those that work to maximise energy flow)
and complex teleodynamic processes
(those that sequester away a private energy reservoir for future use).
Tsallis (NS, 27-Aug-2005, p34) proposes a formula
for computing the entropy of an out-of-equilibrium system,
based on the power-law p^q:
it reproduces Boltzmann statistics for systems that are close to equilibrium, with q close to 1,
and also seems to work, with higher values of q,
for systems that have external energy sources and are far from equilibrium.
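A minimal numerical sketch of the Tsallis formula, S_q = (1 - sum_i p_i^q) / (q - 1) in units of Boltzmann's constant, confirming the standard result that it tends to the Boltzmann-Gibbs-Shannon entropy -sum_i p_i ln p_i as q approaches 1 (the example distribution is arbitrary):

```python
import math

def tsallis_entropy(probs, q):
    """Tsallis entropy S_q = (1 - sum(p_i**q)) / (q - 1), in units of k."""
    return (1.0 - sum(p ** q for p in probs)) / (q - 1.0)

def shannon_entropy(probs):
    """Ordinary Boltzmann-Gibbs-Shannon entropy, -sum(p ln p)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

probs = [0.5, 0.25, 0.25]    # an arbitrary example distribution
# As q -> 1, the Tsallis value approaches the Shannon value.
near = tsallis_entropy(probs, q=1.000001)
exact = shannon_entropy(probs)
```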
One candidate would be the principle of maximum entropy production (NS, 06-Oct-2001, p38).
Another candidate is the Onsager reciprocal relations which,
to quote from Wikipedia, "express the equality of certain ratios
between flows and forces in thermodynamic systems out of equilibrium,
but where a notion of local equilibrium exists".
At the system (organism) level,
measures are proposed based on body-part complexity (NS, 05-Feb-1994, p37).
Crutchfield and Young (1990) propose
yet another way to measure complexity.
For computing,
the computer's "operations per second" rating is a measure of its speed,
suggesting that the number of operations executed is akin to some sort of measure of distance.
The computer's power is measured by the computing work done in unit time,
where that work is measured as force times distance.
The force sounds as though it might be related to something like Halstead's Language Level
(of a high-level language compiler, or of a low-level language instruction set).
The number of operations required to perform the given task (the distance)
is related to Knuth's order of complexity.
The throughput is related to the power (work per unit time),
while the latency is related to the number of operations plus the number of delay elements required,
such as in Sheeran's systolic arrays.
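The throughput/latency distinction above can be sketched with a toy pipeline model (the stage count and per-stage delay are illustrative assumptions): latency is the number of stages times the per-stage delay, while throughput depends only on the per-stage delay.

```python
def pipeline_metrics(n_stages, stage_delay_s, n_items):
    """For a simple pipeline: the first result emerges after
    n_stages * stage_delay (latency); thereafter one result emerges
    per stage_delay, so throughput is 1 / stage_delay."""
    latency = n_stages * stage_delay_s
    throughput = 1.0 / stage_delay_s                 # results per second
    total_time = latency + (n_items - 1) * stage_delay_s
    return latency, throughput, total_time

# A 5-stage pipeline with 2 ms stages processing 100 items:
latency, throughput, total = pipeline_metrics(5, 0.002, 100)
# latency 0.01 s; throughput 500 results/s; total 0.208 s
```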
Concluding remarks
This section has been concerned with
looking for a possible definition for life.
From a generalised version of Kirchhoff's current law,
the emergence of structure under the second law of thermodynamics appears
in the form of measures of latency and throughput of a pipeline,
with delayed flow setting up a generalised type of standing-wave
that we perceive as punctuated evolution,
and quantifiable as a possible fourth law of thermodynamics.
Comparative control experiments, and a generalised parallax effect.
Probabilities expressed as half-life.
Free-will
The points of bifurcation are inevitably, though not necessarily very usefully,
suggested as a possible entry point for a possible definition of free-will.
This leads to a discussion of
a possible principle of relativity for determinism and free-will.
First, though, we need to consider what constitutes Will, free or otherwise.
Looking for a possible definition for Will
This chapter considers a number of thought-experiments
involving two beings, A and B (each a conscious robot or human, and traditionally called Alice and Bob)
and several objects, X Y and Z (such as small rocks or parts of a spacecraft)
isolated in the depths of space.
When A tries to exercise her will in the combined system, A+B+X+Y+Z,
she notices that she meets with some resistance.
B does not do everything that A wants, but then neither does X (constrained by the laws of physics).
We are apt to declare things (including inanimate objects) as having a will of their own
when they do not behave as we expected or intended.
Perhaps this gives us the basis of what we mean by Will
(as distinct from free-will):
the controlled (intended) ability to push a target system away from its initial position.
In the system A+X+Y+Z,
A can notice a symmetry between her relationship to each inanimate object (X, Y and Z).
Moreover, the objects can be considered grouped as a single compound object, XYZ,
such as a spacecraft or rubble-pile asteroid,
and become her reference, with respect to which all measurements are taken.
But then, the being, A, also has component parts and limbs (A=h+a+a+t+l+l), and also tools.
Her extended-phenotype blurs between being thought of as external objects, and being part of the being's being.
Indeed, if A exerts her will on an object (a bowling ball, a fellow human being)
by rolling it up to the top of a hill,
to watch how it rolls back down again,
that object becomes part of her extended phenotype.
Thus, we could say that A exerts her will on a long-case clock when she winds up its weights.
But then, does this mean, by extension, that we think of those weights as exerting their will on the pendulum,
to prevent it from slowly gliding to its rest position?
Perhaps the definition of Will is more nuanced than this.
The falling weight part of the long-case clock merely acts to maintain the status quo on the pendulum part,
but perhaps we are more concerned with the ability of an energy flow (from the hot bath to the cold bath)
to build new structure, such as convection currents and braid plains.
The mind of Maxwell's Demon (an information engine) is intent on pushing the system
away from thermodynamic equilibrium.
Indeed, one proposal for a fourth law of thermodynamics
takes the form of a 'principle of increasing complexity'.
Bifurcation and causality
The universe seems to work with analogue functions that gradually build up,
rather than digital ones that switch instantaneously between representing a zero or a one.
A cause (like closing an electrical switch, or pushing on a brass lever in a Babbage-like machine)
must lead to a gradual, smoothly ramping-up effect, as the electrons and atoms accelerate.
Before and after a collision
(such as a white snooker ball hitting a red one, or the two plates of a clutch in a car engine coming together)
the components follow the usual laws of motion.
Just as nature abhors a vacuum, so it abhors any infinitely abrupt changes (discontinuities)
during the collision itself.
Moving the window of interest so that it straddles the moment of the collision, t0,
there must be a smooth and continuous transition
in all of the infinite set of derivatives
(x(t), dx/dt, d²x/dt², d³x/dt³, d⁴x/dt⁴,
and so on indefinitely, to the nth-order derivative).
The failure of the rigid body approximation can account for this,
through the speed of sound limitation within the material of the snooker ball
(which cannot exceed 36 km/s (NS, 17-Oct-2020, p10))
or the flexing and hysteresis of the mechanical linkage of the components of the car.
In a particle collider,
the t0 event can be blurred
through Heisenberg's uncertainty principle and quantum tunneling.
However, it is also possible for quantum behaviour to display abrupt changes,
provided these get smoothed out by statistical blurring of their combined occurrences.
Points of bifurcation are the seamless branch points
in the current trajectory of the system,
where either one path or the other could have been chosen.
Flat-functions (non-analytic smooth functions)
such as f(t) = exp(-1/t) for t > 0 with the clutch engaged, and f(t) = 0 otherwise,
and Friedrichs mollifiers are one way of modelling this smoothed-out transition,
with a piecewise-continuous approach to handle the point of bifurcation.
These correctly capture the constraint that
no part of the transition, even a smooth and continuous one, can start before t0,
since the effects cannot anticipate the cause:
it has no Taylor expansion.
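The claimed properties of the flat-function can be checked numerically: f(t) = exp(-1/t) for t > 0 (and 0 otherwise) is identically zero before t = 0, and its ramp-up after t = 0 is gentler than any power of t, which is why its Taylor expansion at the origin vanishes. A minimal sketch:

```python
import math

def flat(t):
    """The classic flat function: smooth everywhere, yet every derivative
    at t = 0 is zero, so its Taylor series at the origin is identically 0."""
    return math.exp(-1.0 / t) if t > 0 else 0.0

# Nothing happens before (or at) t = 0 ...
before = [flat(t) for t in (-0.1, -0.01, 0.0)]
# ... and the ramp-up is gentler than any power of t:
# f(t) / t**10 still tends to 0 as t -> 0+.
ratio = flat(0.01) / 0.01 ** 10
```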
Curiously, though, in the case of the car clutch example,
it is the deterministic system (the mechanics of the car)
that has the flat-function properties attributed to it,
not the system that supposedly has the overriding will (the human driver).
So, all that flat-functions have succeeded in doing is to indicate where the problem lies.
It is a bit like putting down marker flags in a mine field:
no explosives have been defused yet,
but at least we have pinpointed where they are located.
Indeed, an agent with overriding will is not in fact relevant here;
even in a driverless car,
the components on both sides of the clutch do not know, in advance,
that the car is about to pull away.
Similarly for a snooker ball about to be hit
(and equally for the one that is about to do the hitting);
at the moment of collision, our analytical tools consider them to be a single system,
and their respective states to continue to be an evolution from that.
This is a manifestation of the quantum measurement problem,
and applies all the way up to the classical level.
If there is a cause-and-effect event
that occurs in the brain as a result of an expression of free-will,
it must result in a blurred, smoothed-out change.
The equivalent of interplanetary transport networks (NS, 25-Mar-2006, p32)
steering between Lagrange points (NS, 21-Feb-2009)
can be visualised, too, for electrons teetering between taking one path or another,
or for a forced system at its point of chaotic balance
(divergent, and acquiring an extra bit of initial-condition information),
deciding whether to change (or not) the bit stored in the memory cell,
or whether to take the conditional branch (or not) in a computer program.
Momentarily, the system is left in the balance,
until the hardware convergently kicks in further to sway the decision.
Similarly for a pencil that is momentarily balanced on its point,
being nudged from one path to another, in a seamless way, by a system
that an outside observer might otherwise have labelled as displaying chaotic behaviour.
Some wonder whether this could be the delicate, highly sensitive,
places where some otherwise external, ephemeral effect could exert an influence;
however, this would merely move the problem,
and so is not particularly useful.
Meanwhile, with Nahmias' development of Libet's experiments from the 1980s (NS, 27-Sep-2014, p11),
we instinctively think of it as being some sort of instantaneous decision-making process:
either I have decided to move my wrist, or I have not.
But this cannot be the case.
Complex ideas, including decisions as to whether to go for the delayed gratification of two marshmallows,
must involve such large expanses of neural network
that even electromagnetic radiation takes a finite time to cross, let alone neural signals.
Looking for a possible definition for Free-will
In the human-centric view,
the pinnacle of the process of self-organising complex structures
is the evolution of those that led to the emergence of consciousness.
Perhaps related to the hard problem (NS, 02-Apr-2022, p38; NS, 29-Jan-2022, p48),
it is not clear what it is that gives our feeling of having free-will (NS, 21-May-2016, p32)
with the possible implication that it might be connected
with what gives us a feeling of the flow of time (NS, 04-Feb-2017, p31)
and, although none of the quantum mechanics interpretations capture it (NS, 02-Nov-2013, p34)
it is presumably connected to the laws of thermodynamics.
Meanwhile, for free-will, science can draw on observational data,
from many thinkers, even in metaphysics, over the millennia,
but is forever seeking a way to piece those observations together.
Like consciousness, though,
it is always the subject, never the object,
and so does not succumb to investigation under the scientific method,
which has as a central principle the need for objective observation.
'Free-will' could be the name given when each entity is a component of an encompassing society
whose persistence depends on the individual 'I will's having to give way, when appropriate,
to a top-down 'thou shalt' (for the communal good)
thereby leading to a view in which the individual is not the level of paramount importance.
Even a society that is purely deterministic
could set up a judicial system to eliminate wayward elements,
as part of its persistence mechanism.
An individual convection current does not ask to be brought into existence,
but now that it does exist, along with convergent feedback mechanisms to maintain its existence,
it is not surprising that a notion of self emerges,
along with a protective attitude to its continuation and progression.
Events can either be caused (by previous events) or be spontaneous (random, uncorrelated)
and our intuitive notion of free-will cannot be the result of either (NS, 12-Jun-2021, p28).
Free-will (NS, 03-Sep-2016, p35) cannot be the opposite of deterministic behaviour,
but rather a rich, chaotic behaviour on the boundary
between deterministic behaviour and spontaneously random behaviour (NS, 18-Apr-2020, p25).
The internal mechanisms of the thing that reputedly has the free-will
evaluate the various alternatives,
with a view to choosing one,
and to executing the actions that deliver the outcomes associated with that choice.
Free-will could be just the name that we give to behaviour
that is dominated by internal interactions (internal causes) within the system (NS, 06-Apr-2019, p34)
measurable using Tononi's integrated information theory.
Between changes of speed or direction,
all of the actions of the components of a clock
(or, more markedly, of a car with a clutch that is about to be engaged)
are smooth, continuous, and deterministic,
following the usual laws of motion.
Likewise, for a human mind,
immersion is the feeling that the tool
(the blindman's cane, the tennis pro's racquet, the driver's car,
the organism's own body parts)
has become part of the extended self (NS, 12-Dec-2020, p42).
Maybe this kicks in
when the expressions of free-will, via piecewise discontinuities,
are imposed on the components so frequently
that the periods of continuous behaviour are vanishingly short,
a view that would not be inconsistent with Libet's results (NS, 11-Aug-2012, p10).
Principle of relativity for determinism and free-will
The more deterministic the hardware of the universe is (Den83, NewSc)
the more able are its organisms to choose how to avoid the inevitable
(thereby making it non-inevitable, after all).
Panpsychism proposes that consciousness lies perhaps on
a continuum (NS, 20-Nov-2021, p42; NS, 02-May-2020, p40)
with the human mind defined to be unity,
and rocks and subatomic particles down close to zero, but not quite zero (NS, 02-Apr-2022, p38).
Since the hardware of the universe is not quite perfectly deterministic
(but is, even so, much closer to it than it is to being perfectly random)
then the act of copying will never be 100% accurate,
and evolution will inevitably arise.
Maybe there is a principle of relativity that can be applied here
(if free-will and physical laws are as indistinguishable
as gravity and acceleration are to an observer in a windowless spaceship (Sch79)).
And freedom, at least, is indeed relative,
as an affluent first-world voter ought to admit to a starving third-world peasant.
In the experiment earlier this chapter,
it would be harder for A to be a solipsist, than in an experiment (2) with (A+B) alone,
where each can consider the other as an object, like (A+X),
since A observes that B appears to exhibit freedom relative to X, Y and Z.
But X, Y and Z are also constrained by laws of physics.
Free-will and physical laws are as hard to tell apart as acceleration and gravity
(syntactic information and semantics?).
It would be easy for A to be animist, or even pantheist.
The being (A), all alone, has no way of knowing that she has any freedom.
But if, even in REM sleep, the being 'thinks, and therefore is',
without needing to look outside the carriage,
she knows and feels the force of her own free-will.
Moreover, always the subject, never the object,
consciousness is not observable from outside the carriage, looking in.
But free-will, of the objective type, is indeterminable.
There is no 'outside of the carriage' from which to look.
Even another being (B) cannot tell for her, since he is in the same carriage.
Consciousness in others is assumed by extension: 'the problem of other minds'.
Concluding remarks
This section has been concerned with
looking for a possible definition for Free-will
starting more modestly with looking for a possible definition for Will.
A generalised form of Kirchhoff's current law measures the
latency within a pipeline, of the form envisaged by Shannon,
but with points of bifurcation that have implications for causality.
Comparative control experiments, and a generalised parallax effect
lead to a possible principle of relativity for determinism and free-will.
Probabilities expressed as half-life.
The selective (symmetry-breaking) properties of
resonance, filters, and generalised standing-waves.
Consciousness
Consciousness is perhaps just an impression
that the subconscious brain concocts
to give it a survival advantage (NS, 15-Aug-2015, p26; NS, 07-Jul-2007, p36)
as a sort of cognitive prosthesis (NS, 07-Sep-2013, p28).
This would be compatible with Libet's results (NS, 11-Aug-2012, p10)
and might also explain the tendency of the mind to entertain notions of pantheism,
and post-event rationalisation.
Rather than our brains analysing incoming signals,
finding patterns of ever-increasing complexity,
and making sense of them by matching them against the internal representations,
it is the other way round (NS, 08-Jun-2019, p38; NS, 09-Apr-2016, p42, and also p20):
our brains generate the anticipated sensory data to match the incoming signals,
using internal models of the world (and body),
thereby giving rise to multiple hypotheses,
with the most probable one becoming tagged, Dennett-like,
within the 'distributed self',
as the one that will be considered to be our perception,
using a type of Bayesian analysis (NS, 31-May-2008, p30).
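The idea of multiple hypotheses, with the most probable one tagged as the percept, can be sketched as a discrete Bayesian update (the two hypotheses and the likelihood figures below are invented purely for illustration):

```python
def bayes_update(priors, likelihoods):
    """Update hypothesis probabilities given how well each predicts the signal."""
    posterior = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

# Two competing internal models of an ambiguous sound.
priors = {"speech": 0.5, "tinnitus": 0.5}
likelihoods = {"speech": 0.8, "tinnitus": 0.2}   # the signal fits 'speech' better
posterior = bayes_update(priors, likelihoods)

# The most probable hypothesis is the one tagged as 'our perception'.
percept = max(posterior, key=posterior.get)
```

The hypothesis that best anticipates the incoming signal wins the tag; with different priors (expectations), the same signal could be perceived differently, which is consistent with the treatment of dreams and hallucinations below.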
Dreams, hallucinations and tinnitus can therefore be considered to be
signs of a correctly working brain
in the absence of sufficient sensory input (NS, 05-Nov-2016, p28).
Such Bayesian-updating (NS, 26-Sep-2020, p40)
also has parallels to the scientific method (coming up with models to explain the observed data,
actively setting out to observe new data,
and keeping the model no more complicated than necessary)
being just the way that the human mind has been working, naturally, anyway.
Each hypothesis can then be refined in the light of the error signals that are generated
using a process of 'prediction-error minimisation' (NS, 04-Sep-2021, p44).
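A minimal sketch of prediction-error minimisation, assuming the simplest possible internal model (a single running estimate nudged by a fixed learning rate, both choices being illustrative assumptions):

```python
def minimise_prediction_error(signals, prior=0.0, learning_rate=0.1):
    """Refine an internal estimate by repeatedly reducing the prediction error."""
    estimate = prior
    for signal in signals:
        error = signal - estimate          # the prediction error
        estimate += learning_rate * error  # nudge the internal model toward the data
    return estimate

# A noisy stream centred on 5.0 gradually pulls the estimate toward 5.0.
stream = [4.8, 5.2, 5.1, 4.9, 5.0] * 20
estimate = minimise_prediction_error(stream)
```

The estimate converges on the value that the incoming signals agree upon, with the initial prior fading away exponentially; richer versions of the same loop underlie the predictive-processing accounts cited above.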
The mere repeated occurrence of this process
might also be what gives the brain a continuous assurance of its identity (NS, 03-Sep-2016, p33).
Consciousness might be just a shortcut
that has evolved for handling data compression (NS, 25-Nov-2017, p44)
or a model that the brain maintains of its own operation (NS, 21-Sep-2019, p34)
as the control system for the body.
The brain is an organ for providing central control to the rest of the body;
to do this, it needs to maintain an internal model of each of those body parts (such as where they are in space).
These internal models are what we call phantom limbs: we all have them,
and they are normal, but it is only amputees that have their attention drawn to them (NS, 21-Sep-2019, p34).
The brain needs to control itself, too
(to optimise resource allocation, via some sort of focus of attention)
so consciousness could be the brain's internal representation in this (NS, 10-Jul-2021, p34).
This would go a long way to explaining the so-called 'hard problem'
since the brain would be ascribing the feelings of red, or bitter, or pain, or happiness, to this internal model.
It might also explain why we find the homunculus idea so appealing in all our thinking of brain operation.
This also argues against the possibility of having consciousness
in a disembodied brain (NS, 27-Jun-2020, p28).
In the context of human dialogue,
suppose that Bob has just read, in a discussion about antiques, something about Tunbridge-ware,
and asks Alice if she knows what that is.
Alice replies, "Oh yes, it is very similar to Mauchline-ware".
Alice sounds like a knowledgeable expert on these types of antiques. Or is she?
Maybe she is only rote repeating a simple line of text that she has happened to read somewhere;
like a chatbot, or the Eliza program of the 1960s,
or indeed Higgins' first attempt at presenting Eliza, in Pygmalion.
To convince us that Alice really does mean the words she is saying,
we might expect, at the very least, her to be visualising, in her mind,
examples of objects in the two classes of ware.
Until recently, chatbots failed such a requirement,
since none could do more than reason verbally about the subjects of their conversations.
By the same argument,
humans can reason in language, visualisation, musical phrases, movement and dance;
but the outputs from radio or X-ray telescopes, or electron microscopes, or particle colliders, or sonar receivers,
are each ungrounded in direct experience.
The last of these ties in to the discussion of what it is like to be a bat,
and thence on to Dennett's multiple-drafts model of the human mind,
and Tononi's integrated information theory.
(Incidentally, Alice might not be completely correct in her comparison, but perhaps that is not the point.)
Then there are Boltzmann brains (NS, 18-Aug-2007, p26; NS, 28-Apr-2007, p33).
But we are talking about whether the development of disembodied brains is possible,
not the spontaneous appearance of them.
So they do not really relate to this discussion;
and luckily, Sean Carroll has arguments against their possibility, anyway (NS, 18-Feb-2017, p9).
In the Bottle Experiment,
one (or perhaps many) Z80 processor(s) would be wired up on a small circuit board with a memory chip,
and an input from the outside world that would inject noise, at a very low level, onto the data bus.
This noise would make blank instructions coming from the empty memory chip look like non-blank instructions.
Gradually, the memory would fill up with noise,
but with stable patterns establishing themselves
(by definition, stable patterns persist, while unstable ones die away quickly).
So, Tierra-like, it could be left running day and night,
for months on end, to see what patterns evolved in the memory.
As to what input to connect to the data bus,
one thought was to feed it with articles on selected newsgroups of Usenet, such as comp.ai.philosophy.
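The spirit of the experiment can be imitated in software (this is only a toy stand-in for the Z80 board: the even-parity 'stability' rule, the noise rate, and the memory size are all invented for illustration, not properties of any real processor): noise gradually writes into a blank memory, and only the patterns that satisfy the stability rule persist.

```python
import random

random.seed(7)  # reproducible illustration

MEMORY_SIZE = 256
memory = [0] * MEMORY_SIZE     # the blank memory chip

def parity_even(byte):
    """A crude, invented stand-in for 'being a stable pattern'."""
    return bin(byte).count("1") % 2 == 0

def step(memory, noise_rate=0.05):
    """One cycle: low-level noise writes to the bus; unstable patterns die away."""
    out = list(memory)
    for i in range(len(out)):
        if random.random() < noise_rate:
            out[i] = random.randrange(256)   # noise injected on the data bus
        if not parity_even(out[i]):
            out[i] = 0                       # unstable patterns decay quickly
    return out

for _ in range(500):
    memory = step(memory)

stable_cells = sum(1 for b in memory if b != 0)
```

After many cycles, roughly half the cells end up occupied by (here, even-parity) patterns: the memory has filled with noise, but only the stable patterns have persisted, by definition.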
It is uncertain as to whether anything interesting would indeed evolve, and to what (self-assigned) purpose.
There are many layers of supporting information missing between the low-level AI's workings,
and the surface-layer patterns in tweets and postings.
That was one of the major lessons learned on the Cyc project of the 1990s.
The aim was to connect an AI up to an encyclopedia, and see what emerged.
One early stumbling block was finding that a large part of the knowledge about the world, common-sense knowledge,
was not normally found in an encyclopedia
(such as, "If the president is in the Oval Office, where is his left leg likely to be?")
Also, a recent paper finds that AI models collapse if they overdose on their own input:
"In one example, a model started with a text about European architecture in the Middle Ages and ended up,
in the ninth generation, spouting nonsense about jackrabbits."
By this token, therefore, human consciousness is self-supporting:
it focuses on what it perceives as being important, and what it focuses on becomes what is important.
Patterns emerge at a certain level of granularity, and not at others, by natural means.
The ideal of a disembodied consciousness has parallels with thermodynamics, and the ideal case of a closed system.
A perfectly closed system cannot exist, since we cannot look inside it to observe it (information cannot leak out,
and the energy of our probe cannot enter in).
Similarly, a perfectly reversible Carnot engine, even if it could exist,
can only work if it takes infinitely long to complete its cycle.
Even so, the concept of the closed system is an important ideal,
on which the theories of thermodynamics can be built.
This also hints at parallels with Turing decidability, in the realm of computing engines.
The concept of disembodied consciousness could still end up
as the cornerstone of the theory behind a physically grounded consciousness.
Taking the third-person view
For a passenger in a wind-up clockwork car,
the mass of the car would be decreasing as the energy of the spring is used up,
seemingly powering the rest of the world to whizz by.
The human mind does not default to this perspective, though,
but instead tries to imagine a third-person, objective view of the car from outside.
Generating new ideas
When a person dies, we can count up the mass that they leave behind,
along with any charge and spin.
Each is just a simple number.
What about syntactic information, and semantic information?
There is also affinity within the extended phenotype (I think of my arms and legs as being part of me, and likewise my home and tools).
These have been acquired gradually through the person's lifetime,
by sorting and filtering what to gather and what to discard.
Brainstorming might involve word-association, for syntactic information,
and idea-association for semantic information
(including by simile, metaphor, analogy, model, and allegory)
followed up by curation.
The model of thinking,
including deductive logic,
that we have maintained since the ancient Greeks,
might need to be superseded (NS, 27-Feb-2016, p34).
There is even a suggestion that the scientific method might be too stringent,
and that we might need to entertain theories
that will always be beyond experimental testing (NS, 27-Feb-2016, p38).
Indeed, there are already problems with replicating published experiments,
and the associated difficulty publishing papers that report negative results (NS, 09-Apr-2022, p45).
Moreover, it is noted that scientists actually spend most of their time
building up the weight of confirming evidence,
rather than applying the scientific method of
looking for contradictory cases (NS, 10-May-2008, p44).
What do we expect of the next MC?
The overall aim remains that of building our next tools for us to use in our constant battle against
the second law of thermodynamics.
It is to be assumed that nature, via evolution, will already have trodden this path.
If what we seek is a new machine class,
to be the enabling technology of the next industrial revolution,
then perhaps it is one that manifests artificial consciousness.
If so, we need first to be sure of what we mean by natural consciousness,
and will
and free-will.
Kaku (2011) notes, albeit somewhat journalistically, that emotions are necessary in an AI,
for it to be able to rank the choices during its decision-making.
He also defines consciousness as the ability to sense and recognise the environment (the hard-problem),
to sense and recognise itself (self-awareness),
and to simulate the future (to plot strategy).
The last of these means that such machines would be able to anticipate potential problems
that we might not have been aware of,
given that their simulations of the future consequences
would be more accurate than ours.
In the light of all this, the next chapter considers how the next MC might be built.
Alternatively, the reader is referred back to the table of contents.