

Looking for workable definitions of familiar terms

Following on from the previous chapter, the overall aim remains that of building our next tools for use in our constant battle against the second law of thermodynamics. It is to be assumed that nature, via evolution, will already have trodden this path.

There are many projects to choose from that could possibly form the start of the next revolutionary device, but most of them, despite the revolutionary claims of their backers, are really just cases of much more of the same. Of all the potential candidates, Artificial Consciousness perhaps stands out as the most significant change that researchers are attempting to bring about in future machines. Man is capable of conscious thought, despite being built on biochemical machinery. So, if man is capable of conscious thought, so ought other machines to be, if constructed in the right way.

Before we can build machines with artificial consciousness, we need to establish what natural consciousness is. However, this term is not alone: there are several terms that we use freely in conversation without having a formal definition of what they mean; we have only a vague feel for what constitutes them. These include the distinction between life and non-life, and the distinction between free-will and its absence.

Life

We start from the observation that ours is a universe that started, and continues to be, thermodynamically far from equilibrium. Energy is forever flowing from the regions of abundance to the regions of dearth. That flow inevitably creates temporary structures on its way. This chapter, therefore, discusses side-pans, and the emergence of structure. There is a suggestion that what we perceive by the notion of Will is the ability of one out-of-equilibrium system to push another to a new out-of-equilibrium state. At the end of the chapter, the question arises as to whether the rate of build-up of new structure can be quantified, and whether this is what many propose as a fourth law of thermodynamics.

Emergence of structure under the second law of thermodynamics

The throughput has a mean value, with any fluctuations characterised by the standard deviation. A pipeline in which a constant throughput has been established, with a constant latency between input and output, can be represented in a Shannon "T" diagram, with the throughput flowing from the input to the output, and the latency able to superimpose extra fluctuations at the output. If the throughput at the input is varied while the latency is kept constant, the throughput at the output varies after the latency time has elapsed. If the latency is varied while the throughput at the input is kept constant, the throughput at the output varies as the surplus material goes into, or comes out of, storage.

T-diagram
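The constant-latency behaviour described above can be sketched as a minimal simulation, assuming the pipeline behaves as a simple FIFO delay line (the function name and parameters here are illustrative, not from the original):

```python
from collections import deque

def simulate_pipeline(inputs, latency):
    """Constant-latency pipeline modelled as a FIFO delay line:
    each input value reappears at the output exactly `latency`
    time-steps later, so a variation in throughput at the input
    is only seen at the output after the latency has elapsed."""
    in_transit = deque([0] * latency)  # material already in the pipeline
    outputs = []
    for value in inputs:
        in_transit.append(value)              # new material enters
        outputs.append(in_transit.popleft())  # oldest material leaves
    return outputs

# A step change in input throughput emerges after the 2-step latency.
print(simulate_pipeline([1, 1, 5, 5, 5], latency=2))  # → [0, 0, 1, 1, 5]
```

Varying the latency instead would correspond to growing or shrinking the queue, with the surplus material going into, or coming out of, storage; as soon as there is latency, there is memory.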

If only one medium is involved, it can be analysed like the side-pans on a canal system; if a second medium is involved (such as sand being moved around by a water current), it can be analysed through the work that is being done on the second medium; beyond that, the calculation is more complicated. If the second law of thermodynamics is likened to a canal system, then the proposed fourth law of thermodynamics is like one with side-pans. If the side-pan has infinite surface area, it can be infinitesimally shallow, set at any height (such as the half-way level for the canal lock), but it needs to be deeper (both upwards and downwards) near the lock itself, tailing off exponentially further away, to allow for the finite speed of water waves entering or leaving the lock. Changing the material, the limit becomes related to the speed of sound in that material; changing it again, to an electron fluid in a parallel-plate capacitor, the limit becomes related to the speed of light scaled by the constants of the dielectric. Thus, even in the absence of convergent forces, memory behaviour is inevitable: as soon as a system consists of more than one component (such as atoms in a crystal, or stars in a galaxy), any probing (such as by a hot body) of the state of the whole system will obtain an almost immediate result from the nearest component, but a time-delayed result from the components further away. As soon as there is latency, there is memory. The ultimate latency in the universe is the one attributed to the speed of causality, c. Delay-line memory is inevitable whenever communication is attempted in space, as astronomers note.

Looking for a possible definition for life

There are many proposed definitions for what distinguishes life from non-life, many of which attach importance to genetics and inheritance. However, inheritance is just one particularly effective mechanism for implementing the convergent processes of memory. Self-sustaining structures are formed, such as Bénard cells (NS, 05-Oct-2002, p30) and the braid plains that form when a head of water is discharged down a sandy beach (NS, 02-Sep-2000, p97). Where there is convergence, the patterns become established and stable, including examples like valleys and waterfalls (NS, 23-Mar-2019, p20), planetary weather systems (NS, 06-Oct-2001, p38), catalysed formation of planets (NS, 23-Mar-2019, p15), and auto-catalytic chemical reactions (NS, 21-Jan-2012, p32). This leads on to protein-based life (NS, 09-Jun-2001, p32), convergent evolution (NS, 21-Jan-2012, p35), and stable ecosystems, and, in the extreme, to the view controversially encapsulated in the Gaia hypothesis (NS, 23-Mar-2019, p34).

On heating one side (usually the bottom) of a beaker of water, according to the second law of thermodynamics, energy (heat) will spontaneously transfer from the hotter regions to the cooler regions. At first this will be by conduction, with cooler water molecules blocking the attempted passage of warmer ones, until the symmetry is broken and a one-way system sets itself up, in a pattern of convection currents. So, the structure is established precisely by the second law of thermodynamics trying to speed the flow of energy from the hotter parts to the cooler parts. 'Life' could be just the name that we give when persistence is achieved via a survival-of-the-fittest mechanism; the surrounding energy flows converge on persistence of the structures (NS, 18-Mar-2017, p11). This puts living things on the same spectrum as stable convection currents. Viewed in this way, DNA-based life in general, and gas-guzzling human culture in particular, is just the latest layer of convection current to have established itself on one particular atmosphere-bearing rocky planet that is being heated on one side by its star.

Evolution

The monotonic arrow of the second law of thermodynamics can, in fact, be of either type, increasing or decreasing, as in the water clock whose read-out can either be on the top reservoir or on the bottom one. Dirty things left out in a place where the dirtiness is not being replaced will gradually become clean. Any monotonic function can be looked on as a resource, capable of driving a Crooks-like engine. Similarly, unstructured things can start to become structured, as in a well-formed computer program converging, perhaps by divide-and-conquer, on its solution, and as in Maynard Smith's observation on the evolution of complexity (NS, 05-Feb-1994, p37): so, the second law of thermodynamics can give rise to localised order, as well as to generalised disorder.

The process of evolution features heavily in the list in the first paragraph in this section. Life in general, and RNA life in particular, does indeed seem to be easy to get started (NS, 20-Aug-2016; NS, 24-Apr-2010, p6) early on the newly-formed planet (NS, 25-Sep-2021, p14). Drake's equation continues to be revised (NS, 03-Oct-2020, p36; NS, 25-May-2013, p6) complete with revised definitions of the habitable zone (NS, 29-Aug-2020, p46; NS, 08-Jun-2013, p40) and abiogenesis zone (NS, 30-Mar-2019, p14). Even though life predates the emergence of the first cell membranes (NS, 27-Jul-2019, p7) it is not clear how three, subsequent, major leaps (structural integrity, metabolism and reproduction) were made at once (NS, 08-Aug-2020, p34). Experiments have shown, though, how self-forming vesicles can act as a rudimentary cell membrane (NS, 14-Aug-2021, p19). Multicellular organisms did start to emerge, and on multiple occasions within the past 2.1 billion years (NS, 08-May-2021, p13; NS, 27-Jun-2015, p11), and cells with multiple nuclei containing different DNA emerged at least one billion years ago (NS, 05-Jun-2021, p17) and notably through the internalisation of one organism by another (NS, 24-Oct-2020, p28; NS, 10-Mar-2018, p54). However, it is the mitochondrial event (NS, 27-Feb-2021, p23) that was the one that was a game changer (NS, 12-Jan-2019, p28), allowing a way for subsequent multicellular organisms to be powered sufficiently, despite the constraints of the second law of thermodynamics (NS, 23-Jun-2012, p32), and thereby enabling the pre-Cambrian explosion. Intelligence comes at an enormous energy cost (NS, 17-Jul-2004, p35), and the majority of species have evolved to work more efficiently by doing without it, relying instead on instinctive, hard-wired behaviour. Even so, the evolution of intelligence appears to have been fairly easy, with its independent emergence on this planet several times (NS, 14-May-2022, p42).

It seems that carbon-based life is relatively easy to get started, and quickly too (as supported by the fossil record on this planet, and suggested by the Miller-Urey experiment). It seems, too, that intelligent complex life is relatively easy to evolve from unintelligent complex life (as supported by its multiple appearances on this planet, in the great apes, dolphins, crows/parrots and octopuses). The really difficult step appears to be the evolution from simple life to complex multicellular life; the mitochondrial event, or its equivalent, might have been such a fluke as to make ours the only planet in the universe to have succeeded past this step. As an extra thought, even if it has happened multiple times, each instance can still feel alone in the universe, whether because the communication distances are too great, because they exist(ed) at different times, or because one or other is something too basic to be up to the processing necessary to communicate (communications, memory, and processing).

Top of the evolutionary tree

The second law of thermodynamics arises in the human mind, where we decide to stop tracking components individually, and only take the bulk, black-box average behaviour. However, plants and animals are demonstrably able to anticipate the seasons and the diurnal changes of our planet. Sieves, ratchets, and semi-permeable membranes are the unthinking devices that do the statistical averaging (smoothing over the false positives and negatives) and effectively treat the environment in a statistical, bulked-parameter sort of way (with no conscious observer required). It is the interaction of the organism with the environment that leads to the notion of there being a flow to thermal time (NS, 06-Jul-2019, p32). Each living cell manages to achieve a reduction of entropy within the cell membrane, keeping it all well away from thermodynamic equilibrium, through repair and growth (and cell division), albeit at the cost of having to take in low-entropy nutrients from outside, and to expel high-entropy waste products back out again. (The downside for any organism that is particularly good at doing this is that, ipso facto, it looks like a source of low-entropy nutrients to the next organisms up in the food chain.)

This is graphically true of the herbivores that usurp the bodies of the plants, and of the carnivores that usurp the bodies of the herbivores. At the head of this chain, though, the photosynthesising plants take high-entropy nutrients and produce low-entropy sugars, via endothermic reactions that involve capturing the energy in sunlight. If, Hofstadter-like, colonies of insects can be contemplated that have more intelligence than their component parts, then so too, Attenborough-like, could networks of plants, or even bacteria (and ultimately, on to the Gaia hypothesis). Of all the living organisms on this planet, the clade that has dominated since the demise of the dinosaurs is the Angiospermae (the flowering plants). They are so successful that they have enslaved two classes (insects and mammals) into doing their work for them (pollination and seed dispersal, and, as David Attenborough noted on the role of elephants, forest clearance). One particular family (the grasses) has focused on enslaving one particular species (Homo sapiens) into turning over vast swathes of land to its propagation (lawns and parks, but mainly fields of wheat, maize and rice).

Possible measures for the fourth law of thermodynamics

Prigogine et al. (1984) were particularly interested in systems that were permanently held out of equilibrium, and Lovelock noted that life is recognisable by the persistence of such a condition. Deacon (2012) distinguishes between simple teleodynamic processes (those that work to maximise energy flow) and complex morphodynamic processes (those that sequester away a private energy reservoir for future use).

Tsallis (NS, 27-Aug-2005, p34) proposes a formula for computing the entropy of an out-of-equilibrium system. It happens to give the correct power-law, p^q, to generate Boltzmann statistics for systems that are close to equilibrium, with q close to 1, and also seems to work, for higher values of q, for systems that have external energy sources and are far from equilibrium.
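As a minimal sketch, the standard Tsallis form S_q = (1 − Σ p_i^q)/(q − 1) can be computed directly, with the Boltzmann-Gibbs (Shannon) entropy recovered in the limit q → 1 (entropies here in nats; the probabilities are illustrative):

```python
import math

def tsallis_entropy(probs, q):
    """Tsallis q-entropy S_q = (1 - sum p_i^q) / (q - 1).
    As q -> 1 this tends to the Boltzmann-Gibbs (Shannon)
    entropy -sum p ln p, handled here as a special case."""
    if abs(q - 1.0) < 1e-12:
        return -sum(p * math.log(p) for p in probs if p > 0)
    return (1.0 - sum(p ** q for p in probs)) / (q - 1.0)

p = [0.5, 0.25, 0.25]
print(tsallis_entropy(p, 1.0))       # Boltzmann-Gibbs limit
print(tsallis_entropy(p, 1.00001))   # q close to 1: agrees closely
print(tsallis_entropy(p, 2.0))       # far-from-equilibrium regime
```

For q close to 1 the two values agree to several decimal places, matching the claim that the formula reduces to Boltzmann statistics near equilibrium.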

One candidate would be the principle of maximum entropy production (NS, 06-Oct-2001, p38). Another candidate is the Onsager reciprocal relations which, to quote from Wikipedia, "express the equality of certain ratios between flows and forces in thermodynamic systems out of equilibrium, but where a notion of local equilibrium exists". At the system (organism) level, measures are proposed based on body-part complexity (NS, 05-Feb-1994, p37). Crutchfield and Young (1990) propose yet another way to measure complexity.

The ability of the living cell to reduce the entropy of its innards, at the expense of increasing it externally, can be likened to the S = H + K formula (Zurek (1990)), where H is the Shannon entropy of the given sequence (statistical uncertainty) and K is the algorithmic complexity (algorithmic randomness), with a gradual decrease in the former balanced by a corresponding increase in the latter (but only made possible in an out-of-equilibrium universe, such as ours). Kondepudi (1990) notes that natural selection, and the processes of evolution, act on K, to keep it as low as possible, but that it is only H that allows machines to do work; hence, for Maxwell's demon, E = H·k·T, with H measured in nats and T the temperature of the heat-sink. Blank memory is like a cold sink at absolute zero, with the lowest Shannon entropy, H, even though writing useful memory subsequently increases the algorithmic information content, and reduces the algorithmic entropy, K. Sagawa and Ueda (NS, 14-May-2016, p28) took this further and proposed (borne out by experimental results) that an extra term needs to be added to account for mutual information, and the way that the act of measurement leads to correlation between the system and the apparatus of its memory.
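The two terms can be illustrated with rough, standard proxies: H estimated from empirical symbol frequencies, and the uncomputable K approximated by compressed length. This is only a sketch of the distinction, not Zurek's own procedure:

```python
import math
import zlib

def shannon_entropy_bits(data: bytes) -> float:
    """Empirical Shannon entropy H, in bits per symbol,
    from the observed symbol frequencies."""
    n = len(data)
    counts = {b: data.count(b) for b in set(data)}
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def algorithmic_complexity_proxy(data: bytes) -> int:
    """K itself is uncomputable; the length of a compressed
    encoding is a common practical stand-in."""
    return len(zlib.compress(data))

ordered = b"ab" * 500  # highly structured sequence
print(shannon_entropy_bits(ordered))        # 1.0 bit/symbol: two symbols, equiprobable
print(algorithmic_complexity_proxy(ordered))  # far below the 1000 raw bytes
```

A structured sequence has a short description (low K proxy) even when its per-symbol statistics look uncertain, which is the separation that the S = H + K decomposition is trying to capture.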

It can further be noted that, as microelectronics fab lines can testify, sequential logic hardware exhibits much more complex behaviour than combinational logic does.

For a computer algorithm, latency = (O + D)·τ, while throughput = W/(O·τ), where W = F·d, and F is the level of the programming language or instruction set.

Free-will

The points of bifurcation are inevitably, though not necessarily very usefully, suggested as a possible entry point for our notion of what constitutes free-will. This leads to a discussion of a possible principle of relativity for determinism and free-will. First, though, we need to consider what constitutes Will, free or otherwise.

Looking for a possible definition for Will

This chapter considers a number of experiments involving two beings, A and B (each a conscious robot or human, and traditionally called Alice and Bob), and several objects, X, Y and Z (such as small rocks or parts of a spacecraft), isolated in the depths of space. When A tries to exercise her will in the combined system, A+B+X+Y+Z, she notices that she meets with some resistance. B does not do everything that A wants, but then neither does X (constrained by the laws of physics). We are apt to declare things (including inanimate objects) as having a will of their own when they do not behave as we expected or intended. Perhaps this gives us the basis of what we mean by Will (as distinct from free-will): the ability to push a target system away from its initial position. It might be tempting to wonder about a connection with Schopenhauer's "The World as Will and Representation".

In the system A+X+Y+Z, A can notice a symmetry between her relationship to each inanimate object (X, Y and Z). Moreover, the objects can be considered grouped as a single compound object, XYZ, such as a spacecraft or rubble-pile asteroid, and become her reference, with respect to which all measurements are taken. But then the being, A, also has component parts and limbs (A=h+a+a+t+l+l), and also tools. Her extended phenotype blurs the boundary between being thought of as external objects and being part of the being's being. Finally, if A exerts her will on an object (a bowling ball, a fellow human being) by rolling it up to the top of a hill, to watch how it rolls back down again, that object becomes part of her extended phenotype.

Thus, we can conclude that A exerts her will on a long-case clock when she winds up its weights. But then, does this mean, by extension, that we think of those weights as exerting their will on the pendulum, to prevent it from slowly gliding to its rest position? Perhaps the definition of Will is more nuanced than this. The falling-weight part of the long-case clock merely acts to maintain the status quo on the pendulum part, but perhaps we are more concerned with the ability of an energy flow (from the hot bath to the cold bath) to build new structure, such as convection currents and braid plains. Indeed, one proposal for a fourth law of thermodynamics takes the form of a 'principle of increasing complexity'.

Bifurcation

Points of bifurcation are the seamless branch points in the current trajectory of the system, where either one path or the other could have been chosen. If there is a cause-and-effect event that occurs in the brain as a result of an expression of free-will, it has to result in a blurred, smoothed-out change. Some wonder whether these could be the delicate, highly sensitive places where some otherwise external, ephemeral effect could exert an influence; however, this would merely move the problem, and so is not particularly useful. Meanwhile, with Nahmias' development of Libet's experiments from the 1980s (NS, 27-Sep-2014, p11), we instinctively think of it as being some sort of instantaneous decision-making process: either I have decided to move my wrist, or I have not. But this cannot be the case. Complex ideas, including decisions as to whether to go for one or two marshmallows, must involve such large expanses of neural network that even electromagnetic radiation takes a finite time to cross them, let alone neural signals. Even putting this aside, the universe (at least the classical one) must work with analogue functions that gradually build up, rather than digital ones that switch instantaneously between representing a zero and a one. A cause (like closing an electrical switch, or pushing on a brass lever in a Babbage-like machine) must lead to a gradual, smoothly ramping-up effect, as the electrons and atoms accelerate. There cannot be a sudden discontinuity in the displacement, or the velocity, or the acceleration, or the rate of change of the acceleration, or the rate of change of the rate of change of the acceleration, and so on indefinitely up to the nth-order differential.
Interplanetary transport networks (NS, 25-Mar-2006) steering between Lagrange points (NS, 21-Feb-2009) can be visualised, too, for electrons teetering between taking one path or another: to change (or not) the bit stored in the memory cell, or to take the conditional branch (or not) in a computer program. Similarly for a pencil that is momentarily balanced on its point, being nudged from one path to another, in a seamless way, by a system that an outside observer might otherwise have labelled as displaying chaotic behaviour.

Flat functions (non-analytic smooth functions) such as {IF(t>0 AND ClutchEngaged): exp(-1/t); 0} and Friedrichs mollifiers are ways of modelling this smoothed-out transition, with a piecewise approach to handle the point of bifurcation. Curiously, though, in the case of the car-clutch example, it is the deterministic system (the mechanics of the car) that has the flat-function properties attributed to it, not the system that supposedly has the overriding will (the human driver). So, all that flat functions have succeeded in doing is indicating where the problem lies. It is a bit like putting down marker flags in a minefield: nothing has been defused yet, but at least we have pinpointed where the explosives are located. However, the flat function does also correctly capture the constraint that no part of the transition, even a smooth and continuous one, can start before t0, since the effects cannot anticipate the cause: it has no possible Taylor expansion. Indeed, an agent with overriding will is not in fact relevant here; even in a driverless car, the components on both sides of the clutch do not know, in advance, that the car is about to pull away. Similarly for a snooker ball about to be hit (and equally for the one that is about to do the hitting); at the moment of collision, our analytical tools consider them to be a single system, and their respective states to continue to be an evolution from that.
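A minimal sketch of such a flat function, dropping the ClutchEngaged condition for simplicity: it is identically zero up to the bifurcation point t = 0, yet ramps up smoothly afterwards, and no Taylor expansion taken at t = 0 can anticipate the transition, since every derivative there is zero.

```python
import math

def flat(t: float) -> float:
    """Flat function: 0 for t <= 0, exp(-1/t) for t > 0.
    Smooth everywhere, yet all derivatives at t = 0 vanish,
    so the effect cannot be anticipated before the cause."""
    return math.exp(-1.0 / t) if t > 0 else 0.0

# Exactly zero before and at t = 0; a smooth, gradual ramp-up after.
print([flat(t) for t in (-1.0, 0.0, 0.001, 0.1, 1.0)])
```

The values just after t = 0 (e.g. exp(-1000) at t = 0.001) are astronomically small, which is what makes the transition seamless to any finite-precision observer.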

Looking for a possible definition for Free-will

In the human-centric view, the pinnacle of the process of self-organising complex structures is the evolution of those that led to the emergence of consciousness. Perhaps related to the hard problem (NS, 29-Jan-2022, p48), it is not clear what it is that gives our feeling of having free-will (NS, 21-May-2016, p32) with the possible implication that it might be connected with what gives us a feeling of the flow of time (NS, 04-Feb-2017, p31) and, although none of the interpretations capture it (NS, 02-Nov-2013, p34) it is presumably connected to the laws of thermodynamics.

Meanwhile, for free-will, science has observational data, from many thinkers, even in metaphysics, over the millennia, to draw on, but is forever seeking a way to piece those observations together. 'Free-will' could be the name given when each entity is a component of an encompassing society whose persistence depends on the individual "I will"s having to give way, when appropriate, to a top-down "thou shalt" (for the communal good), thereby leading to a view in which the individual is not the level of paramount importance. Even a society that is purely deterministic could set up a judicial system to eliminate wayward elements, as part of its persistence mechanism.

Events can either be caused (by previous events) or be spontaneous (random, uncorrelated), and our intuitive notion of free-will cannot be the result of either (NS, 12-Jun-2021, p28). Free-will (NS, 03-Sep-2016, p35) cannot be the opposite of deterministic behaviour, but must instead be a rich, chaotic behaviour on the boundary between deterministic behaviour and spontaneously random behaviour (NS, 18-Apr-2020, p25). The internal mechanisms of the thing that reputedly has the free-will evaluate the various alternatives, with a view to choosing one, and to executing the actions that deliver the outcomes associated with that choice. Free-will could be just the name that we give to behaviour that is dominated by internal interactions (internal causes) within the system (NS, 06-Apr-2019, p34), measurable using Tononi's integrated information theory. Between changes of speed or direction, all of the actions of the components of a clock (or, more markedly, of a car with a clutch that is about to be engaged) are smooth, continuous, and deterministic, following the usual laws of motion. Likewise, for a human mind, immersion is the feeling that the tool (the blind man's cane, the tennis pro's racquet, the driver's car, the organism's own body parts) has become part of the extended self (NS, 12-Dec-2020, p42). Maybe this kicks in when the expressions of free-will, via piecewise discontinuities, are imposed on the components so frequently that the periods of continuous behaviour are vanishingly short; a view that would not be inconsistent with Libet's observations (NS, 11-Aug-2012, p10).

Principle of relativity for determinism and free-will

The more deterministic the hardware of the universe is (Den83: NewSc), the more able its organisms are to choose how to avoid the inevitable (thereby making it non-inevitable, after all).

Panpsychism proposes that consciousness lies on a continuum (NS, 20-Nov-2021, p42; NS, 02-May-2020, p40) with the human mind defined to be unity, and rocks and subatomic particles down close to zero, but not quite zero (NS, 02-Apr-2022, p38).

Since the hardware of the universe is not quite perfectly deterministic (though it is, even so, much closer to that than to being perfectly random), the act of copying will never be 100% accurate, and evolution will inevitably arise.

Maybe there is a principle of relativity that can be applied here (if free-will and physical laws are as indistinguishable as gravity and acceleration are to an observer in a windowless spaceship (Sch79)). And freedom, at least, is indeed relative, as an affluent first-world voter ought to admit to a starving third-world peasant.

In the experiment at the start of this chapter, it would be harder for A to be a solipsist than in an experiment (2) with (A+B) alone, where each can consider the other as an object, like (A+X), since A observes that B appears to exhibit freedom relative to X, Y and Z. But X, Y and Z are also constrained by the laws of physics; free-will and physical laws are as hard to tell apart as acceleration and gravity (syntactic information and semantics?). It would be easy for A to be an animist, or even a pantheist.

The being (A), all alone, has no way of knowing that she has any freedom. But if, even in REM sleep, the being 'thinks, and therefore is', then, without needing to look outside the carriage, she knows and feels the force of her own free-will. Moreover, always the subject, never the object, consciousness is not observable from outside the carriage, looking in. But free-will, of the objective type, is indeterminable: there is no 'outside of the carriage' at which to look. Even another being cannot tell for her, since he is in the same carriage. Consciousness in others is assumed by animism and by extension: the similarity of other human beings to ourselves, in body and behaviour.

Consciousness

The model of thinking, including deductive logic, that we have maintained since the ancient Greeks might need to be superseded (NS, 27-Feb-2016, p34). There is even a suggestion that the scientific method might be too stringent, and that we might need to entertain theories that will always be beyond experimental testing (NS, 27-Feb-2016, p38). Indeed, there are already problems with replicating published experiments, and an associated difficulty in publishing papers that report negative results (NS, 09-Apr-2022, p45). Moreover, it is noted that scientists actually spend most of their time building up the weight of confirming evidence, rather than applying the scientific method and looking for contradictory cases (NS, 10-May-2008, p44).


Consciousness is perhaps just an impression that the subconscious brain concocts to give it a survival advantage (NS, 15-Aug-2015, p26; NS, 07-Jul-2007, p36), as a sort of cognitive prosthesis (NS, 07-Sep-2013, p28). This would be compatible with Libet's experiments (NS, 11-Aug-2012, p10), and might also explain the tendency of the mind to entertain notions of pantheism, and post-event rationalisation. Consciousness might be just a shortcut that has evolved for handling data compression (NS, 25-Nov-2017, p44), or a model that the brain maintains of its own operation (NS, 21-Sep-2019, p34) as the control system for the body. The brain needs an internal model (implicit or explicit) of each of the things it is controlling; the phenomenon of 'phantom limbs' reported by some amputees might imply that we all have these phantoms, but that we tend not to notice them when all is normal, and that they are the brain's normal internal representations of those body parts. The brain needs to control itself, too (to optimise resource allocation, via some sort of 'focus of attention'), so consciousness could be the brain's internal representation of itself in this role (NS, 10-Jul-2021, p34). This would go a long way towards explaining the so-called 'hard problem', since the brain would be ascribing the feelings of red, or bitter, or pain, or happiness, to this internal model; and it might also explain why we find the homunculus idea so appealing in all our thinking about brain operation. This also argues against the possibility of having consciousness in a disembodied brain (NS, 27-Jun-2020, p28). In the extreme, Boltzmann brains (NS, 18-Aug-2007, p26; NS, 28-Apr-2007, p33) are hypothetical self-aware entities in the form of disembodied spikes in space-time (more common in regions of high entropy than low entropy), though Sean Carroll presents a counter-argument to their possibility (NS, 18-Feb-2017, p9).

Rather than our brains analysing incoming signals, finding patterns of ever-increasing complexity, and making sense of them by matching them against internal representations, it is the other way round (NS, 08-Jun-2019, p38; NS, 09-Apr-2016, p42, and also p20): our brains generate the anticipated sensory data to match the incoming signals, using internal models of the world (and body). This gives rise to multiple hypotheses, with the most probable one becoming tagged, Dennett-like, within the 'distributed self', as the one that will be considered to be our perception, using a type of Bayesian analysis (NS, 31-May-2008, p30). Dreams, hallucinations and tinnitus can therefore be considered to be signs of a correctly working brain in the absence of sufficient sensory input (NS, 05-Nov-2016, p28). Such Bayesian updating (NS, 26-Sep-2020, p40) also has parallels to the scientific method (coming up with models to explain the observed data, actively setting out to observe new data, and keeping the model no more complicated than necessary), being just the way that the human mind has been working, naturally, anyway. Each hypothesis can then be refined in the light of the error signals that are generated, using a process of 'prediction-error minimisation' (NS, 04-Sep-2021, p44). The mere repeated occurrence of this process might also be what gives the brain a continuous assurance of its identity (NS, 03-Sep-2016, p33).
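The Bayesian-updating step described above can be sketched minimally: each internal model (hypothesis) is reweighted according to how well it predicted the incoming sensory signal, with the most probable one becoming the perception. The names and numbers here are illustrative only:

```python
def bayes_update(priors, likelihoods):
    """One step of Bayesian perception: multiply each hypothesis's
    prior probability by the likelihood it assigned to the incoming
    sensory datum, then renormalise to obtain the posteriors."""
    unnormalised = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnormalised)
    return [p / total for p in unnormalised]

# Two internal models of the world; the second predicts the datum better.
priors = [0.5, 0.5]
likelihoods = [0.2, 0.8]  # P(observed signal | hypothesis)
posteriors = bayes_update(priors, likelihoods)
print(posteriors)  # posterior shifts towards the better predictor
```

Repeating this step datum by datum, with the posterior of one step becoming the prior of the next, is the loop that 'prediction-error minimisation' refines.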

Brainstorming

When a person dies, we can count up the mass that they leave behind, along with any charge and spin. Each is just a simple number. What about syntactic information, and semantic information? There is affinity within the extended phenotype (I think of my arms and legs as being part of me, and likewise my home and tools). These have been acquired gradually through the person's lifetime, sorting and filtering what to gather and what to discard. Brainstorming might involve word-association for syntactic information, and idea-association for semantic information (including by simile, metaphor, analogy, model, and allegory), followed up by curation.

On to the next chapter or back to the table of contents.

© Malcolm Shute, Valley d'Aigues Research, 2006-2024