There are three functions common to all computers, imperative or declarative, that have ever been built, designed, or merely contemplated: processing, memory and communications. Each can be characterised by measures of latency and throughput (where the throughput tends to be called the bandwidth, in the case of communications, and the latency tends to be called the execution time, in the case of processing). Communication involves the transfer of information over space, and memory involves the transfer of information over time (albeit only in the forward direction). It is not surprising, therefore, that the two are similar, and are analysed in the same way in Shannon information theory: represented on a "T" diagram, with input data coming in from the left of the diagram, output data leaving from the right, and noise being injected by the medium from the third arm. It is tempting to wonder whether processing, too, can be considered in the same way, as a type of communication, though noting that processing is the odd one out, since the other two are already united in their reference to the four dimensions of space-time.
The three types of implementation of memory can be categorised as: communications-based (delay line), pure-memory-based (decay law), and processing-based (regenerative gating). For the first two, unintentional losses, and divergent collisions with molecules of the environment, lead to an exponential decay law and a corresponding time constant; for the last, the unintentional losses are compensated for by energy input, and the system still converges on a stable state.
Delay-line memory is an example of how communication in space can be turned into memory simply by looping the communication channel back to the transmitter, and demonstrates the intimate relationship between the way we measure time, and the entropy increase that is involved in memory storage. Similarly, communication in space can be implemented by decay-law memory. For example, ink marks on a sheet of paper can be physically transferred in the postal system, and USB memory sticks can be carried to a new destination.
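The loop-back can be sketched in a few lines of Python (a toy model under stated assumptions, not any particular historical delay line): the channel is a fixed-latency FIFO, and memory is achieved simply by wiring its output back to its input, so the stored pattern survives only by perpetually circulating.

```python
from collections import deque

def delay_line(bits, delay, steps):
    """Toy delay-line memory: a channel of latency `delay` whose output
    is looped straight back to its transmitter.  The pattern is 'stored'
    only by circulating round the loop (assumes len(bits) <= delay)."""
    line = deque(bits + [0] * (delay - len(bits)), maxlen=delay)
    for _ in range(steps):
        line.append(line[0])  # the bit emerging at the far end is re-sent
    return list(line)
```

Every `delay` steps the pattern returns to its starting position; reading the memory is a matter of waiting for the right moment in the cycle, which is one reason delay-line machines tied their instruction timing to the line itself.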
The wiring Z = NOR(X, Y) results in the loss of information (if Z is 1, the values of X and Y are known precisely, but if Z is 0, there is only a 1-in-3 chance of guessing the correct values of X and Y), yet the SR flip-flop circuit "Qbar = NOR(Q, S); Q = NOR(Qbar, R)" uses this to implement a standard memory cell. It also shows that the laying down of memory, the direction of the processing, and the information destruction of Landauer's principle are all consistent.
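A minimal simulation of that circuit, with the gate delay abstracted into a few settling iterations (the function names and the `settle` parameter are illustrative, not part of the text's circuit):

```python
def nor(x, y):
    # Z = NOR(X, Y): Z = 1 pins X and Y to 0; Z = 0 leaves 3 possibilities
    return int(not (x or y))

def sr_latch(s, r, q, qbar, settle=4):
    """One clockless update of the cross-coupled pair
    Qbar = NOR(Q, S); Q = NOR(Qbar, R),
    iterated a few times so the feedback loop settles."""
    for _ in range(settle):
        qbar = nor(q, s)
        q = nor(qbar, r)
    return q, qbar

q, qbar = sr_latch(1, 0, 0, 1)      # set:   S=1, R=0  ->  Q becomes 1
q, qbar = sr_latch(0, 0, q, qbar)   # hold:  S=0, R=0  ->  Q stays 1
q, qbar = sr_latch(0, 1, q, qbar)   # reset: S=0, R=1  ->  Q becomes 0
```

The information destroyed by each NOR gate is continually compensated by the feedback and the implicit energy input, which is exactly the "processing-based (regenerative gating)" category above.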
Communication in time is memory, but communication in space is just communication, with a characteristic v = dx/dt. With a servo-motor, that v can be suddenly brought to zero, or reversed, to put the read-head at the chosen position. With processing, the power of the computer, or work done per second, is measured in instructions per second (like distance per second) times the force of the average instruction (the language level of the instruction set). Like the servo-motor, though conventionally run at top speed in the forward (convergent) direction, it can in principle suddenly be brought to zero, or reversed into the backward (divergent) direction. The convergent direction is used to reduce masses of raw data down to headline summaries on PowerPoint slides, or a list of integers into a sorted list (theoretically a reversible process, since no integers are destroyed), or lumps of radioactive metal into decay products. (Nuclear reactions are easier to engineer to run backwards than chemical reactions, since it is not necessary to collect all the ingredients back again: antiparticles can simply be emitted instead. However, it does still involve the convergent process of collecting in dissipated heat, and concentrating it back in the atomic nuclei.)
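The reversibility of sorting can be made concrete: if the convergent step also records the permutation it applied, no information is destroyed, and a divergent inverse step can reconstruct the original list. A sketch (the function names are illustrative):

```python
def sort_reversibly(xs):
    """Convergent step that destroys nothing: sort, but also keep the
    permutation that was applied."""
    perm = sorted(range(len(xs)), key=lambda i: xs[i])
    return [xs[i] for i in perm], perm

def unsort(sorted_xs, perm):
    """Divergent inverse step: scatter the sorted values back to their
    original positions."""
    xs = [None] * len(perm)
    for pos, i in enumerate(perm):
        xs[i] = sorted_xs[pos]
    return xs
```

Discarding `perm` is exactly the information-destroying step: many possible inputs collapse onto one sorted output, which is where Landauer's cost would be paid.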
The wheel's cycle is achieved by one spoke-like lever after another. A heat-engine's job (to get from A to B, or to raise a load to height h) is achieved by multiple cycles of the component wheels (cranks and cams included). A computer's job (to sculpt away the extraneous information) is achieved by multiple heat-engine jobs.
Biological tissue is composed of heat-engines, lifting loads from high-entropy points to low-entropy points. A biological organ is composed of multiple tissues, each building a separate pile. A biological organism is composed of multiple organs, each building a separate structure (along the lines of Deacon's teleodynamic systems).
| | Communications | Memory | Processing |
|---|---|---|---|
| Implemented by Communications | Communications | Since relativity and TD2 ensure that spatial communication is also a transfer in time, a delay-line loop can implement memory | Cannot be done, since passive filters, alone, cannot make an active one |
| Implemented by Memory | The information is first stored within a physical device (a sheet of paper, a memory chip or floppy disk) and then carried to its destination | Only dynamic memory is pure memory (capacitive DRAM, EEPROM, Flash, inductive core or bubble); the others are but engineering emulations | Cannot be done, since passive filters, alone, cannot make an active one |
| Implemented by Processing | The side-effect of getting information from A to B, provided that it is configured as a passive filter, not changing the data that is passing through (the semantic input is set to 1, or 'Id') | Since forward and backward computation both have positive execution times, a level-restoring double NOR-gate loop, with parasitic capacitance, can implement memory | Processing |
The communication theorists' coat of arms, taking on the shape of a T, can be treated as a block in a circuit diagram, with a stream of 1s and 0s coming from the input being merged, in an exclusive-or gate, with a stream of 1s and 0s coming from the noise input.

    [Input] --->--- [channel] --->--- [Output]
                        ^
                        |
                     [Noise]
Shannon's great insight, though, was to abstract away from this. The input can be represented just as a 2x1 matrix of the probabilities of a 1 occurring, or a 0, and the noise input by a 2x2 matrix of the probabilities of a 1 being corrupted to a 0, or vice versa, or of either going through unchanged. The operation represented by the block at the node of the T is then matrix multiplication.
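That abstraction can be sketched directly, assuming for illustration a binary symmetric channel, where a single crossover probability stands in for the noise input:

```python
def channel_output(p_input, p_flip):
    """Shannon's abstraction of the noisy block: the 2x1 input
    distribution (P(0), P(1)) multiplied by the 2x2 matrix of a binary
    symmetric channel with crossover probability p_flip."""
    p0, p1 = p_input
    # Entry [i][j] is the probability that input symbol j emerges as i.
    m = [[1 - p_flip, p_flip],
         [p_flip, 1 - p_flip]]
    return (m[0][0] * p0 + m[0][1] * p1,
            m[1][0] * p0 + m[1][1] * p1)
```

At the bit level this is the exclusive-or of the data stream with the noise stream; the matrix form abstracts away which particular bits were flipped.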
The Shannon "T" diagram can be extended to "TTT", where the middle "T" still represents the point at which noise is injected by the medium, but the first "T" injects a transformation (G) on the input data, and the last "T" injects the inverse transformation (H) to reconstitute the original data (Moser and Chen, 2012).
Traditionally, over a low-noise medium (on the second "T"), Gl can implement data compression, and Hl its inverse, decompression; while over a high-noise medium, Gh can add a redundant coding, and Hh its inverse, decoding. So perhaps processing can be modelled using a similar diagram, via the G or H function. Since the action of a computer is to reduce information content, somewhat akin to Michelangelo chiselling out the redundant information to reveal the inner sculpture, we could either model the microcode of the processor as Gl in the low-noise medium, or as Hh in the high-noise medium. The former would have Hl as some sort of anti-processing function, able (hypothetically) to beat the Turing Halting Problem. The latter model would have Gh as some sort of obfuscating process. An example might be the universe obeying very simple laws, like the principle of least action (Coopersmith 2017), but presenting very complicated emergent behaviour to us on the first "T", with Tycho Brahe's observations at the second "T", and Kepler's tomographic processing of the data back down to simple laws at the last "T".
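As a toy stand-in for the low-noise pair Gl/Hl (run-length coding here is purely an illustrative choice, not anything from Moser and Chen), the key property is that Hl exactly undoes Gl:

```python
def compress(bits):
    """A stand-in for Gl: run-length encode a non-empty bit string."""
    runs, count = [], 1
    for prev, cur in zip(bits, bits[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append((prev, count))
            count = 1
    runs.append((bits[-1], count))
    return runs

def decompress(runs):
    """Hl, the exact inverse of Gl: expand the runs again."""
    return ''.join(bit * count for bit, count in runs)
```

A lossless Gl/Hl pair like this commutes exactly; the question raised in the text is whether the processor's microcode has any such inverse at all.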
One question that this then poses is whether it is necessary to consider the inverse transformation, even if it is not used in general practice. This would only be possible if the CPU is implemented as a reversible computer, using Fredkin gates rather than information-destroying NAND and NOR.
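The Fredkin (controlled-swap) gate illustrates why: it maps three bits to three bits, discarding nothing, so it is its own inverse. A minimal sketch:

```python
def fredkin(c, a, b):
    """Fredkin (controlled-swap) gate: swap a and b when the control c
    is 1.  Three bits in, three bits out: nothing is discarded."""
    return (c, b, a) if c else (c, a, b)

# The gate is its own inverse, so every computation built from it can
# be run backwards -- unlike NAND or NOR, which map several input
# combinations onto one output.
for triple in [(c, a, b) for c in (0, 1) for a in (0, 1) for b in (0, 1)]:
    assert fredkin(*fredkin(*triple)) == triple
```

With constants wired in it is also universal: fredkin(c, x, 0) yields c AND x in its third position, so no expressive power is lost by insisting on reversibility.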
Both Shannon and Turing abstracted away from the details of the implementation. A channel carries information, regardless of what that information might be (and there are no violins inside the jukebox); a computation proceeds through the steps of the algorithm, regardless of the hardware that is being used. The locomotive could be transporting information, in its train of carriages, from Paddington to Temple Meads; working as a UTM, though, that information never arrives there, and is merely shuffled up and down the Turing tape. So perhaps processing is not a type of communication, after all.
If the information in the carriages is used en route, it becomes part of the state machine. Analogous to Kirchhoff's current law, if more information is flowing in to a point than is flowing out, the surplus must either be going in to storage or else leaking out as dissipation. (If more information is flowing out of a point than is flowing in, the surplus must either be coming out of storage, and erased from there, or else leaking in from the outside; either way involving the in-flow of energy.)
At the next MC level on from this, are we talking about semantics, meaning, empathy, beliefs, consciousness? Perhaps it is simply a matter of arranging for the steam locomotive on the track to author a story. Stories acquire new information each time they are retold (Alan Turing sipping the froth of his glass of beer, Kate Bennett saying "save your breath to cool your porridge", Dawkins on the use of "virgin" in place of "maiden"), just as a pencil balanced on its point acquires new initial-condition, symmetry-breaking information as it starts to fall. Jane Austen dramatisations are presented as nice and rosy stories, rather than as aggressively biting observations of people, and orchestrations of Bohemian Rhapsody as sweet gentle tunes.
Non-commutativity will arise when there are side-effects involved. (This is not a necessary condition, though, since symmetry can lead to non-commutativity in the sequential quadrant rotations of an object about its x, y and z axes.)
In by-gone days, I wrote my files to floppy disk at night, communicating them to the future, to read back the next day (perhaps on another computer). When I read my files from floppy disk the next day, to write back that night (having edited or otherwise modified them), was I communicating in some way? I would say "I am editing" or "I am compiling" or "I am processing". "To edit" is to merge two streams of data: the old version of the syntactic information from the disk, and the incremental semantic changes within my head. "To compile" is to merge the syntactic source code information (already on the disk) with semantic rewrite rules embodied by the compiler. "To process" is to merge the syntactic input file with semantic rewrite rules embodied by the instruction set.

Processing is necessary for presenting information (such as tables, graphs, lists, and diagrams); syntactic compression, too, for transmission or storage. The informed user sends information to the uninformed user, each in a different mind-set: one has raw data, and has to work out the results (maybe in his head); the other is given the results, ready and all worked out. Processing is necessary to establish relevance. Michelangelo still needed to chip away the unwanted stone, to reveal the already-existing statue. You want to learn about the Cretaceous period? Here is a mountain of limestone layers. You want to learn how to use your HP Vectra PC? Here are the circuit diagrams and object code. You want to learn about parquet flooring? Here is an un-indexed pile of back-issues of DIY magazines. You want to learn about the Norman Conquest? Here are the current positions and velocities of all the particles in the universe. This is related to the question of whether chaotic outcomes, such as the value of the next pixel in a Mandelbrot set, are pre-computable like the outcome of a coin being tossed.
| | Communications | Memory | Processing | Contextualisation | Empathy |
|---|---|---|---|---|---|
| Speech and I/O | passive | topological (delay line) | | | |
| Writing and storage | active | passive | topological (mapped on the page) | | |
| Processing and CPU | | active (regenerative) | passive | topological | |
| Semantics and Next MC | | | active (Platonic space) | passive | topological |
| Empathy and MC after that | | | | active | passive |
Speech: an invention between the lever and the wheel; for communicating information (communicating it in space). Writing (and later printing): an invention between the wheel and the heat-engine; for communicating information, and holding it in memory, too (communicating it in time, albeit only forwards). The telephone (and later email): an invention between the heat-engine and the computer; for communicating information, plus memory and processing (with deductive logic in the telephone exchange). The Semantics System: an invention between the computer and the next MC; for communicating, memorising and processing information, and something else: contextualisation, perhaps. The Empathy System: an invention between the next MC and the MC after that.
Human speech is passive communication: the message appearing over time (Table above). Human speech is topological memory: folk history passed on by delay-line word-of-mouth. Human writing is active communication: written communications are more concrete than spoken ones, and written contracts more so than oral ones. Human writing is passive memory: ink on paper has a time constant. Human writing is topological processing: the successive states of a computation mapped out across the page (Table below). Human processing is active memory: regeneratively repeating something over and over while it is still needed. Human processing is passive processing: by the same token as the naming of passive communication and passive memory. Human processing is topological contextualisation: whatever the next MC does. The human equivalent of the next MC is active processing: mathematicians discover truths in Platonic space, perhaps. The human equivalent of the next MC is passive contextualisation: whatever the next MC does. The human equivalent of the next MC is topological empathy: whatever the MC after that does.
In speech, it is the human mind that provides the memory, passing down a story by word-of-mouth. In writing, it is the human mind that provides the processing. In computing, it is the human mind that provides the semantics, appreciated by the programmer, not the computer. The value '6', in 'CELL', is ungrounded without the external context: is it six dollars, six balls in a cricket over, six sides of a polygon, the sixth time round the loop, an RTT instruction, or something else?
| | a | b | r |
|---|---|---|---|
| | 33932 | 68034 | |
| 1: r := a mod b | | | 33932, 170, 102, 68, 34, 0 |
| 2: a := b | 68034, 33932, 170, 102, 68, 34 | | |
| 3: b := r | | 33932, 170, 102, 68, 34, 0 | |
| 4: IF b > 0 THEN GOTO 1 | | | |
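The same four-line program runs directly in Python, transcribed line for line; the values taken by r match the trace in the table:

```python
def gcd(a, b):
    """Euclid's algorithm, following the four-line program above; for
    a = 33932, b = 68034 the successive values of r are
    33932, 170, 102, 68, 34, 0, leaving the answer 34 in a."""
    while True:
        r = a % b        # 1: r := a mod b
        a = b            # 2: a := b
        b = r            # 3: b := r
        if not b > 0:    # 4: IF b > 0 THEN GOTO 1 (fall through: HALT)
            return a
```

Note that the first pass, with a smaller than b, simply swaps the two values, exactly as the table's first cycle shows.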
This rolls out in time as follows. The column on the left contains the contents of the program counter (which might or might not be memory-mapped), cycling six times over the labels L1, L2, L3 and L4. Each cell can either show no change in its value, or a single new value written in. The four instructions are listed horizontally along with a, b and r (assuming a stored-program architecture), just as an array of variables would be depicted, indexed by the program counter, but with values that are never changed (since there is no self-modifying code).
| PC | a | b | r | L0 | L1 | L2 | L3 | L4 | L5 |
|---|---|---|---|---|---|---|---|---|---|
| L1 | 33932 | 68034 | | NOP | r := a mod b | a := b | b := r | IF b > 0 THEN GOTO L1 | HALT |
| L2 | | | 33932 | | | | | | |
| L3 | 68034 | | | | | | | | |
| L4 | | 33932 | | | | | | | |
| L1 | | | | | | | | | |
| L2 | | | 170 | | | | | | |
| L3 | 33932 | | | | | | | | |
| L4 | | 170 | | | | | | | |
| L1 | | | | | | | | | |
| L2 | | | 102 | | | | | | |
| L3 | 170 | | | | | | | | |
| L4 | | 102 | | | | | | | |
| L1 | | | | | | | | | |
| L2 | | | 68 | | | | | | |
| L3 | 102 | | | | | | | | |
| L4 | | 68 | | | | | | | |
| L1 | | | | | | | | | |
| L2 | | | 34 | | | | | | |
| L3 | 68 | | | | | | | | |
| L4 | | 34 | | | | | | | |
| L1 | | | | | | | | | |
| L2 | | | 0 | | | | | | |
| L3 | 34 | | | | | | | | |
| L4 | | 0 | | | | | | | |
| L5 | | | | | | | | | |
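The table can be generated mechanically. The sketch below is a minimal interpreter for the four-instruction machine; the trace format, one tuple of (next label, a, b, r) per instruction executed, is an illustrative choice, not part of the text:

```python
def run(a, b):
    """Minimal interpreter for the four-instruction machine above: the
    PC cycles over L1..L4 until the branch at L4 falls through to L5
    (HALT).  Returns the final a and a trace with one
    (next label, a, b, r) tuple per instruction executed."""
    r, pc, trace = None, 1, []
    while True:
        if pc == 1:
            r, pc = a % b, 2         # L1: r := a mod b
        elif pc == 2:
            a, pc = b, 3             # L2: a := b
        elif pc == 3:
            b, pc = r, 4             # L3: b := r
        elif pc == 4:
            pc = 1 if b > 0 else 5   # L4: IF b > 0 THEN GOTO L1
        if pc == 5:
            break                    # L5: HALT
        trace.append((f'L{pc}', a, b, r))
    return a, trace
```

Running run(33932, 68034) reproduces the six cycles of the table, row by row, ending with a = 34 and b = 0 at the HALT.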