Taking as its starting point the incompleteness results of Goedel and Chaitin, and the Heraclitean view of existence as flux, or a continuously changing process, process physics attempts to model fundamental reality as a Heraclitean Process System (HPS) rather than as the behaviour of a set of discrete objects.
Physics to date has largely followed the intuition of Democritus: that the universe is composed of fundamental particles with defined spatial positions. This has led to very successful geometrical models (Galileo, Newton) of physical behaviour, culminating in Einstein's General Relativity.
However, these geometrical models have in common that they cannot account for the difference between past and future events, or for the uniqueness of the present. To see this, we need only look at the equations of motion of any of those models. They are static functions; that is, you can draw them as lines on a graph with time as one of the axes, and no one point on the line is different in kind from any other - there is nothing to distinguish the 'now' from the other points on the line. This has led some to assert that the universe is completely deterministic: in 4-D spacetime, the future 'already' exists.
The formulation of quantum theory, while allowing non-determinism through quantum randomness, gives rise to a different problem. The motion of particles is treated as the evolution of a wave function, but the 'size' of the wave function can be significantly larger than that of the particles whose motion it is meant to model. This was interpreted to mean that the wave function gives a probability distribution for particle events (for example, the triggering of a detector, e.g. a CCD in a two-slit experiment). But nowhere in the formalism is there a description of these events - it deals only with the wave functions.
These problematic events were swept under the carpet by the Copenhagen interpretation, which invoked an extra-systemic element (the observer) to account for them.
As if these problems weren't enough, incompleteness results (such as those of Goedel, Turing and Chaitin) have established that any formalism of sufficient richness to support self-referential statements will necessarily admit of 'random' truths which are not theorems of that formalism. If we wish to describe physical behaviour by means of a formalism (and this is the direction taken by two and a half millennia of physics), then these 'random' truths may be regarded as uncaused physical facts - something of an embarrassment for a hard science!
In contrast to these approaches, process physics attempts to "model reality as self-organising relational information [... taking] account of the limitations of formalism or logic by using the new concept of self-referential noise."
The self-referential noise (SRN) concept seems to be based on the idea that a self-organising system (cf. Prigogine), even if closed, may give rise to intrinsic randomness. This counter-intuitive idea is justified by the Goedelian perspective that truth cannot be represented by finite means inside a self-referential system, and by Chaitin's demonstration that even arithmetic is subject to fundamentally irreducible randomness.
Taking the view that arithmetic - even numbers themselves - is an emergent phenomenon, since numbers are conceptually founded on the flawed and posterior notion of objects, process physicists turn this into an asset and choose to encapsulate the phenomenon as SRN, seeing it as the basis for quantum indeterminacy and for the very contingency of contingent truths: the 'measurements' of the Copenhagen interpretation.
Process physics, therefore, seeks to achieve universality by modelling fundamental reality using a fractal (i.e. scale-neutral) process-space taking the form of a directed graph, whose natural measure is the connectivity of pairs of its nodes. The elements themselves (called monads, after Leibniz) are simply similar directed graphs. The fractal nature of these graphs is imparted through their generation by a non-linear iterative process. The SRN appears as a noise term in the iteration.
This represents an attempt to side-step the problem of fundamental constituents (Democritus' atoms, if you like) or, as they put it, to 'bootstrap' their description of the universe. The focus on the structure of the fractal graph (a directed graph whose every node is itself a directed graph) is intended to allow the requirement for objects (the 'nodes') to drop out of consideration.
Oo-o
/||
o o-O
/ \|/
/ O
o-O o / \
|/ / \ O-o
O-o---/---------\----------------o-O
o/|\ / O /O |
Oo-O-O /|\ o-|-o
\ o--Oo / O
\ | |/\ /
\ O O \ /
\ / \ /
\ / \ o /
O-O / o+Oo
/|O-o o
O o_\! OOo
O--------------o-/o+o
O |O
| o
o
A crude ASCII representation of part of a fractal tree-graph.
The apparent nodes (o,O) are actually similar structures.
I am by no means qualified to summarise (let alone assess!) the maths involved, but will attempt to give my rough understanding/paraphrase of how the details are worked out.
Some notation: B_ij is taken as a real number representing the connectivity ('relational information strength') between the two monads i and j. It is stipulated that

    B_ij = -B_ji

(anti-symmetry), so that B_ii is guaranteed equal to 0. This is so that self-connections, which should properly be part of an individual monad, are not taken into account at the wrong level.
The iteration

    B_ij --> B_ij - a(B^-1)_ij + w_ij,    i, j = 1, 2, ..., 2M,  M --> infinity,

starting with B ~= 0, is used to generate the fractal graph. Here (B^-1)_ij denotes the ij-th element of the matrix inverse of B. The noise term w_ij represents an 'independent random variable for each ij pair and for each iteration.'
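As a reader's aid only, here is how I would transcribe the iteration into runnable form. The matrix size N (standing in for 2M), the constant a, the iteration count, and the Gaussian choice for the noise are all illustrative assumptions on my part, not values from the paper:

    import numpy as np

    N = 20          # number of monads (the paper's 2M; even, so B is invertible)
    a = 0.01        # iteration constant - a hypothetical value
    rng = np.random.default_rng(0)

    def noise(n, scale):
        # w_ij: an independent random value per (i, j) pair, per iteration,
        # antisymmetrised so that w_ij = -w_ji and w_ii = 0.
        w = rng.normal(0.0, scale, size=(n, n))
        upper = np.triu(w, k=1)
        return upper - upper.T

    # Start with B ~= 0: a tiny antisymmetric seed, kept nonzero so that
    # the matrix inverse exists.
    B = noise(N, 1e-3)

    for _ in range(100):
        # B_ij --> B_ij - a(B^-1)_ij + w_ij
        B = B - a * np.linalg.inv(B) + noise(N, 0.1)

Note that antisymmetry is preserved automatically: B, its matrix inverse, and the noise are all antisymmetric, so self-connections stay zero throughout.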
The properties of the graphs so produced depend (we are assured :) on the probability distribution for the noise term w_ij. If high values are sufficiently unlikely, then 'long' edges, connecting distant nodes (or monads), will be rare, leading to a tree-like structure whose nodes are similar tree-like structures (a 'tree-graph'). The 'nodes', or relatively isolated subtrees, are termed 'gebits': units of geometrical information, after qubits.
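To make this concrete, here is a minimal sketch (my own illustration, not anything from the paper) of how one might read a graph off the B matrix: treat a pair of monads as linked when their connectivity is strong, and take the resulting clusters as crude stand-ins for gebits. The threshold value is an arbitrary assumption:

    import numpy as np
    import networkx as nx

    def graph_from_B(B, threshold=0.5):
        # Link monads i and j when |B_ij| exceeds the threshold. B is
        # antisymmetric, so |B| is symmetric and the graph is undirected.
        adj = (np.abs(B) > threshold).astype(int)
        np.fill_diagonal(adj, 0)      # B_ii = 0 anyway; be explicit
        return nx.from_numpy_array(adj)

    def gebits(G):
        # Crude stand-in for gebits: the relatively isolated clusters,
        # taken here to be the connected components of the graph.
        return [G.subgraph(c).copy() for c in nx.connected_components(G)]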
All this is pre-geometric, but the dimensionality of the space into which the structures have a natural embedding can be calculated as a function of the probability distribution. When large values of w_ij are sufficiently rare, this dimensionality tends towards the value 3 - that is to say, the number of nodes reachable from an arbitrary starting node rises as the square of the number of edges that must be traversed to reach them. This matches the operation of the inverse square law in our 3-dimensional reality.
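Again purely as an illustration, this claim can be checked on a graph like the one sketched above by counting the nodes at exactly r hops from a starting node: if that count grows like r^2, the natural embedding is 3-dimensional. The function below (hypothetical, built on the graph_from_B sketch) estimates the exponent with a log-log fit; e.g. embedding_dimension(graph_from_B(B), source=0) after running the iteration:

    import numpy as np
    import networkx as nx

    def embedding_dimension(G, source, r_max=6):
        # Count nodes at exactly r hops from the source. If the count
        # grows like r^(d-1), the log-log slope estimates d - 1; 'square'
        # growth (slope ~ 2) therefore means a 3-dimensional embedding.
        lengths = nx.single_source_shortest_path_length(G, source,
                                                        cutoff=r_max)
        radii = np.arange(1, r_max + 1)
        counts = np.array([sum(1 for v in lengths.values() if v == r)
                           for r in radii], dtype=float)
        mask = counts > 0             # ignore empty shells
        slope, _ = np.polyfit(np.log(radii[mask]), np.log(counts[mask]), 1)
        return slope + 1.0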
If the probability of a high value for w_ij is slightly larger than that needed to ensure a strict 3D embedding, there arise 'topological defects' (with respect to 3D space) that take the form of extra links within the gebits. These topological defects are preserved throughout the self-replication of the gebits through the iterative process, in much the same way that the replacement of each individual atom in a knotted string by a different atom will still leave the string knotted.
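If we take the sketches above at face value, the 'extra links' reading of a defect has a simple graph-theoretic analogue - edges over and above a spanning tree. This is my own crude stand-in, not the paper's definition:

    def defect_count(G):
        # Extra links beyond a spanning tree: each connected cluster with
        # |V| nodes and |E| edges carries |E| - |V| + 1 independent cycles;
        # a strict tree carries none. Uses the gebits() helper above.
        return sum(g.number_of_edges() - g.number_of_nodes() + 1
                   for g in gebits(G))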
I think we are to take it that the self-replication of the topological information through the iterations is what we more usually refer to as the propagation of particles through space.
To 'track the coarse grained behaviour' of the system - particle physics - a Schroedinger equation (not reproducible in my html, sorry; see the paper referenced below) is used, with a configuration space of all possible embeddings of one space inside another (these embeddings are called 'homotopic mappings' - hence the resulting quantum theory is called quantum homotopic field theory, or QHFT). This equation introduces quantum state diffusion (QSD) terms, which are stochastic and non-linear and are seen to be responsible for the collapse of the wave function that takes place during measurement:
"The random click of the detector is then a manifestation
of Goedel's profound insight that truth has no finite description
in self-referential systems. The click is simply a random contingent
truth."
Further, quantum superpositions and non-locality are seen to arise because the "topologically encoded information may have more than one 'foot-print' in the process-space;" that is (I think) the topological defects may take the form of 'wormholes' strongly connecting parts of the process-space which, in the emergent geometric space, are distant.
I remain a little skeptical about the validity of all of this (as physics) and would love it if someone more knowledgeable could add a more critical writeup below. In the absence of that, we may note that no use is made of Goedel's or Chaitin's mathematics, other than to take them as a kind of interpretive hint; but the theory does provide encouragement for those of us who are increasingly inclined to view physics as a special branch of Information Theory. (Also, I am reminded of the 'physics' in Greg Egan's 2002 book, Schild's Ladder - an introduction to which can be found on his website.)
I will leave the last words to the authors of the paper from which I have been quoting:
Process physics is seen to realise Wheeler's suggested informational 'it from bit' program via the sequence: 'bit -> gebit -> qubit -> it', but only by modelling Goedelian incompleteness at the bit level. Process physics is at the same time deeply bio-logical - reality is revealed as a self-organising, evolving and competitive
information system; at all levels reality has evolved processes for replicating information.
Hmmm. "A self-organising, evolving and competitive information system". Now what does that remind me of?
Process physics web page:
http://www.socpes.flinders.edu.au/people/rcahill/processphysics.html
All quotes from:
http://www.socpes.flinders.edu.au/people/rcahill/0009023.pdf