Albert Einstein, together with his colleagues Boris Podolsky and Nathan Rosen, was
the first to point out that the mathematics of quantum mechanics
entails apparent non-local connections - what would later become known as quantum entanglement. They used these connections to argue that
the quantum theory must be incomplete.
In an article known as the
EPR paper, published in 1935, they pointed
out that by making a measurement of the momentum of one particle, it is
possible to accurately gauge the momentum of another with which it has previously
interacted, however far away the second particle may be, as long as it has not
interacted with anything else since. Now, this leads to a problem: standard quantum theory asserts that a particle
does not have a definite momentum until it is measured, so something has to give here;
either this assertion is incorrect, and the second particle does in fact have a definite momentum before it is measured (in which case some hidden variables would be required to make the theory complete);
or else the measurement of the first particle somehow instantaneously determines the state
of the other, however far away it may be.
Einstein called this 'spooky action
at a distance' - spooky because there is no known mechanism for such an
interaction, and because the relativity of simultaneity implied by the Special Theory of Relativity
means that two distant events which take place at the same time in one frame of reference
will occur at different times in other frames of reference - depending on which way an observer is travelling, one or the other will happen first.
Quantum action at a distance would therefore allow things to be affected by events which, in some frame of reference, haven't happened yet - potentially, events which won't happen for several days.
Plausibly enough, Einstein et al dismissed this possibility out of hand, concluding that a particle must have a definite state whether we look
at it or not.
Three years earlier, however, John von Neumann had produced a proof
which was supposed to show that no theory which assigns definite states
(hidden variables) to particles could ever be consistent with the statistical predictions of quantum mechanics.
Von Neumann was among the most eminent and accomplished mathematicians
of his time, and it was accepted by almost everyone that he had proved
what he thought he had, even after Einstein et al's 'proof' of almost the
exact opposite three years later. This, together with Niels Bohr's implacable -
albeit inscrutable - opposition to Einstein's attempts to argue for realism
in quantum mechanics, led to the EPR paper getting pretty short shrift in most circles.
In 1952, however, David Bohm succeeded in doing what von Neumann had apparently
shown to be impossible: he created a fully working 'hidden variables'
interpretation. His theory constituted a convincing counter-example
to the von Neumann 'proof', but was generally ignored or rubbished at the time and for a long while afterwards.
Many physicists seemed to believe rather vaguely that someone or
other had shown how Bohm couldn't possibly be right - that some gaping
flaw in Bohm's logic had been exposed, showing why his theory had to be
wrong. But there was no such flaw; Bohm's theory, though a little inelegant,
stood in stark defiance of the alleged impossibility of an ontological
interpretation of quantum mechanics (that is, one which tries to give a
description of the world itself, and not just rules determining what we
can say about it). John von Neumann had, in fact, been quite wrong.
The von Neumann impossibility proof depended on a postulate of additivity;
briefly, he assumed that since the average results of measurements are additive
in the quantum formalism (the expectation value of a sum of observables is the
sum of their expectation values, even for observables which cannot be measured
simultaneously), the results of individual measurements should also be additive.
But it is not really difficult to see from a simple example that this need not
be so - in fact, it cannot be true. It is simply wrong, and
when one tries to apply it to Bohm's theory its absurdity becomes quite
clear.
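To make the problem concrete, here is a short Python sketch - my own illustration, using the spin components of a single electron, which is essentially the counter-example Bell later used. The averages of spin along x and spin along y really do add up to the average of their sum, but the results of individual measurements cannot, because a measurement of σx + σy can only ever give ±√2, which is never the sum of one value from {-1, +1} and another from {-1, +1}.

```python
import numpy as np

# Pauli matrices: observables for spin along x and along y (in units of ħ/2)
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)

# The possible results of individual measurements are the eigenvalues
print(np.linalg.eigvalsh(sigma_x))            # [-1.  1.]
print(np.linalg.eigvalsh(sigma_y))            # [-1.  1.]
print(np.linalg.eigvalsh(sigma_x + sigma_y))  # [-1.414...  1.414...], i.e. ±√2

# Expectation values are additive for any state psi...
psi = np.array([1.0, 0.0], dtype=complex)     # spin 'up' along z, say
expect = lambda op: float(np.real(psi.conj() @ op @ psi))
print(expect(sigma_x) + expect(sigma_y), expect(sigma_x + sigma_y))  # equal

# ...but no assignment of definite values to individual measurements can be,
# since ±√2 is never a sum of two numbers each drawn from {-1, +1}.
```

Von Neumann's postulate demands exactly that sort of value-by-value additivity of any hidden-variables theory, which is why the 'proof' fails.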
There are a number of extraordinary features of the history of von Neumann's
proof, and other related proofs trying to show the same thing. In
1935 the von Neumann proof was actually refuted - its fatal logical flaw
exposed - by a little-known female German mathematician by the name
of Grete Hermann. Nobody seems to
have noticed this until 1974. In 1952 David Bohm formulated
a theory which demonstrated quite clearly, for anyone who took the time
to look at it, that von Neumann couldn't possibly be right. In fact,
if von Neumann had tested his proof against Louis de Broglie's theory, the predecessor
to Bohm's account and the only putative hidden variables theory then available, he would have seen at once
that his crucial assumption did not apply to it.
But de Broglie's theory had been rubbished at the 1927 Solvay Congress, and von Neumann
doesn't appear to have even considered it. His theorem failed
to rule out the only existing example of the class of theories it was supposed
to prove impossible, but apparently nobody noticed this until much later.
When Bohm's theory was published it was attacked by several eminent
physicists, and subsequently it was widely treated as if it had been refuted.
But it had not been. As Bell put it, '...even Pauli, Rosenfeld, and Heisenberg
could produce no more devastating criticism of Bohm's version than to brand
it as "metaphysical" and "ideological".'
People didn't just ignore Bohm's theory - they actually kept producing new
variations on the proof showing why it couldn't possibly exist, well after
1952, generally making mistakes similar to von Neumann's.
When John Stewart Bell finally showed what was wrong with such proofs in
a paper which people actually paid attention to,
one might have expected that people would finally stop making the same
mistakes. But they did not. People kept on producing impossibility
proofs with closely related errors at least as late as 1978, twelve years
after their central fallacy was brought to the fore by Bell, twenty-six
years after a convincing counter-example was shown to exist - and forty-three
years after von Neumann's 'proof' had first been disproven.
Now in 1964, Bell produced an impossibility proof of his own, of a more
limited character (this was after he had demonstrated von Neumann's mistake,
but before the paper in which he did so was published). He sought
to show not that any 'hidden variables theory' of quantum mechanics would
fail, but that any such theory would necessarily entail a sort of inseparability,
or non-locality. In fact, he thought that the EPR argument was quite
persuasive, and had not been given the credit it was due; it just didn't lead him to the same conclusions as Einstein et al.
Bell produced a proof to show that if we assume the reality of
the variables in question (that is to say, we assume the particles possess
real, definite values of position, momentum, spin and so on at all times), and we assume
that Einsteinian locality holds - which is to say, nothing can ever have an effect on anything else in less than the time it takes for a beam of light to pass between them - then the joint probability distribution
for measurements on the two particles is constrained: the results
cannot possibly be correlated as strongly as quantum mechanics predicts.
That is, as long as the predictions of quantum mechanics are correct,
we must abandon either the expectation of locality
or the idea that particles possess definite properties in between measurements. When Bell wrote this, the possibility
remained that these particular quantum mechanical predictions would turn
out to be wrong, but it already seemed unlikely. When they
were finally tested experimentally, by Alain Aspect and others,
it came as no great surprise that the quantum mechanical predictions were
confirmed.
One of the simplest examples of a violated Bell inequality is a variation
on the EPR experiment proposed by Bohm and refined by Bell. An atom
emits two photons in opposite directions. When one of these photons
reaches a polariser, it will either be absorbed or transmitted. If
the other photon reaches a polariser at the same angle, it will be absorbed
if the first was absorbed, or transmitted if the first was transmitted;
the photons exhibit perfect correlation when the polarisers are aligned,
as if they were polarised at just the same angle.
If the polarisers are aligned at different angles, the photons will
sometimes both be stopped or both let through, and sometimes one will be
stopped and the other let through; we can say that they disagree in these
cases. The proportion of the time that the photons agree is cos²(θ1 - θ2),
so the proportion of the time they disagree is sin²(θ1 - θ2),
where θ1 and θ2 are the angles of the two polarisers respectively.
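To see why it is so hard for a local story to reproduce this, it helps to try the most obvious sort of hidden-variables model and compare it with the quantum prediction. The Python sketch below is purely my own illustration - the specific model, in which each pair carries a shared 'true' polarisation angle and a photon is transmitted whenever its polariser lies within 45° of that angle, is just a naive first guess, not anything proposed by Bohm or Bell. It reproduces the perfect agreement at equal angles, but its disagreement rate grows linearly with the angle between the polarisers instead of following sin²(θ1 - θ2).

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000  # photon pairs per pair of settings

def naive_disagreement(theta1, theta2):
    """Naive local hidden-variables model: each pair shares a 'true'
    polarisation angle lam; a photon is transmitted when its polariser
    lies within 45 degrees of lam (angles taken modulo 180)."""
    lam = rng.uniform(0.0, 180.0, N)
    def transmitted(theta):
        acute = np.abs((lam - theta + 90.0) % 180.0 - 90.0)
        return acute < 45.0
    return np.mean(transmitted(theta1) != transmitted(theta2))

for delta in (0.0, 22.5, 45.0, 67.5, 90.0):
    qm = np.sin(np.radians(delta)) ** 2  # quantum prediction: sin²(θ1 - θ2)
    print(f"Δθ = {delta:4.1f}°  naive model ≈ {naive_disagreement(0.0, delta):.3f}  quantum = {qm:.3f}")
```

The naive model matches quantum mechanics at 0°, 45° and 90°, but not in between. Of course, the failure of one model proves nothing by itself; the point of Bell's argument is that no local model of this general kind can match the quantum curve at every angle.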
What Bell believed he had proved was that this level of agreement between
the photons was in excess of the level that any local hidden variables
theory could possibly predict. His conclusion was based on the observation
that any probability distribution which is factorisable cannot show correlation
in excess of a certain amount - an amount exceeded by the predictions of
quantum theory. He supposed that any local 'hidden-variables' theory
should yield a probability distribution of this sort - that is, it should
be possible to decompose the probability distribution into factors representing
the settings of the two polarisers and any additional local hidden variables.
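To put some numbers on this (my own illustration, using the CHSH form of the inequality due to Clauser, Horne, Shimony and Holt, which is the version usually tested rather than Bell's original 1964 inequality): the correlations at two polariser settings per side can be combined into a single quantity S. Any factorisable - which is to say local hidden-variables - distribution keeps |S| at or below 2, while the quantum prediction for the photon pair reaches 2√2 ≈ 2.83 at well-chosen angles.

```python
import itertools
import numpy as np

def E_quantum(a, b):
    """Quantum correlation for the photon pair: P(agree) - P(disagree)
    = cos²(a - b) - sin²(a - b) = cos(2(a - b)). Angles in degrees."""
    return np.cos(np.radians(2 * (a - b)))

# CHSH combination at the angles which maximise the quantum violation
a, a2, b, b2 = 0.0, 45.0, 22.5, 67.5
S_qm = E_quantum(a, b) - E_quantum(a, b2) + E_quantum(a2, b) + E_quantum(a2, b2)
print(S_qm)  # 2.828..., i.e. 2*sqrt(2)

# A local deterministic assignment fixes the outcome (+1 transmitted,
# -1 absorbed) for both settings on each side; every factorisable
# distribution is a mixture of such assignments, and none exceeds |S| = 2.
best = 0.0
for A1, A2, B1, B2 in itertools.product((+1, -1), repeat=4):
    S = A1 * B1 - A1 * B2 + A2 * B1 + A2 * B2
    best = max(best, abs(S))
print(best)  # 2.0
```

Experiments of the kind Aspect carried out measure combinations of exactly this sort, and the measured values comfortably exceed the local bound of 2.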
Bell concludes:
'In a theory in which parameters are added to quantum mechanics
to determine the results of individual measurements, without changing the
statistical predictions, there must be a mechanism whereby the setting
of one measuring device can influence the reading of another instrument,
however remote. Moreover, the signal involved must propagate instantaneously,
so that such a theory could not be Lorentz invariant.'
Most quantum physicists and philosophers of science who have looked into the question agree that Bell at least proved something like what he said he had proved, and quantum non-locality, or entanglement, has come to be a more-or-less accepted part of the scientific landscape. However, the exact meaning of entanglement, with the 'instantaneous' action at a distance it seems to involve, remains unclear in a universe featuring Einstein's relativity of simultaneity.
There is still some disagreement about what exactly Bell's results show; not everybody accepts that they mean anything very much at all. In 1982 Henry Stapp produced similar inequalities without any reference to hidden variables, suggesting that locality would have to give whether or not we want to aim for a realistic interpretation of quantum physics. Attacking from the other side, Willem de Muynck has published a series of papers questioning the common conclusion that Bell inequalities sound a death knell for Einsteinian locality, on the grounds that it is just as possible to derive Bell inequalities for (some) non-local hidden variables theories as it is for local ones. Nancy Cartwright and Hasok Chang have argued that the joint probability distribution on which Bell's inequality rests is inappropriate when applied to a fundamentally stochastic theory. David Deutsch and Patrick Hayden have argued that the seeming non-locality of quantum physics is in fact illusory, since (they maintain) a careful analysis of the flow of information shows that it is all localised.
The problems around non-locality in physics continue to attract some of the best minds in the field, and they continue to reach wildly different, mutually incompatible conclusions on the topic. It looks as if this controversy is going to keep smouldering for a good while yet.
Selected bibliography
- John Bell (1987), 'Speakable and Unspeakable in Quantum Mechanics' (collected papers on quantum philosophy), Cambridge University Press
- Hasok Chang and Nancy Cartwright (1993), 'Causality and Realism in the EPR Experiment', Erkenntnis 38, 2
- John Gribbin, 'Schrödinger's Kittens and the Search for Reality'
- Willem M. de Muynck (1986), 'The Bell Inequalities and their Irrelevance to the Problem of Locality in Quantum Mechanics', Physics Letters A114, 65-67
- Willem M. de Muynck (1988), 'On the Significance of the Bell Inequalities for the Locality Problem in Different Realistic Interpretations of Quantum Mechanics', Annalen der Physik, 7. Folge 45, 222-234
- Willem M. de Muynck (1996), 'Can We Escape from Bell's Conclusion that Quantum Mechanics Describes a Non-Local Reality?', Stud. Hist. Phil. Mod. Phys. 27, No. 3, pp. 315-330
- David Deutsch and Patrick Hayden (1999), 'Information Flow in Entangled Quantum Systems', Proc. R. Soc. Lond. (http://xxx.lanl.gov/abs/quant-ph/9906007)
- Henry P. Stapp (1986), 'Quantum Nonlocality and the Description of Nature' in Cushing & McMullin (eds), 'Philosophical Consequences of Quantum Theory: Reflections on Bell's Theorem', University of Notre Dame Press
This piece is adapted from a section of my BSc dissertation on Quantum Entanglement and Causality, as is my Quantum Entanglement piece, which covers more about the practical implications of entanglement. The full text of the dissertation (partially re-written for a somewhat less technical audience) can be found at http://oolong.co.uk/Causality.html; its original bibliography, annotated, is at http://oolong.co.uk/Bibliography.htm