A dimensionless number, α = e²/4πε₀ℏc ≈ 1/137.036
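As a quick sanity check, α can be computed directly from the constants in that formula (the CODATA values below are my assumed inputs; any recent set gives the same digits):

```python
import math

# Compute alpha = e^2 / (4*pi*eps0*hbar*c) from CODATA values.
e = 1.602176634e-19        # elementary charge, C (exact since 2019)
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
hbar = 1.054571817e-34     # reduced Planck constant, J*s
c = 299792458.0            # speed of light, m/s (exact)

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(alpha)      # ~0.00729735...
print(1 / alpha)  # ~137.036
```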

This number determines the strength of the electromagnetic force. More accurately, it governs how strongly a charged particle will interact with that force (i.e. absorb or emit photons).

This is one of the primary physical constants that determines the nature of the Universe. If it were a different value, electrons wouldn't orbit atomic nuclei in quite the same way, if at all.

If it were larger, you wouldn't be able to tell the difference between matter and radiation.

If it were smaller, particles might as well ignore the electromagnetic interaction.

Remember, however, that while the individual constants in the formula above have dimensions based upon arbitrary human units, α itself does not; even so, its value might not be as serendipitous as it first appears (see the anthropic principle).

The fine-structure constant, α is the ratio of the quantum unit of electromagnetic force to the electron-based quantum unit of inertial force.

It plays a key part in quantum electrodynamics (QED), the theory in which the electromagnetic force is transmitted by photons. In that theory the g-factor of the electron, which sets its magnetic moment, is modified by a series of terms involving the fine-structure constant.
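For instance, the first few terms of that series for the electron's anomalous moment a = (g−2)/2 already reproduce the measured value extremely well. A sketch (the coefficients are the standard published QED values, quoted here to a few digits):

```python
import math

# First three terms of the QED series for the electron anomaly
# a = (g - 2)/2. The leading alpha/(2*pi) is Schwinger's 1948 term;
# the 2nd- and 3rd-order coefficients are the standard QED values.
alpha = 0.0072973525693
x = alpha / math.pi

a = 0.5 * x - 0.328478965 * x**2 + 1.181241456 * x**3
print(a)  # ~0.0011596522, close to the measured 0.00115965218
```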

The fine-structure constant is so called because it is related to the fine-structure apparent in atomic spectra. These spectral lines are due to the transitions of electrons between available energy levels.

The Rydberg constant R∞, which appears in the Rydberg equation, relates the spectral lines to energy-level transitions. It is given by

R∞ = α²mec / 2h

where me is the electron mass, c is the speed of light, h is Planck's constant and α again is the fine-structure constant.

Thus, one way of determining α is by observing the atomic spectrum of hydrogen (chosen for its simplicity) and determining R∞.
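As a numerical sketch, the relation R∞ = α²mec/2h can be checked directly (CODATA values assumed):

```python
# Check R_infinity = alpha^2 * m_e * c / (2*h) numerically.
alpha = 0.0072973525693
m_e = 9.1093837015e-31   # electron mass, kg
c = 299792458.0          # speed of light, m/s
h = 6.62607015e-34       # Planck constant, J*s (exact)

R_inf = alpha**2 * m_e * c / (2 * h)
print(R_inf)  # ~1.0973731568e7 per metre
```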

In recent years researchers have been checking if the fine-structure constant is a universal constant, immutable in time and space, by observing the atomic spectra of quasars billions of light years distant.

Recent results have indicated that α may not be a constant.

Evolution of the Fine Structure Constant

An old idea...

The idea that the constants might evolve over time is an old one. Paul Dirac first combined the fundamental constants into dimensionless numbers. One example: the ratio of the universe's age to the time it takes light to cross the classical electron radius is similar in magnitude to the ratio of the electrostatic to the gravitational force in the hydrogen atom (about 10⁴⁰ compared to 10³⁹). Since the universe has of course got older, keeping this relationship constant would require the values of the fundamental forces to change over time.
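Dirac's coincidence is easy to reproduce with rough numbers (the 13.8-billion-year age and the classical electron radius are the conventional choices, assumed here; all values approximate):

```python
# Sketch of Dirac's large-number coincidence.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
k_e = 8.9875517873e9     # Coulomb constant 1/(4*pi*eps0), N m^2 C^-2
e = 1.602176634e-19      # elementary charge, C
m_e = 9.109e-31          # electron mass, kg
m_p = 1.6726e-27         # proton mass, kg
r_e = 2.8179e-15         # classical electron radius, m
c = 299792458.0          # speed of light, m/s
age = 13.8e9 * 3.156e7   # age of the universe, s

force_ratio = k_e * e**2 / (G * m_e * m_p)  # electric / gravitational
time_ratio = age / (r_e / c)                # age / light-crossing time

print(f"{force_ratio:.2e}")  # ~2.3e39
print(f"{time_ratio:.2e}")   # ~4.6e40
```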

But Why?

It has long been known that even our most accurate theory of physics is incomplete: the Standard Model relies on certain arbitrarily determined constants as inputs, as 'givens'. Newer formulations of physics such as superstring theory and M-theory do allow mechanisms whereby these constants can arise from the underlying model.

One problem with these new theories is that they postulate the existence of extra dimensions that are said to be 'compactified' down to (most likely) about the Planck length, where they have no impact on the visible world we live in. Saying these extra dimensions must exist, but on a scale we can't observe, is presumably part of the reason Richard Feynman was heard to refer to superstring theory as nonsense.
Another problem is that if these extra dimensions are not fixed in size, the ratio between the compactified dimensions and our normal four space-time ones could cause some of the fundamental constants to change. If this could happen, it might lead to physics in contradiction with the universe we observe.
Of course, the flip side is that the constants not being fixed provides a means by which you can test part of the predictions of superstring and M-theory.

Another reason for having the fine-structure constant change over time is that it allows you to postulate that the speed of light might not be constant, which would explain the flatness, horizon and monopole problems in cosmology.
Recent work has shown the universe is expanding at an ever faster rate, and there may well be a non-zero cosmological constant after all. (Einstein called his original inclusion of the constant, added to balance the books, his "biggest blunder".) There is a class of theories in which the speed of light is determined by a scalar field (the field driving the cosmic expansion, playing the role of the cosmological constant) that couples to the gravitational effect of pressure. Changes in the speed of light convert the energy density of this field into energy. The basic upshot is that when the universe was very young and hot, during the radiation epoch, this prevented the scalar field from dominating the universe; as the universe expands, pressureless matter dominates, variations in c decrease (and therefore alpha becomes fixed and stable), and the scalar field begins to dominate, driving a faster expansion of the universe.

Whether the claimed variation of the fine-structure constant exists or not, putting bounds on its rate of change places tight constraints on new theories of physics, making this very valuable research.

Measuring the Dimensionless

The fine-structure constant is of course one of those dimensionless numbers, and one that can actually be measured (or observed) well enough to look for any time variation. By examining the natural nuclear reactor at Oklo, Gabon, constraints can be put on the rate of change of alpha. This does not reach far enough back in time to really pin down any changes, and by the same token current laboratory measurements of alpha have to be so incredibly precise that experimental error is nearly impossible to rule out. One such experiment, based on comparing two clocks built from different atoms (a hydrogen maser and a Hg+ clock) over three and a half months, gave the result that alpha did not change by more than 3.7×10⁻¹⁴.

It turns out that by looking at the spectra of light from quasars that has passed through intervening gas clouds, the fine-structure constant can be calculated from the atomic absorption lines in the spectra. By observing quasars at different redshifts, the fine-structure constant can be calculated for different periods in the history of the universe. This experiment has been performed for sources from 23% to 87% of the age of the universe by J. K. Webb et al., and the results suggest that Δα/α = (−0.72 ± 0.18)×10⁻⁵ (over the redshift range 0.5 < z < 3.5, a 4-standard-deviation result).
A rough schematic of the system is shown below :-

               *     *     *                     __
                   * *  *                        __
###              M+    *  *
###~~~~~~~~~~~~~~~~~~>M+~~~~~~~~~~~~~~>@         ==
###             *   *  *              
                  *      M+                      __

Quasar         Gas Cloud            Detector   Spectra


The devil is in the details....

So the way it works is you measure the atomic spectrum of a beam of light from a quasar that has been absorbed and re-emitted by metal ions in an intervening gas cloud, and compare this spectrum to a reference one obtained in the lab. Any difference between the two indicates a change in α.
The dependence of the observed wavenumber ωz on alpha is given by :-

ωz = ω0 + q1x + q2y

where ω0 is the wavenumber measured in the lab, x = (αz/α0)² − 1 and y = (αz/α0)⁴ − 1.
α0 is the present-day value, and αz is the value at the redshift of the absorption.
q1 and q2 are coefficients needed to correct for the relativistic effects on the electrons orbiting a given atom in a given electronic configuration. (Please see my write-up in gold for a bit more on this.)
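As an illustrative sketch of the method, here is the wavenumber relation with made-up q coefficients (the real values for each transition come from atomic-structure calculations, not from this toy):

```python
# Toy sketch of the many-multiplet relation w_z = w_0 + q1*x + q2*y.
# The wavenumber and q coefficients here are invented for illustration.
def observed_wavenumber(w0, q1, q2, alpha_ratio):
    """Wavenumber at redshift z, given alpha_z / alpha_0."""
    x = alpha_ratio**2 - 1.0
    y = alpha_ratio**4 - 1.0
    return w0 + q1 * x + q2 * y

w0 = 35669.0           # cm^-1, hypothetical lab wavenumber
q1, q2 = 1200.0, 500.0 # cm^-1, hypothetical relativistic coefficients

# A fractional change in alpha of -0.72e-5, as in the Webb results:
ratio = 1.0 - 0.72e-5
shift = observed_wavenumber(w0, q1, q2, ratio) - w0
print(shift)  # a tiny shift, of order -0.03 cm^-1
```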

Originally measurements were made by looking at one doublet in the spectrum of one species, e.g. MgII or SiIV (the alkali doublet method), but recent work has shown that a tenfold increase in precision can be obtained by using several species: magnesium and iron, for instance, where the lighter magnesium provides an "anchor" point for the heavier elements. This "many multiplet" method allows the comparison of any combination of transitions from different species, usually a heavy and a light metal ion. It both increases the amount of data you can get from your spectra (which helps the statistics) and eliminates a potential source of error, since the relativistic corrections can be both positive and negative; if your calculations of q1 and q2 are not complete, this helps compensate for the errors.
In the Physical Review Letters paper (87, 091301, 2001) by J. K. Webb et al., further sources of systematic experimental error were identified, such as the variation of isotopic abundances, but all could either be discounted or were shown to increase the effect.

This is potentially Nobel prize winning stuff here I feel, if the work can be followed up and independently verified.

Cosmology Matters

Whenever you start messing with the fundamentals of physics, you can expect to see changes in the world around you. Often this happens not just on the small scale but on the largest scale possible, over the entire universe; changing alpha is no exception.
After the universe underwent cosmological inflation and cooled, energy began to convert to matter, and nucleosynthesis formed the lighter elements (up to lithium, I believe; heavier elements need stars or even supernovae to form). Initially the universe was a super-heated plasma, opaque to light; it is this "surface" that gives rise to the cosmic background radiation.
Any change in alpha would firstly alter the proportion of elements formed in primordial post-Big-Bang nucleosynthesis, and secondly affect the time it took the electrons to couple with the nucleons in the plasma, changing the "distance" to this surface (i.e. the redshift of the cosmic microwave background radiation), as well as the ratio of baryons to photons at the last-scattering surface described above. Such effects could be observed on a universal scale.
Looking at the ⁴He abundance over the cosmos, Kolb, Perry and Walker calculated in 1986 that Δα/α < 9.9×10⁻⁵ at the time of Big Bang nucleosynthesis (corresponding to a redshift z ~ 10⁸ to 10⁹).
The effect of a different fine structure constant should give rise to changes in both the amplitude and position of features in the cosmic microwave background radiation, which the next generation of space-based instruments should be able to detect.
Whilst studies of both of these effects have suggested that the fine structure constant could perhaps have been smaller in the past, the results are not statistically significant.
One reason observations of the CMBR might not give significant results with respect to varying alpha is that if dark energy exists, with its density moderated by a scalar "quintessence" field, then the bounds on the change of alpha imposed by the observations might have to be widened.

Where are we now

Well, the jury is still out on this one; there is quite simply not enough evidence to prove that the fine-structure constant, or any other fundamental constant, changes with time.
If the constants can be proven to change, however, it could validate new theories of physics and help confirm other hypotheses such as the existence of dark energy and quintessence: scalar fields involved in a varying cosmological constant.

Scientific American recently published an article suggesting that perhaps the fine structure constant isn't, in fact, constant. More specifically, astronomers have found observational evidence that seems to indicate that the fine structure constant was weaker by about one part in a hundred thousand several billion years ago.

On the one hand, this seems almost irrelevant: a difference of one part in a hundred thousand, billions of years ago, doesn't affect us directly in any way. On the other hand, it makes all the difference; if one of the so-called constants of the Universe is changing, this pokes holes in all sorts of current theories in physics.

Before we get too excited, however, let's remember that something like this comes along just about every year. Physicists are constantly discovering bizarre results that seem, for a while, to disprove some long-standing theory, or provide evidence for some new theory that was, until that point, just some physicist's pet speculation. 99 times out of 100, there turns out to be a relatively simple explanation, and the whole thing blows over. Recall the cold fusion fiasco. In any case, there is one detail that makes this particular claim rather sketchy: the same evidence that indicates that the fine structure constant was different ~5 billion years ago also indicates that it was the same as today when the Universe was young, ~12-15 billion years ago. One finds it difficult to imagine an elegant theory that would result in the fine structure constant weakening by a small amount and then returning precisely to its original value. My opinion is that some alternate explanation for the data will be found shortly.

A dimensionless fundamental constant of the universe, shown as α or αe, with the 'e' for 'electromagnetic'. Its value is approximately 1/137. α is a coefficient in the expression for how likely it is that an electromagnetic interaction will take place between two charged particles. In a Feynman diagram, each photon vertex introduces a factor of roughly √α into the probability amplitude, so about α per photon in the probability (in addition to a couple of other factors).

There are similar constants for the other forces, but they are more complicated to use, since the other forces are mediated by bosons which have mass. This gives the equations mass terms, which really mucks them up, since you then have to integrate over all the possible momenta they could transfer. Photons have no mass, so they carry any amount of energy equally well, and the integration is far, far easier.

The fine structure constant's full significance was (pretty much) worked out by Richard Feynman. Since 'Feynman' is pronounced including the sound 'fine', this connection should be easy to remember. However, it isn't quite that simple. The value of the fine structure constant was known to spectroscopists from the fine and hyperfine splitting of hydrogen emission lines before its full significance was discovered. That is where it got its name. Now it is known to be the fundamental electromagnetic interaction coefficient, and 'fine structure constant' doesn't really do it justice.

Ignoring electroweak unification

ugah174: I don't consider this 'derivation' to be at all significant. If the electromagnetic force has this strength for geometrical reasons, why aren't the other four force constants the same thing? They're equally constrained by geometry. Also, the derivation increases the number of magic numbers from one real number to two integers. Why aren't there 29 forces then?

Solution to a 20th Century Mystery

Feynman's conjecture of a relation between α, the fine structure constant, and π

James G. Gilson, j.g.gilson@qmul.ac.uk

Feynman's Conjecture

A general connection of the quantum coupling constants with π was anticipated by R. P. Feynman in a remarkable intuitional leap some 40 years ago as can be seen from the following much quoted extract from one of Feynman's books.

There is a most profound and beautiful question associated with the observed coupling constant e the amplitude for a real electron to emit or absorb a real photon. It is a simple number that has been experimentally determined to be close to -0.08542455. (My physicist friends won't recognize this number, because they like to remember it as the inverse of its square: about 137.03597 with about an uncertainty of about 2 in the last decimal place. It has been a mystery ever since it was discovered more than fifty years ago, and all good theoretical physicists put this number up on their wall and worry about it.) Immediately you would like to know where this number for a coupling comes from: is it related to π or perhaps to the base of natural logarithms? Nobody knows. It's one of the greatest damn mysteries of physics: a magic number that comes to us with no understanding by man. You might say the "hand of God" wrote that number, and "we don't know how He pushed his pencil." We know what kind of a dance to do experimentally to measure this number very accurately, but we don't know what kind of dance to do on the computer to make this number come out, without putting it in secretly!

The Solution

It will here be shown that this problem has a remarkably simple solution confirming Feynman's conjecture. Let P(n) be the perimeter length of an n sided polygon and r(n) be the distance from its centre to the centre of a side. In analogy with the definition of π = C/2r we can define an integer dependent generalization, π(n), of π as

π(n) = P(n)/(2r(n)) = n tan(π/n).

Let us define a set of constants {α(n1, n2)}, dependent on the integers n1, n2, as

α(n1, n2) = α(n1, ∞) π(n1×n2)/π, ...........................†

where

α(n1, ∞) = cos(π/n1)/n1.

The numerical value of α, the fine structure constant, is given by the special case n1 = 137, n2 = 29.


α = α(137,29) = 0.0072973525318...

The experimental value for α is

αexp = 0.007297352533(27),

the (27) is ± the experimental uncertainty in the last two digits.

The very simple relation † between α, the fine structure constant, π and π(n) confirms Feynman's conjecture and also his amazing intuitional skills.
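Taking the formula at face value, the quoted number is easy to reproduce numerically (this only checks the arithmetic of †, not the physics):

```python
import math

# Numerical check of Gilson's formula for alpha.
def pi_n(n):
    """Polygon generalization of pi: P(n)/(2*r(n)) = n*tan(pi/n)."""
    return n * math.tan(math.pi / n)

def alpha_gilson(n1, n2):
    # alpha(n1, infinity) = cos(pi/n1)/n1, multiplied by pi(n1*n2)/pi.
    return (math.cos(math.pi / n1) / n1) * pi_n(n1 * n2) / math.pi

a = alpha_gilson(137, 29)
print(a)  # ~0.0072973525318
```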


For details of how the formula † was obtained and some of the consequences arising from it visit the website:-


Reply to Oneiromancer: There have been dissenters from every theoretical physics construction ever published. However, in a public forum, cogent arguments are more to be desired than a case resting on "I don't consider............"

You suggest that if the electromagnetic force is geometrical, then the other forces should have the same value. This is equivalent to saying that all polygons should have the same size and the same number of sides. In fact, if you look at equation † again, you will see that it expresses a proportionality between α-like quantities and π-like quantities. This means that geometry alone may not be all that is involved. However, it seems to me not to matter if only geometry is involved; after all, what is general relativity if not just geometry? The integers 137 and 29 are specific values of the pair of quantum numbers (n1, n2). Other specific values of this pair correspond to other values of the fine structure constant at different transfer energies, or to other coupling constants. α(29, 137), for example, gives the value of the electroweak coupling constant. Visit my website to find out more about the details of the theory.

The Fine Structure Constant, named alpha (α = e²/ℏc in Gaussian units), is described in detail in write-ups by Oneiromancer and ugah174.

But alpha may not be as constant as was once thought.
This is the suggestion made by a team of Australian physicists who published their findings in the August 2001 issue of Physical Review Letters.

It has always been thought that alpha was a dimensionless number roughly equal to 1/137, and this value plays a fundamental part in our understanding of electromagnetism.
A recent study led by John K. Webb of the University of New South Wales suggests that the value might have changed over the last 10 billion years by as much as 1 part in 100,000.

Using the Keck Telescope in Hawaii, the team compared the value of alpha some 12 billion years in the past to that of today.
This involved observing light from very old quasars passing through gaseous clouds on their way to Earth. In total 49 clouds were observed ranging between 5.5 billion and 11 billion light years distant. Each cloud that the light from the quasars passed through changed the spectrum image of the light based on the elemental composition of the clouds (in particular based on the presence of metallic ions in the clouds).
If alpha remained constant then the ions in the clouds should have absorbed parts of the light spectrum in the same way that ions on earth do today.
By comparing the composition of the light from the quasars to that of light on Earth, the scientists could determine what changes to alpha had occurred over the last 10 billion years.
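To relate redshifts to "billions of years in the past", here is a minimal lookback-time integral for a flat ΛCDM cosmology (the round-number parameters Ωm = 0.3, ΩΛ = 0.7, H0 = 70 km/s/Mpc are my assumptions, not taken from the study):

```python
import math

def lookback_time_gyr(z, h0=70.0, omega_m=0.3, omega_l=0.7):
    """Lookback time in Gyr for a flat LambdaCDM universe:
    t = (1/H0) * integral_0^z dz' / ((1+z') * E(z')),
    E(z) = sqrt(omega_m*(1+z)^3 + omega_l), via the trapezoid rule."""
    hubble_time_gyr = 977.8 / h0  # 1/H0 in Gyr for H0 in km/s/Mpc
    n = 10000
    total = 0.0
    for i in range(n):
        z1, z2 = z * i / n, z * (i + 1) / n
        f1 = 1.0 / ((1 + z1) * math.sqrt(omega_m * (1 + z1)**3 + omega_l))
        f2 = 1.0 / ((1 + z2) * math.sqrt(omega_m * (1 + z2)**3 + omega_l))
        total += 0.5 * (f1 + f2) * (z2 - z1)
    return hubble_time_gyr * total

print(lookback_time_gyr(1.0))  # ~7.7 Gyr
print(lookback_time_gyr(3.0))  # ~11.4 Gyr
```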
According to the study it appears that alpha has grown over time.

If this is true, that fundamental constants can change or evolve over time, it would mean that many of the principles on which orthodox science is based will have to be revisited. But no one is wiping the slate just yet.
The team members have been quick to point out that there is a roughly 1 in 10,000 chance that their result is a statistical fluke, and have called for another, internationally sponsored, independent measurement to verify or refute the findings of this experiment.

The Economist: 6 April 2002
The Quasar Absorption Line Fine Structure Experiment http://www.astro.psu.edu/users/cwc/fsc.html
News in Science http://www.abc.net.au/science/news/stories/s410048.htm
Space.com http://www.space.com/scienceastronomy/generalscience/constant_changing_010815.html

Varying alpha news

The Webb team at UNSW recently announced new results, based on several months' analysis of new observations:

δα/α = −(5.7 ± 1) × 10⁻⁶

This is extremely unlikely to have occurred by chance - about one in a million. However there is the possibility of systematic error, which has been addressed by the group in some published papers. I don't think the situation is anything like cold fusion: no-one is disputing the accuracy of the observations on which the claim is made, and no-one has come up with a source of systematic error which would explain the results. However, most physicists are cautious, just because it would be such a significant result if confirmed. It is also rather difficult to understand what sort of theory might explain such a variation, but if the experimental data hold up we will just have to live with it - and John Webb will probably get a Nobel Prize!

Varying "constants" could turn out to be a very fertile method to probe the fundamental theory. Any unified theory should predict relations between varying alpha and (variations in) other quantities, such as the ratio of the proton mass to the electron mass, mu = mp/me, and the gyromagnetic ratio of the proton, gp. If we have good enough measurements, it might be possible to distinguish between different candidate theories.

Varying mu?

Recently, an independent group of researchers based in St. Petersburg and Paris announced results hinting at a variation in mu:

δμ/μ = (5.02 ± 1.82) × 10⁻⁵

from Ivanchik et al. (2002). The statistical error is rather large compared to the possible variation, hence it does not constitute firm evidence.

Early Universe bounds and why they aren't important (yet)

As pointed out by CapnTrippy, Big Bang nucleosynthesis (BBN) and the CMB would be affected by changes in alpha. Nucleosynthesis is the theory that explains how the primordial abundances of light elements were created by nuclear reactions just after the Big Bang. Naturally, if alpha was different at that time, it would affect the abundances. But crucially, the data on nucleosynthesis are much less precise than the data from quasar absorption spectra.

Nucleosynthesis can rule out variations in alpha at the level of maybe a few percent, but the size of the variation claimed by the Webb group is much smaller, and would have no measurable effect on the nuclear reactions. Similarly, a combined analysis of nucleosynthesis and the CMB leads to a bound (C.J.A.P. Martins et al., 2002) of a few percent on the variation of alpha.

The variation in the early Universe could have been 100,000 times larger than the Webb results and it would still not have been seen in BBN or CMB. So there is no reason to require that alpha must first have varied one way, then returned "precisely" to its original value. One can however rule out models in which alpha varies much faster in the early Universe.

There are continuing efforts to improve the limits from the early Universe, but they will always be somewhat slippery since the observations don't measure alpha directly, but rather some combination of alpha with other parameters of the theory - which might themselves have been varying. This also applies to the bounds from the Oklo fossil reactor, which depend on poorly-understood nuclear physics as well as on alpha.

P.S. - Gilson's theory: his formula for alpha is very accurate, but his formula for the electroweak angle, once you go to the website and hunt it out, is not. Why such a discrepancy? And what happens to the theory, which allows only discrete values, if alpha is varying?
