This is Everything2's science writing group, existing to encourage, facilitate and organise the writing and discussion of science on this site. Members are usually willing to give feedback on any writing on scientific topics, answer questions and correct mistakes.

The E2_Science joint user is here to make it easier to collectively edit and maintain the various indexes of science topics: Scientists and all its sub-indexes, physics, astronomy, biochemistry and protein. More may follow. It also collects various useful links on its home node.

Note that there is also a separate e^2 usergroup for the discussion of specifically mathematical topics.


Venerable members of this group:

Oolong@+, CapnTrippy, enth, Professor Pi, RainDropUp, Razhumikin, Anark, The Alchemist, tom f, charlie_b, ariels, esapersona, Siobhan, Tiefling, rdude, liveforever, Catchpole, Blush Response, Serjeant's Muse, pimephalis, BaronWR, abiessu, melknia, IWhoSawTheFace, 10998521, sloebertje, getha, siren, pjd, dgrnx, flyingroc, althorrat, elem_125, DoctorX, RPGeek, redbaker, unperson, Iguanaonastick, Taliesin's Muse, Zarkonnen, SharQ, Calast, idan, heppigirl, The Lush, ncc05, Lifix, Akchizar, Palpz, Two Sheds, Gorgonzola, SciPhi, SyntaxVorlon, Redalien, Berek, fallensparks, GunpowderGreen, dichotomyboi, sehrgut, cordyceps, maverickmath, eien_meru, museman, cpt_ahab, mcd, Pandeism Fish, corvus, decoy hunches, Stuart$+, raincomplex, Tem42@
This group of 71 members is led by Oolong@+

The kilogram is shrinking!

No, seriously, it is. It all has to do with its definition. See, the kilogram is the only base unit in the SI measurement scheme that's based on an actual object. All of the other SI units are based on fundamental physical constants, which means they can be defined precisely, and that as measurement techniques improve, their exact values are automatically known with greater accuracy. For instance, one second is defined as "the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the cesium-133 atom." Perhaps a bit specialized, but it's a measurement that can be repeated by anyone with the requisite equipment, and the same value will be found. Same goes with the meter — one meter is defined as the distance that light travels in a vacuum in 1/299 792 458 seconds. But the kilogram is defined as the mass of a particular metal cylinder.

The prototype kilogram is stored in a vault in Paris, along with six official copies. Individual copies, termed "national prototypes", have been made for a number of countries by the Bureau International des Poids et Mesures (BIPM), the official body that defines SI units, but clearly no exact replica can be made, and so the precision of the kilogram as a measurement — which in theory should be treated as perfect — is limited by the precision of our ability to manufacture exact copies of a small hunk of metal. Further, these hunks of metal must be treated quite carefully; the BIPM's prototype and official copies are housed in individual bell jars to prevent them from wearing away or absorbing dust or gases from the air, but they can't be maintained perfectly.

And clearly they haven't been. Over the last hundred years, the prototype kilogram has lost some mass — about 50 micrograms altogether. A small amount, to be sure, but a significant one now that scales have been developed with precisions measured in zeptograms — 10⁻²¹ g. The national kilogram prototypes are not exactly the same mass as the original, either; they are measured against it every ten years, and they measurably differ from it. This isn't enough to offer up an excuse for putting on weight, but it is a problem when it comes to very precise measurements. Since the kilogram is defined as the mass of this particular object, that means that technically speaking, if you had an object with a mass of one kilogram a hundred years ago, and its mass has stayed exactly the same since then, it now has a mass of 1.000 000 05 kg. Craziness!
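To put that drift in perspective, here is the arithmetic as a quick Python sketch (the 50 microgram figure is just the one quoted above):

# Back-of-the-envelope: what a 50 microgram drift in the prototype means
# for an object whose true mass hasn't changed at all.
drift_kg = 50e-9                  # 50 micrograms, expressed in kilograms
relative_shift = drift_kg / 1.0   # relative to a 1 kg definition
print(relative_shift)             # 5e-08
print(1.0 + relative_shift)       # 1.00000005 kg, measured against the lighter prototype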


About the prototype

The kilogram wasn't always defined as the mass of a particular object. Originally, it was defined as the mass of one liter of water at 3.98°C at atmospheric pressure, which is where water reaches its maximum density. The trouble with that is two-fold: it's difficult to measure this precisely, as small changes in atmospheric pressure will change the density of water. Further, pressure is measured with the pascal, which is a derived unit, defined as kg/(m·s²) — thus making the definition of the kilogram circular. So it was redefined in 1889: a cylinder was created out of an alloy of 90% platinum and 10% iridium, 39 mm tall and 39 mm in diameter. And the kilogram was defined as the mass of that prototype.

The choice of materials used was quite deliberate; the platinum-iridium alloy used has a density of 21.5 g/cm³, similar to that of the densest materials known, iridium and osmium, each of which weigh in at around 22.6 g/cm³. The high density means the kilogram has a low volume, which reduces the effect of buoyancy when it is weighed in air. The small surface area reduces the impact of surface contamination of the object (it has a mirror finish to further reduce the surface area), and the metals used are relatively inert, which reduces — but doesn't eliminate — the accumulation of impurities from the atmosphere.
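To get a feel for why density matters, here is a rough sketch in Python; the air density and the comparison material (stainless steel) are assumptions of mine, not BIPM figures:

# Rough estimate of the buoyancy correction when weighing a 1 kg mass in air.
# Air density (~1.2 kg/m^3) and the stainless steel density (~8000 kg/m^3)
# are assumed, typical values used only for comparison.
rho_air = 1.2  # kg/m^3
for name, rho in [("Pt-Ir alloy", 21500.0), ("stainless steel", 8000.0)]:
    volume = 1.0 / rho               # m^3 occupied by 1 kg of the material
    buoyant_mass = rho_air * volume  # apparent mass reduction in air, kg
    print(name, round(volume * 1e6, 1), "cm^3,", round(buoyant_mass * 1e6, 1), "mg of buoyancy")

The denser cylinder displaces less air, so the correction that has to be applied when weighing it is roughly a third of what it would be for a steel weight of the same mass.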

The various national prototype kilograms are compared to the original when it's deemed necessary; the six copies stored along with it have been compared with it three times, along with a large number of the national prototypes, in order to assess the divergence of their masses. National copies that are used more frequently are naturally more subject to surface contamination; a careful cleaning process is applied before the prototypes are used, to help reduce the problem. In that process, a piece of chamois is carefully cleaned with a mixture of ethanol and ether; the kilogram is then rubbed with the chamois and solvent. Afterwards, it is sprayed with a jet of steam, carefully inverted to expose every surface. All in all, storing, recalibrating, and maintaining these objects is a lot of work, and it introduces errors into the process constantly. In contrast, any of the other base units can be realized to the precision of whatever equipment is used simply by knowing a definition — but to precisely determine how much one kilogram is requires physical access to either the prototype or one of its copies, and each step involved introduces more error.


A new definition

For quite some time, then, various ideas have been proposed to define the kilogram by means of some more fundamental physical constant, ideally obviating the need for comparisons with a single reference object and enabling greater precision to be used in measuring mass. There's certainly precedent for changing these definitions. The meter was first defined, during the creation of the metric system in the wake of the French Revolution, as 1/10 000 000 of the distance between the north pole and the equator on the meridian that passes through Paris. In 1889, after the establishment of the BIPM, a prototype meter was created — a bar of platinum-iridium with two lines on it, the distance being measured at precisely 0°C. It wasn't until 1960 that a standard not based upon a physical artifact was chosen.

There are four basic schemes being kicked around to accomplish this goal, and they all boil down to precisely tying the kilogram to either Planck's constant or Avogadro's number. Technically, all of these projects would involve attempting to precisely measure those two constants in relation to the current kilogram, to the limit of our ability to do so, and then setting precise values for those constants based upon those measurements. An exact definitional value for one of those constants in terms of the kilogram (indirectly in the case of Planck's constant, which doesn't directly involve mass) would conversely mean a definition of the kilogram in terms of a precise, unalterable fact describing the universe.

The Avogadro Project

The Avogadro Project aims to precisely determine Avogadro's number — defined as the number of atoms in 12 grams of carbon-12. Avogadro's number is an important concept in chemistry; it's the number of atoms or molecules in one mole of a substance, and the current best estimate for it is 6.022 135 3 × 10²³. So in order to count the number of atoms in a mole of a substance, the Avogadro Project measures the precise volume of a one kilogram sphere of silicon, using silicon balls very carefully manufactured to be the roundest objects on earth. X-ray interferometry is used to determine the distance between lattice planes in the silicon crystal, permitting physicists to determine, as closely as possible, the number of atoms in these spheres. Currently, a measurement accuracy of one part in 10⁷ is possible, after considering all of the various sorts of error introduced in the process, but it is hoped that ten times this accuracy will be possible within five years.
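In outline, the counting works like the following sketch (Python; the silicon density, lattice constant and molar mass are ordinary textbook values I've assumed for illustration, not the project's high-precision measurements):

# Sketch of the atom-counting idea behind the Avogadro Project.
density = 2329.0        # kg/m^3, approximate density of crystalline silicon
a = 5.431e-10           # m, approximate lattice constant of the silicon unit cell
atoms_per_cell = 8      # silicon crystallises in the diamond cubic structure
molar_mass = 0.0280855  # kg/mol, approximate molar mass of natural silicon

volume = 1.0 / density                  # volume of a 1 kg sphere, m^3
atoms = atoms_per_cell * volume / a**3  # atoms counted via the lattice spacing
avogadro = atoms * molar_mass           # atoms per mole
print(f"{avogadro:.4e}")                # roughly 6.02e23

The real measurement runs this logic with every quantity pinned down as tightly as possible, so that the uncertainty in the count, and hence in Avogadro's number, is as small as possible.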

The Watt Balance

The Watt Balance project has provided the most accurate measurements so far. It involves suspending a coil and a one kilogram weight on opposite sides of a balance. The coil is placed in a magnetic field, and electricity is run through it. By running a current through the coil, a force is generated to counterbalance the force produced by gravity acting upon the weight on the other side. Next, the coil is moved at a constant speed, producing a measurable voltage. Thus, first an electrical "realization" of the watt is produced, and then a mechanical one. These two realizations can be related via Planck's constant, thus producing a highly accurate measurement of the number and relating it to the kilogram.
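The bookkeeping can be written down compactly. In the weighing mode the electromagnetic force on the coil balances the weight (mg = BLI), and in the moving mode the same coil generates a voltage (U = BLv); multiplying the two relations eliminates the coil and field geometry, leaving mgv = UI. A minimal sketch, with invented numbers purely for illustration:

# Minimal sketch of the watt balance relation m*g*v = U*I.
# The voltage and current are, in practice, measured against quantum
# electrical standards, which is what ties the result to Planck's constant.
# All numbers below are invented placeholders, not real measurement data.
g = 9.80665    # m/s^2, assumed local gravitational acceleration
v = 2.0e-3     # m/s, coil velocity in the moving mode
U = 1.0        # V, voltage induced in the moving mode
I = 19.6e-3    # A, current needed to balance the weight in the weighing mode
m = U * I / (g * v)
print(m, "kg")  # close to 1 kg with these made-up numbers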

Superconducting levitation

This method works along essentially the same principles as the Watt Balance. In it, a superconductor of a known mass is placed within a superconducting coil. By running current through the coil, a magnetic field is generated that causes the superconducting mass to levitate. By levitating it at different positions and measuring the current required to do so, the magnetic flux can be calculated. Magnetic flux relates directly to Planck's constant, and because the force generated by the magnetically-induced levitation and the downward force of gravity must be equal, Planck's constant can thus be precisely related to the kilogram. The accuracy of this approach is about one part in 10⁶, making it a less likely candidate than the Watt Balance approach. Fortunately, though, the values generated by the two experiments are similar, which suggests that these approaches are working.

Ion accumulation

Much more speculative than the three approaches listed above, the ion accumulation approach involves shooting a beam of ions — likely gold-197, though other materials can be used — at an electrode. These ions have to absorb electrons in order to become neutral again, and the amount of current spent doing so can be measured. This is a slow process — about 1.8 grams of gold can be accumulated in a day's time, which means the experiment must run for about six days to achieve the target of ten grams. By determining exactly how many electrons were absorbed in neutralizing the ions and weighing the final product, theoretically the exact number of gold atoms transmitted can be counted. Current experimentation has yielded an error of about 1.5%, making it vastly less accurate than other approaches at present.
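The bookkeeping behind those figures looks roughly like this (a Python sketch; the 10 gram target and 1.8 grams per day come from the paragraph above, the constants are approximate standard values, and singly charged ions are assumed):

# Rough bookkeeping for the ion accumulation approach.
avogadro = 6.022e23  # atoms per mole, approximate
e = 1.602e-19        # C, elementary charge, approximate
molar_mass = 196.97  # g/mol, gold-197

target = 10.0                           # grams of gold to accumulate
atoms = target / molar_mass * avogadro  # ions to neutralise (assumed singly charged)
charge = atoms * e                      # total charge delivered, coulombs
days = target / 1.8                     # about 5.6 days at 1.8 g per day
mean_current = charge / (days * 86400)  # average beam current, amperes
print(atoms, charge, days, mean_current)  # ~3e22 ions, ~5000 C, ~10 mA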


So why are we waiting? Friggin' do it already, jerks!

The most accurate and widely-discussed efforts to redefine the kilogram are the Avogadro Project and the Watt Balance approach. Unfortunately, at present, their results differ from one another by about one part in 10⁶. That means that current efforts still can't match the accuracy of the current system of reference kilograms, generally regarded as mutually accurate to within one part in 10⁸. However aesthetically displeasing it is to depend upon the mass of a physical object as the definition of a fundamental unit of measurement, and however obvious it is that this definition isn't reliable over the long term, the definition of the kilogram produced by experiment is still not as precise, and moving to a less accurate standard is obviously not desirable.

Clearly, though, a definitional value for Avogadro's number or Planck's constant could be set. That way, the kilogram would be defined in relation to a physical constant of the universe — and more precise measurements of the kilogram could be obtained immediately as progress advances in these two main efforts. The problem is that in the interim, there would be a definition for the kilogram but no one would actually know exactly how much a kilogram is! Of course, it's relatively rare that anyone measures anything within an accuracy of one part in 10⁸ anyway; the scale at my grocery store probably doesn't measure my late-night purchases of bulk gummy worms to an accuracy of even one part in 10³. Even medical science doesn't depend on this level of certainty for measurement. Mostly, this is a question of concern to physicists; having a more precise value for the kilogram would make quite a few other values considerably clearer for their purposes.

The definition of the kilogram is a major topic in the world of metrology (that is, the science of making measurements), even if it's not terribly relevant to most people. A meeting of the BIPM to continue discussing current research is planned for 2007; most likely, no new definition will be chosen, however, until these research projects are accurate enough that definitions can be formulated that simultaneously retain the current standard value for the kilogram while also relating it to a more fundamental physical phenomenon.


References

General information on the Système international d'unités taken from the BIPM website. (http://www.bipm.org/)
Girard, G., 1990. "The Washing and Cleaning of Kilogram Prototypes at the BIPM". (http://www.bipm.org/utils/en/pdf/Monographie1990-1-EN.pdf)
United Kingdom National Physics Laboratory, 2006. "Avogadro Project". (http://www.npl.co.uk/mass/avogadro.html)
Australian Centre for Precision Optics, 2006. "The Avogadro Project". (http://www.tip.csiro.au/IMP/Optical/avogadro.htm)
Eichenberger, A., Jeckelmann, B., and Richard, P., 2003. Metrologia. "Tracing Planck's constant to the kilogram by electromechanical methods".
BIPM, 2006. "The Principle of the Watt Balance". (http://www.bipm.fr/en/scientific/elec/watt_balance/wb_principle.html)
United States National Institute of Standards and Technology, 2005. "NIST Improves Accuracy of 'Watt Balance' Method for Defining the Kilogram". (http://www.nist.gov/public_affairs/releases/electrokilogram.htm)
Physikalisch-Technische Bundesanstalt, 2004. "Ion Accumulation Experiment". (https://www.ptb.de/en/org/1/12/124/ionenex.htm)
Mills, Ian M. et al., 2005. Metrologia. "Redefinition of the kilogram: a decision whose time has come".
Kestenbaum, David, 1998. Science. "Standards: Recipe for a Kilogram".
United Kingdom National Physics Laboratory, 2006. "Frequently asked questions — mass and density". (http://www.npl.co.uk/mass/faqs/kilogram.html)
I'm not sure why I love gummy worms so much.

The Australian cottony cushion scale insect, a citrus pest, was introduced to the United States in 1868. Much later, DDT was used in an attempt to eradicate this pest, yet seemingly paradoxically the scale population increased. In fact, the pest was already partially controlled by a predator - the ladybird - and the indiscriminate use of the pesticide had removed this natural check on scale numbers in accordance with Volterra's principle:

An intervention in a prey-predator system that removes prey and predators in proportion to their population increases prey populations.

Which, considering the scale insects as prey, the ladybirds as predator and DDT as the intervention, explains the observed increase in scale population.

As phrased, the principle illustrates how increased intervention favours prey levels; thus reduced intervention favours predators. This was the result Volterra, a post-first world war Italian mathematician, had been attempting to explain: during the war, the proportion of predatory fish caught in the Adriatic sea had increased, due to a decline in fishing (the intervention).

Volterra's principle is fairly simple to capture mathematically, modelling the interaction within a prey-predator system by what have become known as Lotka-Volterra equations. Letting U be the number of prey and V the number of predators, we can formulate the conservation equations in words as follows:

Rate of growth of U = Net rate of growth in the absence of predation - rate of loss due to predation - rate of loss due to fishing
Rate of growth of V = Rate of growth due to predation - rate of loss in absence of predation - rate of loss due to fishing

Which gives rise to equations of the form

dU/dt = αU - γUV - pEU
dV/dt = eγUV - βV - qEV

Where α, β, γ, e are positive constants: α denoting the growth rate of the prey population; -β the growth rate of the predators (negative, indicating decline without prey; this model assumes an exponential decay); γ the decline in prey due to predation (with a linear relation between the quantity of prey and the level of predation); e the increase in predators due to successful predation. Further, E indicates the amount of effort invested in fishing, with p and q being positive quantities indicating the 'catchability' of each type of fish; again the catch is assumed to increase linearly with effort.

We thus have a two variable system of differential equations. The equations describe how the variables change with time; we describe a constant state as a steady state. Thus, to be a steady state, the pair of population values (U,V) must be unchanging- that is, each of dU/dt and dV/dt, their rates of change, must be zero:

dU/dt = αU - γUV - pEU = 0
dV/dt = eγUV - βV - qEV = 0

Or, factorising, a population pair (U*,V*) will be a steady state if

U*(α - γV* - pE) = 0
V*(eγU* - β - qE) = 0

A trivial steady state is thus given by (0,0): obviously, if there are no fish of either type, then this will continue to be the case. Seeking a non-trivial case, we can safely assume that prey levels are non-zero, and thus from the first of the above two equations we conclude that (α - γV* - pE) = 0. Rearrangement gives V* = (α - pE)/γ, which by the second equation leads us to a value of U* = (β + qE)/(eγ).

By inspection, therefore, evaluating these expressions for greater values of E gives greater values of U* and lesser values of V*. It also emerges that the model is only biologically realistic if E < α/p; otherwise it predicts negative predator levels!

Moreover, we may compute the ratio of prey to predators caught at steady state:

R = pEU*/(qEV*) = p(β + qE)/(eq(α - pE))

Which, giving rise to Volterra's principle, is an increasing function of E.
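To see the principle numerically, here is a quick Python sketch that evaluates the steady-state formulas above for increasing fishing effort E (the parameter values are arbitrary choices of mine, picked only so that E stays below α/p):

# Evaluate U* = (beta + q*E)/(e*gamma), V* = (alpha - p*E)/gamma and the
# catch ratio R for a few values of the fishing effort E.
alpha, beta, gamma, e_conv = 1.0, 0.5, 0.01, 0.1   # arbitrary illustrative values
p, q = 0.3, 0.2

for E in (0.5, 1.0, 2.0, 3.0):
    U_star = (beta + q * E) / (e_conv * gamma)
    V_star = (alpha - p * E) / gamma
    R = (p * E * U_star) / (q * E * V_star)
    print(E, U_star, V_star, R)
# U* rises and V* falls as E increases, and the ratio R rises with it:
# heavier intervention favours the prey, which is Volterra's principle.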

Limitations

The above analysis describes the effect of varying the level of intervention in a system at steady state, and thus implicitly assumes that such a steady state has been reached. However, it is entirely possible for dynamical systems such as those governed by Lotka-Volterra equations to have a steady state which is not asymptotically stable; that is, the population values are free to oscillate close to the steady state rather than inevitably settling down. However, all is not lost should these oscillations turn out to be periodic, repeating in a predictable way over time. Then, further analysis confirms that Volterra's principle holds for the mean population values, i.e., by averaging to take account of the fluctuations. Is the assumption of periodic oscillation itself reasonable? Often it holds in the lab, but rarely in the field, although data for the interaction of hare and lynx populations provide a real-world example of such behaviour, along with the examples already discussed.
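That averaging claim is easy to check numerically. The sketch below (assuming NumPy and SciPy are available, with the same arbitrary parameters as before) integrates the equations for two levels of effort and compares the time-averaged populations:

# Integrate the harvested Lotka-Volterra equations and compare time-averaged
# populations for two fishing efforts.  The point is only that the averages
# behave as Volterra's principle predicts; parameter values are arbitrary.
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, gamma, e_conv = 1.0, 0.5, 0.01, 0.1
p, q = 0.3, 0.2

def rhs(t, y, E):
    U, V = y
    return [alpha*U - gamma*U*V - p*E*U,
            e_conv*gamma*U*V - beta*V - q*E*V]

for E in (0.5, 2.0):
    sol = solve_ivp(lambda t, y: rhs(t, y, E), (0, 400), [400.0, 80.0],
                    t_eval=np.linspace(0, 400, 4000))
    U_mean = np.trapz(sol.y[0], sol.t) / sol.t[-1]
    V_mean = np.trapz(sol.y[1], sol.t) / sol.t[-1]
    print(E, U_mean, V_mean)
# The populations oscillate, but the higher effort still gives a higher
# average prey level and a lower average predator level.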




Reference: MA30047 Mathematical Biology 1, University of Bath Mathematics department.

Because of the phenomenon of libration, only 41% of the moon is actually always hidden from observers on Earth.

If you were to sit on the surface of the sun and look at the Earth through a telescope, you would observe it revolving around the centre of the solar system and rotating around its own axis. The side of the sphere facing you (the side experiencing daylight) would consist of a constantly shifting selection of Earth’s surface as the planet rotates, with new areas becoming lit in the west as sections in the east fall into shadow. You could watch as the eastern seaboard of North America came into illumination, then passed back into darkness as it spun away to the shadowed side of the planet once again.

By contrast, when you look up from Earth at the moon, basically the same face is presented all the time. Just as one side of the moon is always in view, there is a ‘dark side’ that is always hidden from the vantage point of an observer on Earth. This is because of a phenomenon called tidal locking. The moon rotates on its own axis at just the right rate so that, as it orbits the Earth, the same side is presented. There are, however, minor oscillations in this presentation. This is called libration, which derives from the Latin word meaning ‘to sway.’ You can see an animation of the phenomenon here: http://www.photoastronomique.net/geant/041200.html . It arises both because the moon’s axis is slightly inclined relative to the plane of its orbit around the Earth and because the moon’s orbit around the Earth is slightly eccentric. Because of this cumulative rocking motion, it is actually possible to see 59% of the moon’s surface from the Earth.

This node has been modified from a post on my blog, at: http://www.sindark.com/2006/09/02/only-41-of-the-moon-is-dark/

Quite simply put, the Coulomb force is the force between charges. It was first established by Charles-Augustin de Coulomb¹ in 1785. Coulomb established that the force F diminishes with the square of the distance r, or

F = k r⁻²

with k a constant. This is exactly the same as the relation for gravity. Some people have bothered to find out whether this 2 is exactly 2, or almost 2, and the result of very accurate measurements was that it differs from two by less than one part in a billion.²

The next step is to find this constant k, which depends on the charge of the objects. The total form of the equation now reads

F = (q₁ q₂ r⁻²)/(4 π ε₀)
with ε₀ the permittivity of vacuum and q₁, q₂ the charges. The constant factor 1/(4 π ε₀) has a magnitude of 9.0×10⁹ N·m²·C⁻². The form of the equation is again the same as the equation describing gravity. There are, however, two important differences:
  • Firstly, the Coulomb force is either repulsive (like charges) or attractive (unlike charges), while the gravitational force is always attractive.
  • Secondly, the constant in the Coulomb force is much larger than that in the gravitational force. This means that the force between two charges of 1 coulomb (roughly the total amount of charge carried by the electrons in a tenth of a milligram of matter) is more than 20 orders of magnitude stronger than the gravitational attraction between two objects of 1 kg at the same distance.
In practice, the positive and negative charges are tightly bound, due to the large magnitude of the Coulomb force. Charges of 1 millionth of a Coulomb are already considered large. Hence, the long-distance interaction between matter is dominated by the gravitational force, while at short distances, the Coulomb force rules supreme. Indeed, it is the Coulomb force keeping ordinary objects, such as you, together.
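A quick numerical comparison makes the difference in scale concrete. This is only a sketch: the one metre separation is an arbitrary choice, and the constants are the usual approximate textbook values.

# Compare the Coulomb force between two 1 C charges with the gravitational
# force between two 1 kg masses at the same (arbitrary) separation of 1 m.
k = 8.99e9     # N m^2 C^-2, Coulomb constant 1/(4 pi eps0), approximate
G = 6.674e-11  # N m^2 kg^-2, gravitational constant, approximate
r = 1.0        # m

F_coulomb = k * 1.0 * 1.0 / r**2   # two charges of 1 C each
F_gravity = G * 1.0 * 1.0 / r**2   # two masses of 1 kg each
print(F_coulomb, F_gravity, F_coulomb / F_gravity)  # ratio of roughly 1e20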

Physically, the Coulomb force is a special case of the general electromagnetic force. The Coulomb force describes only the electrostatic part of this force. This means it is only completely valid if all the particles are standing completely still. However, it is still a good approximation if the speeds involved are much smaller than the speed of light. This last condition is often satisfied, making the Coulomb force a very useful approximation in practice.

  1. http://scienceworld.wolfram.com/physics/CoulombForce.html
  2. http://en.wikipedia.org/wiki/Coulomb's_law

Occasionally a science article becomes popular in the general populace for a week or two. Usually this occurs because it has pretty pictures or makes sensational claims. In May of 2006 a story of the latter sort appeared, raising the possibility of human and chimpanzee ancestors continuing to mate after speciation had occurred.¹ Most people unwillingly conjured up images of modern humans and chimpanzees mating, which is a shame, because the story reveals a hole in our common conception of a species, and provides an important clue to the full story of how humans came to be humans.

The chimpanzee/human mating story goes like this: isolated populations of what would later become chimpanzees and humans evolved into separate species, then creatures from those species interbred, and the resulting hybrid species again separated and offspring from the two different lines eventually evolved into chimpanzees and humans (among other creatures). The most compelling argument for this theory is the fact that the chromosome of ours that diverged from chimpanzees most recently is the X chromosome -- and since X is most important for reproduction we would need a compatible X chromosome more recently in order to interbreed. If this theory is correct (and I think the evidence is strong that it is) then most of our ancestors would have diverged from chimpanzees at an earlier date, with a few chimpanzee grandparents sneaking in every ten or hundred thousand years. The timeline is so vast that it is hard to imagine pre-humans not breeding with pre-chimpanzees occasionally, as long as it was still biologically possible.

We should prepare ourselves for more and more of these sorts of stories: evolutionary convergence as well as the divergence traditionally associated with Darwinian speciation. The reason that this is not already a common view is the “species” problem. A species has traditionally been defined as a group of organisms that can interbreed. This rough definition does not adequately address anomalies like this hybridization process, which is found fairly frequently in the animal kingdom: polar-grizzly bears, lion-tigers, bison-cattle, etc. This is because a species is not a real thing that exists, like a person or a rock or a computer is; a species is merely an arbitrary label that people use to describe certain groups of organisms. As such it has only scholastic meaning. Humans and chimpanzees did not separate and interbreed. A group of creatures interbred in Africa for a few million years in distinct groups that occasionally intermingled. Eventually they stopped intermingling, and the descendants of one group became creatures we call humans while the descendants of another group became chimpanzees (there were probably other groups too, judging by all of the primitive hominid fossils we’ve found in Africa, but they have no modern survivors).

This pattern of divergence and convergence lends credence to an alternative explanation of human evolution currently challenging the “Out of Africa” hypothesis (Not Out of Africa²): regional continuity. The current theory holds that humans completed most of their evolution into modern Homo sapiens in Africa, then migrated out of the continent and diversified slightly (white skin in Europe, hairlessness in Asia, etc.) where they eventually settled. Alan Thorne’s new theory holds that creatures we now consider pre-humans, such as Homo erectus, ought to be considered members of the human species: humans that left Africa much earlier and evolved independently into modern humans in the various regions in which they settled. Migratory patterns and occasional meetings between these diverging tribes could have brought them into contact with each other every couple of hundred or thousand years, and highly advantageous new mutations (such as a slightly larger brain size) could then be passed throughout the species all over the globe. In essence we evolved all over the globe together, almost simultaneously in geological time. And Thorne has powerful evidence on his side: a 60,000-year-old modern human found in Australia, much earlier than humans should have arrived there under the older theory.

New evidence from another place fits in with both “Out of Africa” and “Regional Continuity”, but it increases the possibility that Thorne is right. A study analyzing regional variations in the human genome, covering Europeans, Africans, and Asians, found wide variation between regions in genes controlling traits that are especially helpful in differing environments (for example, adult lactose tolerance in Europeans or malaria resistance in Africans).³ These genetic divergences are only about 10,000 years old, implying a common genetic pool before that (this also coincides with the explosion of the human population following the last ice age). This indicates that if humans did complete most of their evolution in Africa and then expand outward, settling and independently evolving only these traits, they must have done so very recently. The problem is, Thorne has a 60,000-year-old skeleton that argues otherwise.

Human ancestors probably left Africa a lot earlier than we currently think they did, and they probably looked a lot less like modern humans than we think they did. However, they retained the ability to interbreed, and the evidence of human/chimpanzee hybridization, along with the known tendency of hominids (early and modern) to migrate more than other animals, suggests that our ancestors evolved into us globally, in sync with each other.


1. http://www.newscientist.com/channel/being-human/mg19025525.000.html and
http://www.boston.com/news/nation/articles/2006/05/18/humans_chimps_may_have_bred_after_split/

2. http://www.discover.com/issues/aug-02/features/featafrica/

3. http://www.seedmagazine.com/news/2006/03/human_genome_still_evolving.php