This is Everything2's science writing group, existing to encourage, facilitate and organise the writing and discussion of science on this site. Members are usually willing to give feedback on any writing on scientific topics, answer questions and correct mistakes.

The E2_Science joint user is here to make it easier to collectively edit and maintain the various indexes of science topics: Scientists and all its sub-indexes, physics, astronomy, biochemistry and protein. More may follow. It also collects various useful links on its home node.

Note that there is also a separate e^2 usergroup for the discussion of specifically mathematical topics.


Venerable members of this group:

Oolong@+, CapnTrippy, enth, Professor Pi, RainDropUp, Razhumikin, Anark, The Alchemist, tom f, charlie_b, ariels, esapersona, Siobhan, Tiefling, rdude, liveforever, Catchpole, Blush Response, Serjeant's Muse, pimephalis, BaronWR, abiessu, melknia, IWhoSawTheFace, 10998521, sloebertje, getha, siren, pjd, dgrnx, flyingroc, althorrat, elem_125, DoctorX, RPGeek, redbaker, unperson, Iguanaonastick, Taliesin's Muse, Zarkonnen, SharQ, Calast, idan, heppigirl, The Lush, ncc05, Lifix, Akchizar, Palpz, Two Sheds, Gorgonzola, SciPhi, SyntaxVorlon, Redalien, Berek, fallensparks, GunpowderGreen, dichotomyboi, sehrgut, cordyceps, maverickmath, eien_meru, museman, cpt_ahab, mcd, Pandeism Fish, corvus, decoy hunches, Stuart$+, raincomplex, Tem42@
This group of 71 members is led by Oolong@+

Mites are arachnids (that is, spiders and their relatives) of the order Acari or Acarina, and are among the smallest members of the arachnid class.

They are also the most widely distributed, the most diverse in form and habit, and, because they carry disease, the most dangerous to human beings. More than 30,000 species have been described, but many authorities think that there may be as many as 500,000 species yet to be discovered. They have been recorded as high as 5,000 meters on the slopes of Mount Everest and as deep as 5,200 meters in the northern Pacific Ocean. More than 50 terrestrial species are known from the Antarctic. A few mites have been found drifting high in the atmosphere. Most species are free-living, that is, not parasitic on other life forms.

Everyone knows what they look like: the undivided and unsegmented body, with eight legs and sucking mouthparts. All are fluid feeders. Some suck animal or plant tissue fluids; others liquefy solid food by injecting enzymes. There are separate males and females, though some species can reproduce by parthenogenesis; most hatch from eggs. Most have one or more immature stages, during which they may have six or even four legs.

The ones we have problems with are the parasitic species, especially the ticks, relatively large mites (2 to 6 mm) which suck blood from vertebrates, including ourselves. Besides being unbelievably disgusting (finding a huge, blood-gorged tick with its head embedded in one's own body has few parallels for sheer ugliness), these mites carry diseases: viruses (encephalitis, Colorado tick fever), rickettsias (Rocky Mountain spotted fever, scrub typhus, Q fever), spirochetes (relapsing fever and Lyme disease) and sporozoans (babesiasis). Cattle, sheep and horses can also be infected with diseases by ticks, which may also carry tapeworms. Mites have also decimated domestic honey bees.

They attack our plants too. No indoor gardener is ignorant of the dreaded spider mite, and food crops are also attacked by various mite species. Further, there are grain mites, sugar mites, and others, which attack our stored foodstuffs.

On and on. Mites in household dust are a major cause of dust allergies. The "chiggers" of the southern United States are immature stages of mites. The skin disease scabies is caused by a mite which completes its entire life cycle burrowed into the skin, and is passed from person to person by contact. Mange on domestic animals is caused in a similar fashion.

The other side of the story

Not to be too self-centered, we have to remember that the vast majority of mite species have little to do with us or our plant or animal dependants. Some are beneficial.

  • Enormous mite populations are present in leaf litter and soil, and play an important role in that ecology. Mites are the most numerous arthropods in soil. They feed on plant and animal material, and are probably the most important single animal (except for the microfauna) in the breakdown of organic material. This function alone undoubtedly outweighs all the harm mites do us otherwise.

  • Cheese mites are common in stored food, damp flour, old honeycombs, and insect collections. Mite-infested cheese will be more or less covered with a grey powder. There are some European cheeses into which a culture of cheese mites is deliberately introduced. Altenburger cheese is one. The mites are said to impart a characteristic "piquant" taste. When the cheese is covered with the tell-tale greyish powder, consisting of enormous numbers of living and dead mites, cast skins, and feces, it is then ripe and particularly valuable.

PHYLUM: Arthropoda, SUBPHYLUM: Chelicerata, CLASS: Arachnida, ORDER: Acarina
Resources:
Pearse, Buchsbaum, Living Invertebrates, Blackwell Scientific Publications, Boston, Massachusetts, 1987;
Britannica;
Wonderful color pictures by David Walter, University of Queensland, at http://www.uq.edu.au/entomology/mite/mitetxt.html (November, 2002)

Outer space has been a lifelong interest to me, and I recently decided to take on some short science courses in planets and astronomy at Britain's Open University. Faced with the complexities of cosmology and gamma-ray, infrared, radio, ultraviolet and X-ray astronomy, as well as the sheer vastness of the universe, with its billions of stars, galaxies, nebulae and other objects, I soon decided my main focus would be the solar system itself: Earth's back yard, so to speak.

I thought I would be on fairly safe ground in memorising the nine planets: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune and, of course, Pluto. As my studies continued, however, I began to learn about trans-Neptunian objects (TNOs), which orbit the Sun at a greater average distance than Neptune. Clyde Tombaugh discovered Pluto in 1930; had he discovered it today, it would have been classed as a TNO, not a planet. Since 1992, over a thousand such objects have been found, and Pluto came under increasing pressure to forgo its planetary status, saved only by the fact that it was larger than any TNO then discovered.

In January 2005, Michael Brown discovered 2003 UB313, which had been overlooked in routine observations in 2003. He gave it the informal nickname "Xena", although it was later named Eris by the International Astronomical Union (IAU). In April 2006, the Hubble Space Telescope measured its diameter as 2,400 km, larger than Pluto's, and suddenly Pluto's status was under threat: either 2003 UB313 became a tenth planet, or Pluto lost its own status. The IAU was forced to decide upon a formal definition of the term "planet" to clear up the controversy and confusion, and its decision was announced on 24th August 2006. Under the new definition Pluto was no longer a planet: the eight other planets would be considered the "classical planets", while Pluto, 2003 UB313 and 1 Ceres (previously classified as an asteroid) would be known as "dwarf planets".

In July 2015, NASA's New Horizons mission performed a close fly-by of Pluto and its system of moons. An initial press conference on 15th July showed two exciting images of Pluto and its largest moon, Charon, already raising intriguing questions; no doubt more questions and answers will follow as the mission continues. A heart-shaped region visible on some of the first images of Pluto has been named Tombaugh Regio after the world's discoverer. The surfaces in the Pluto image are young, showing no visible craters, and Charon has a chasm 4 to 6 miles deep which reminds me of the Valles Marineris on Mars.

There are often a number of different levels of description, or explanation, at which we can look at any given problem. In many cases each level of explanation makes a lot more sense when you are aware of one or more of the more fundamental ways of describing a phenomenon. Hence the basic laws of chemistry follow very naturally from a physics-based description of atoms, molecules and so on; much of biology becomes comprehensible only when you start to understand the chemistry that it is based on; psychology starts to make more sense once you understand a bit about the biology of the brain; theories of economics that take into account realistic ideas about human psychology have started supplanting simpler theories that abstract people as rational self-interested actors, and so on.

One perspective on the relationship between different levels of explanation is reductionism - the idea that chemistry is really just applied physics, biology is just applied chemistry, and so on. It's a very powerful approach which has given scientists a great many fresh insights over the centuries.

In practice, however, each new layer of description tends to include approaches and concepts which don't seem to follow from the previous layer in any obvious way. It is true for example that the whole periodic table follows very naturally from quantum physics, but if you insisted on always looking at things from a strictly physics-based perspective, you would probably miss an awful lot of interesting chemistry, and make a great deal of extra work for yourself in the process. Part of the reason for this is emergence - the tendency for behaviour to manifest as a result of known laws which may well be deterministic in principle, but from which, in practice, nobody would have predicted that behaviour without exceedingly careful consideration.

Having a description of a phenomenon in terms of the more basic things that give rise to it is satisfying, and sometimes has great practical implications, but there is also a danger of being led astray by applying what is supposed to be a more fundamental level of description to things which are actually better explained at a higher level. It might be technically accurate, for instance, to say that a violent incident occurred thanks to an excess of adrenaline and cortisol in the assailant's brain, but it would probably be a lot more helpful to say that the fight occurred because the other guy was going out of his way to wind him up. Interestingly, it has been shown that people tend to rate an explanation of some observed behaviour as a great deal more convincing if it is accompanied by a bit of neuroscience, even if the neuroscience in question is completely irrelevant.

This difficulty in choosing the best level of explanation to work on has extremely important practical consequences, largely because your perspective on a problem constrains the sorts of strategies you are likely to consider for dealing with it. If you view depression as a 'chemical imbalance', the obvious thing to do is to try re-balancing the chemicals in someone's head. If you take a neurophysiological perspective, you might be more interested in what structures and systems within the brain might be leading to undesirable levels of those neurotransmitters occurring in the patient's brain on an ongoing basis. If you see it through the lens of psychology, you are likely to want to try looking for a psychological fix, perhaps trying to see if there are root causes in their life events or the attitudes they take to them that might be made better. A sociological perspective would look at the social relations between the person and those around them - perhaps they are unhappy because modern society lacks the structures that would fulfil their needs.

The different levels of description feed into each other, sometimes in very subtle ways, and it is not always easy or even desirable to pick them apart. Neither does the applicability of one level of explanation imply that solutions based on quite different perspectives will not also be helpful. Attacking symptoms can be very valuable, after all, and while aiming to fix things at root might be the best long-term solution in principle, it only works if you can both correctly identify and do something about the basic problem, so it is not always clear where our efforts are best spent.

The sheer quantity of learning available to modern science makes it very difficult for researchers to get a handle on every possible angle on a problem - there is an inevitable tension between specialisation and interdisciplinarity, closely related to the tension between reductionism and holism. We might like to see all of nature in terms of an overarching scheme or pattern of organisation, and historically many scientists and philosophers have been driven - sometimes fruitfully, sometimes misleadingly - by the urge to reveal such a scheme. However, while there are often lessons to learn and principles to apply when shifting between perspectives on the world, it now seems very unlikely that any detailed 'theory of everything' will manage to be a theory of music and economics as well as subatomic particles and things moving through space.

The availability of different levels of description has a tendency to lead people astray when it is not obvious how to marry up the different levels, which has notoriously been the case when it comes to the mind/body problem. We can describe the brain (at least somewhat) and we can describe the mind (as we experience it), and it's clear that they're connected in some way, but the specifics of how one might give rise to the other are really not at all obvious, leading some commentators to insist that they must be fundamentally different sorts of thing. In philosophical terms, this is confusing ontology with epistemology - drawing conclusions about what actually is from what we seem to be able to know. It's a seductive trap, that people fall into in all sorts of contexts - 'I can't imagine how this could possibly happen, therefore it can't possibly happen.' Science has repeatedly shown that the unimaginable can become plausible - and ultimately unavoidable - when new ways of looking at things present themselves. I believe that cognitive scientists and others are steadily closing this particular explanatory gap, but navigating the different levels of explanation available for mental/neurological phenomena will be a problem of practical as well as philosophical interest for many years to come.

Further reading: The best discussion of this that I've come across is in Douglas Hofstadter's 'I am a Strange Loop', mainly in chapter 2, although I hadn't seen that when I wrote this. Another few months later, I came across this paper by Uri Wilensky and Mitchel Resnick on the topic.

Benford's law is a remarkable statistical law concerning the occurrence and distribution of numbers in a body of text or set of data.

The law states that if you take a corpus of text and tally the first digit of every number in that text, the distribution of those leading digits is far from uniform, falling off steadily from 1 down to 9.

More precisely - for any substantial body of text, numbers beginning with the digit 1 will occur roughly twice as often as numbers beginning with the digit 2, which in turn occur noticeably more often than numbers beginning with the digit 3, and so on all the way down to numbers beginning with the digit 9.

The predicted frequency of leading digit d is log10(1 + 1/d): numbers starting with the digit 1 make up about 30% of all instances, each successive digit is rarer than the last, and numbers starting with the digit 9 occur in only around 5% of all instances.

At first this law seems very counter-intuitive. The natural assumption is that all digits should occur with around the same frequency and that their distribution should be even. The other remarkable thing about this law is that not only does it fit random corpus of text, but it can be applied to most types of more formal data, even that which is logical, controlled and sequenced. This includes data such as electricity bills, stock prices, lengths of rivers and even mathematical constants.
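To see the law in action, here is a minimal Python sketch (my own illustration, not taken from any of the sources above) that tallies the leading digits of two recursively generated sequences and compares the observed frequencies with the Benford prediction log10(1 + 1/d):

  import math
  from collections import Counter

  def leading_digit(n):
      # First decimal digit of a positive integer
      return int(str(n)[0])

  def benford_check(numbers, label):
      counts = Counter(leading_digit(n) for n in numbers)
      total = sum(counts.values())
      print(label)
      for d in range(1, 10):
          observed = counts[d] / total
          predicted = math.log10(1 + 1 / d)  # Benford's prediction
          print(f"  digit {d}: observed {observed:.3f}, predicted {predicted:.3f}")

  # Powers of 2 (a doubling population) and the Fibonacci sequence
  benford_check([2 ** k for k in range(1, 1001)], "powers of 2")

  fibs = [1, 1]
  while len(fibs) < 1000:
      fibs.append(fibs[-1] + fibs[-2])
  benford_check(fibs, "Fibonacci numbers")

Run over a thousand terms, both sequences come out close to the predicted frequencies, with the digit 1 leading about 30% of the time.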

The reason for this law is the overwhelming prevalence of exponential and recursive systems in nature. One example is population growth, which can be modelled recursively, so that the total population fits an exponential curve. If we consider bacteria, each bacterium divides into two, which in turn divide into four, and again into eight - the population grows faster and faster. The reason Benford's law is so prevalent is that these kinds of systems pop up everywhere in nature, even where we don't expect them. The pattern was noted as far back as 1881, when the American astronomer Simon Newcomb (sometimes credited with discovering Benford's law) noticed that in books of logarithm tables, the earlier pages, containing numbers starting with 1, were more worn than the other pages.

With any system where numbers grow exponentially, the way our number system works means that the system spends most of its time at a total beginning with the digit 1 or 2. This might be hard to envision, but it becomes clear if you look at a logarithmic scale. On a logarithmic scale (of base 10) the distance between 1 and 2 is equal to the distance between 10 and 20, and noticeably larger than the distance between 2 and 3 (or 20 and 30). An exponential curve plotted against a logarithmic scale becomes a straight line, climbing at a constant rate, so the fraction of time the system spends with leading digit d is simply the width of the interval between d and d+1 on that scale: log10(d+1) - log10(d), which equals log10(1 + 1/d). For d = 1 that is log10(2), about 0.301 - exactly Benford's 30%.
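The same point can be checked numerically. This short sketch (again my own toy illustration) grows a quantity by 1% per step and records which leading digit is showing at each step:

  import math
  from collections import Counter

  x = 1.0
  counts = Counter()
  steps = 100_000
  for _ in range(steps):
      counts[int(str(x)[0])] += 1  # leading digit of the current total
      x *= 1.01                    # steady exponential growth
      if x >= 10.0:
          x /= 10.0                # rescale; the leading-digit pattern repeats each decade

  for d in range(1, 10):
      print(d, counts[d] / steps, math.log10(1 + 1 / d))

The observed fractions settle onto log10(1 + 1/d), with the quantity spending about 30% of its steps on a leading 1.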

What I like about Benford's law is that it shows how prevalent exponential and recursive processes are in all aspects of nature. It shows clearly that linear systems occur far less often than we naively expect. It also shows that our intuitive ways of perceiving the world (such as the idea that the mean is always the fairest average) can be flawed or skewed, and may not really tell us what we think they do.

There are many myths about global warming running around - too many to address completely in a single node. Two selected myths, one general and one highly specific, are addressed here.

The Impact of CO2

Carbon dioxide directly contributes a very small fraction of the greenhouse effect; water contributes far, far more. Of course, for global warming to arise from the greenhouse effect, there needs to be a change in the greenhouse effect. Does the added carbon dioxide contribute a significant amount of the change? Not directly. But indirectly, yes, it does. This is because of the water cycle.

Water evaporates, forms clouds, rains or snows down, and evaporates again in an endless cycle. In cold places, the balance of this cycle lies heavily in the solid state - ice. In warmer places, it favors the liquid and gas phases more, and in the hottest places, the capacity for water in the air is very high. This gas-phase water is what contributes to global warming. Not even so much clouds, as they increase albedo, but simple high humidity. Not relative humidity, but absolute humidity.

The water cycle is reasonably stable. If you just throw a bunch of water in the air, it will eventually condense out. The same applies to intermittent heating or cooling.

If, however, there is a steady change in the temperature - say, a small rise (from any source of climate forcing) - more water gets into the air and stays there. Water, as noted above, is a greenhouse gas, so the additional water added to the air in turn warms things up, which in turn adds more water, and so on. How far does it go? If each kilo of water added to the air is directly responsible for adding r more kilos of water, then the 'generations' of this process form a geometric series. If r is less than 1, the process doesn't run away completely, and the total change from an initial forcing F will be F/(1-r).

r can be very close to 1, however, without causing this runaway. If it is 0.99, then the total forcing will be 100F, and 99F of that will be from water. But without the 1F to start things off, the 99F from water wouldn't be there. This is a classic case of amplification.
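For anyone who wants to check that arithmetic, here is a toy sketch in Python (the numbers are just the example values above, not measured climate parameters):

  # Geometric-series amplification: each 'generation' of added water
  # vapour contributes a factor r of the previous one.
  F = 1.0   # initial forcing, in arbitrary units
  r = 0.99  # feedback fraction; must stay below 1 to avoid a runaway

  total, term = 0.0, F
  for _ in range(5000):
      total += term
      term *= r

  print(total)        # the partial sum approaches...
  print(F / (1 - r))  # ...the closed form, 100*F when r = 0.99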

Now, the real climate is more complicated than that, but the general concept applies. A small forcing from carbon dioxide, or methane, or changes in solar input, or albedo changes, is amplified by the water cycle to much greater significance.

I'm not a climatologist, I'm a physicist. This isn't a statement of "this is how it is, in detail"; it's a statement of "here's an effect you have to be aware of if you're going to understand what's going on here". But it should be enough to dispel the confusion over how such small numbers can produce such large effects.

The Notorious Climategate Email concerning "Diagram for WMO Statement"

The relevant portion of this email is: I've just completed Mike's Nature trick of adding in the real temps to each series for the last 20 years (ie from 1981 onwards) amd (sic) from 1961 for Keith's to hide the decline.

If one knows what the data set is, and what the decline is, then this statement presents no ethical issues at all.

The data set is a collection of tree ring specimens. Under certain specific conditions - principally on mountains - the growth of the trees tracks temperature reasonably tightly - tightly enough that if you average a number of specimens, you get a worthwhile measure of temperature.

Now, some but not all tree ring samples, after tracking the thermometer-based record closely for a long time, suddenly diverged in a downward direction in the 1960s. Only certain tree samples were appropriate measures of temperature in the first place, so it's understandable that these trees could acquire a new limiting factor on their growth, just like most other trees in the world. Contrary to some claims, the thermometers showed no such decline; it was just the tree rings. There are (as far as I am aware) no solid explanations for why these particular trees diverged at that particular time. One suspicion put forth by climatologists is a shift from temperature-limited to water-limited growth (3). I would also suspect fallout from nuclear tests, and consider the possibility of some other artifact of human interference that only started at that time. Regardless of the cause of the decline, be it systematic or random, if you're drawing a smoothed trendline (as they were), it's a really bad thing if an endpoint is an outlier - it has an oversized effect on the trajectory of the curve.

Since we know what the actual temperatures were in that time period - we have had thermometers for a time span substantially exceeding 50 years - the proposal was to have one of the data sets sourcing a trendline in the graph be not 'tree ring temperatures' but 'tree ring temperatures prior to 1960, thermometer measurements after 1960'. The resulting trendline is then cut off at 1960.
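To see why a divergent endpoint matters for a smoothed trendline, here is a toy sketch (entirely hypothetical numbers of my own invention, not the actual reconstruction): a proxy series that tracks a steady trend until 1960 and then declines, smoothed with a centred moving average.

  import numpy as np

  years = np.arange(1900, 1995)
  trend = 0.01 * (years - 1900)               # the 'real' temperature trend
  proxy = trend.copy()
  late = years >= 1960
  proxy[late] -= 0.02 * (years[late] - 1960)  # the post-1960 'decline'

  def smooth(series, window=21):
      # Centred moving average, padding the ends with edge values -
      # which is what gives an outlying endpoint its oversized pull
      padded = np.pad(series, window // 2, mode='edge')
      return np.convolve(padded, np.ones(window) / window, mode='valid')

  # Splice 'thermometer' values in after 1960, as the email describes,
  # and the smoothed curve is no longer steered by the divergence.
  spliced = np.where(late, trend, proxy)
  print(smooth(proxy)[-1], smooth(spliced)[-1])

The raw proxy's smoothed endpoint is dragged well below the real trend, while the spliced series ends close to where the thermometers say it should.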

That was the 'trick' used in the paper mentioned, and when used in that paper, it was explained at the time so no one was under a false impression. After all, the author and recipient of this email both knew what the trick was, and attributed it to the one who devised it!

All in all, what was being hidden was misleading non-data, and the fact that this was being hidden was not going to be hidden.

~~ Resources ~~

  1. Use of tree ring samples as a gauge of temperature: Chapter 5 of Biotic Feedbacks in the Global Climatic System: Will the Warming Feed the Warming? Available free from books.google.com
  2. An early article about tree ring temperatures: http://www.agu.org/pubs/crossref/1995/95GB00321.shtml (not free)
  3. A more recent article, specifically addressing the decline, and freely accessible, is at http://www.clim-past-discuss.net/4/741/2008/cpd-4-741-2008.pdf
  4. The article which first used the trick: Nature 392, 779-787. At http://www.nature.com/nature/journal/v392/n6678/abs/392779a0.html (not free)