This is Everything2's science writing group, existing to encourage, facilitate and organise the writing and discussion of science on this site. Members are usually willing to give feedback on any writing on scientific topics, answer questions and correct mistakes.

The E2_Science joint user is here to make it easier to collectively edit and maintain the various indexes of science topics: Scientists and all its sub-indexes, physics, astronomy, biochemistry and protein. More may follow. It also collects various useful links on its home node.

Note that there is also a separate e^2 usergroup for the discussion of specifically mathematical topics.


Venerable members of this group:

Oolong@+, CapnTrippy, enth, Professor Pi, RainDropUp, Razhumikin, Anark, The Alchemist, tom f, charlie_b, ariels, esapersona, Siobhan, Tiefling, rdude, liveforever, Catchpole, Blush Response, Serjeant's Muse, pimephalis, BaronWR, abiessu, melknia, IWhoSawTheFace, 10998521, sloebertje, getha, siren, pjd, dgrnx, flyingroc, althorrat, elem_125, DoctorX, RPGeek, redbaker, unperson, Iguanaonastick, Taliesin's Muse, Zarkonnen, SharQ, Calast, idan, heppigirl, The Lush, ncc05, Lifix, Akchizar, Palpz, Two Sheds, Gorgonzola, SciPhi, SyntaxVorlon, Redalien, Berek, fallensparks, GunpowderGreen, dichotomyboi, sehrgut, cordyceps, maverickmath, eien_meru, museman, cpt_ahab, mcd, Pandeism Fish, corvus, decoy hunches, Stuart$+, raincomplex, Tem42@
This group of 71 members is led by Oolong@+

Roughly speaking, statistical mechanics is the physics of many particles. It is extremely useful for the following reason:

Newton's laws, the basic equations of classical physics, can be solved easily and exactly for the case of two interacting particles. With considerable effort, special cases of the three-body problem can also be solved, but no general closed-form solution exists for three or more bodies. So what happens when we want to characterize systems with 10^23 particles? Even the "brute force" method of computer simulation fails us here. So instead of asking a computer what to do, perhaps we should ask a casino owner or a political pollster.

In other words, we're not going to know the individual positions and velocities of each of the 10^23 particles, but we can make some very precise statements regarding their overall statistical behavior. As the gambler or the pollster will tell you, this behavior becomes more predictable when you have a larger number of particles. This principle is known as the law of large numbers or the law of averages.

So why is it useful to know how a system with 10^23 particles can interact? Well, anyone familiar with Avogadro's number, 6.022×10^23, knows that it is essentially the constant of proportionality between the macroscopic and the microscopic world. It's the number of atomic mass units in one gram, and therefore 10^23 is, very roughly, the number of atoms in about a gram of matter. So almost any macroscopic system will have a ridiculously large number of particles, of this order. Therefore, any macroscopic system, any system of "normal" scale by our human standards, can and should be described using statistical mechanics.

A basic problem: The drunken sailor*

               o
               |
_______________|_______________
|__|__|__|__|__|__|__|__|__|__|

Imagine a drunken sailor standing on the sidewalk at a lamppost in the middle of a city block. He begins walking, and each step he takes can be either to the left or the right. We assume the length of his step is always exactly one foot, and that he has an equal probability of stepping in either direction. Thus, each step he takes is completely independent of the last. Where will he end up after he's taken N steps? The real question we should ask is: what is the probability of being M feet to the right of the lamppost after N steps (where "-M feet to the right" means "M feet to the left")?
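Before doing any combinatorics, the walk is easy to simulate. Here is a minimal sketch in Python (the function names are my own, not part of the writeup) that estimates the distribution of the sailor's final position M after N steps:

```python
import random

def final_position(n_steps, rng):
    """One walk: the sum of n_steps independent +1/-1 steps."""
    return sum(rng.choice((-1, 1)) for _ in range(n_steps))

def position_counts(n_steps, n_trials, seed=0):
    """Simulate many walks and tally how often each final position M occurs."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(n_trials):
        m = final_position(n_steps, rng)
        counts[m] = counts.get(m, 0) + 1
    return counts
```

Note that M always has the same parity as N: ten steps can leave the sailor 0, ±2, ±4, ... feet from the lamppost, never an odd distance.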

Why is this a problem in statistical mechanics? Well, it deals with probabilities. An equivalent question could be: we have a group of N electrons, spaced far enough apart to not interact. Each has a spin in the z-direction of plus or minus one. What is the probability that the total spin of all the electrons is equal to M? This is identical to the drunken sailor problem, since we could look at the spin of each particle as a "step" in the walk, which either increases or decreases the value of M. Since the spins do not interact with each other, each spin direction is independent of every other one, and we have no way of predicting the value of a given spin. However, we can calculate the probability of having a total spin M by the same method as in the drunken sailor problem, which follows:

Let's go back to the drunken sailor analogy, since that's more colorful and less politically correct. Presumably, after taking N steps, he has moved n1 steps to the right and n2 steps to the left. Note that,

N = n1 + n2

M = n1 - n2.

Now, if we were asking what the probability is of moving first n1 steps to the right and then n2 steps to the left, the probability would simply be:

(1/2)×(1/2)×(1/2)×... = (1/2)^N.

However, we want to know the probability of taking those steps in any order. Therefore, we can simply multiply this number by the number of possible rearrangements of steps. This is a simple matter of combinatorics. The number of possible permutations of steps is N!, but since all of the left-steps are indistinguishable, as are the right-steps, we divide by the number of possible permutations of left-steps and right-steps. In the end, we get that the number of possible paths which involve n1 steps to the right and n2 steps to the left equals:

N! / (n1! n2!)

So that the total probability of being M feet to the right of the lamppost is

P = (1/2)^N N! / (n1! n2!)

  = (1/2)^N N! / ( ((N + M)/2)! ((N - M)/2)! )
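This probability is straightforward to compute exactly with Python's built-in binomial coefficient. A sketch (`walk_probability` is a name of my own choosing):

```python
from math import comb

def walk_probability(n_steps, m):
    """P(M = m) after n_steps: (1/2)^N * N! / (n1! n2!),
    with n1 = (N + M)/2 right-steps and n2 = (N - M)/2 left-steps."""
    if abs(m) > n_steps or (n_steps + m) % 2:
        return 0.0  # M must have the same parity as N, and |M| <= N
    n1 = (n_steps + m) // 2
    return comb(n_steps, n1) * 0.5 ** n_steps
```

For N = 4, for instance, P(M = 0) = (1/2)^4 · 4!/(2! 2!) = 6/16 = 0.375, and the probabilities over all M sum to 1, as they must.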

As you might guess, this looks basically like a bell curve.

|
|                    |
|                   |||
|                   |||
|                  |||||
|                  |||||
|                 |||||||
|                 |||||||
|                |||||||||
|               |||||||||||
|             |||||||||||||||
|_________|||||||||||||||||||||||__________
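The bell-curve resemblance can be made precise: for large N the binomial distribution above approaches a Gaussian with variance N (the de Moivre-Laplace theorem). A quick numerical check, reusing the exact formula (function names are mine):

```python
from math import comb, exp, pi, sqrt

def exact_p(n, m):
    """Exact probability (1/2)^N N! / (n1! n2!)."""
    if abs(m) > n or (n + m) % 2:
        return 0.0
    return comb(n, (n + m) // 2) * 0.5 ** n

def gaussian_p(n, m):
    """Gaussian approximation with variance N; the factor of 2 accounts
    for M living on a lattice of spacing 2 (only every other integer)."""
    return 2.0 / sqrt(2.0 * pi * n) * exp(-m * m / (2.0 * n))
```

For N = 100 the two already agree to about a quarter of a percent at the peak.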

Statistical Description of Temperature and Entropy

Let's say you have some statistical system, like a box with a zillion particles inside of it. Okay, let's say there are 10^23 particles. Anyway, call this number N. We don't know the velocities of all the particles, but we do know the total energy of all the particles inside the box. How many possible configurations of particles (positions and velocities) are there which could have this energy? Obviously this is an incredibly large number. We call this number Ω(E). It is the number of possible states corresponding to a given energy. For reasons that will become clear later on, we will wish to work with a much more manageable quantity, the logarithm** of the number of states. This retains most of the properties of Ω(E), but instead of being of the order of 10^23, it is of the order of 23. So we define S = k log(Ω(E)), where k is a numerical constant with units of energy, and we call S the entropy of the system. Thus, the entropy is directly related to the number of possible states of a system with given parameters. The larger the number of possible states, the larger the entropy.
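For a concrete toy system, take N non-interacting spins from the earlier analogy, with zero net spin, so that Ω = N!/((N/2)! (N/2)!). A sketch with k set to 1 (the function name is my own):

```python
from math import comb, log

def entropy_zero_spin(n, k=1.0):
    """S = k log Omega for N spins with zero net spin,
    where Omega = C(N, N/2) counts the microstates."""
    return k * log(comb(n, n // 2))
```

The manageability claim shows up as extensivity: doubling N very nearly doubles S (S ≈ N log 2 for large N), while Ω itself gets squared.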

Now let's say we have another, different, box, which has a different amount of energy, E'. This box has a number of possible states equal to Ω'(E'). Note that this is a different function of the energy, because the box may be a different size, but the physical interpretation is still the same. Now, if we consider the two boxes as a total system with total energy ET = E + E' (although the boxes do not interact), the total number of states is just the product of the two:

ΩT(ET) = Ω(E)Ω'(ET - E)

Now, let's assume these two boxes can exchange energy (but not particles). Then, we can ask, if the total energy remains constant at ET, what final energies do the two boxes end up at?

Well, to be truthful, this is a probability question, and what we are really asking is: what is the most likely energy state for these two boxes to be in? Note that the probability of a given state must be proportional to the number of possible states available at a given energy:

P(E) = C Ω(E)Ω'(ET - E)

Now, these Ω functions are extremely large, and increase at a fantastic rate with the energy. Therefore, this probability is extremely sharply peaked at its maximum. So, asking about the energy of maximum probability is really asking what the final energy is going to be, to an incredibly small degree of error (of the order of 10^-23).

Now, we merely need to find the maximum of P(E). To do so, we employ a method which simplifies the calculation immensely: we find the maximum of the logarithm of P. Since the logarithm is a monotonically increasing function, the maximum of log(P) occurs at the same point as the maximum of P. To find the maximum, we use elementary calculus and look for the point of zero slope:

d(logP)/dE = 0

But log P = log C + log Ω(E) + log Ω'(ET - E)

So d(logP)/dE = d(log Ω(E))/dE - d(log Ω'(E'))/dE' = 0

And so, the energies will settle down to a point where

d(log Ω(E))/dE = d(log Ω'(E'))/dE'

Now, define β(E) = d(log Ω(E))/dE and the result becomes:

β(E) = β'(E')

This equation gives the point at which the probability is the maximum, and thus, all physical systems can be described by some "beta" function for which, when two systems are put in contact, they approach an equal value. Sound familiar yet? Now we define the dimensionless quantity:

T = 1/(kβ), where k is the same numerical constant as above.

Thus, T has the same sorts of properties as β. It obeys the same equation as β above, that is, two systems with different values of T will approach the same value of T when put in contact.

"T" is what we mean by the temperature of a system. It has an entirely statistical description. None of this discussion required that I was talking about a bunch of particles in a box; I could have been describing a series of particles with spins in a magnetic field, or any other statistical system with some energy. The temperature has a statistical meaning from which we can derive its thermodynamic meaning.

Note that, given the defining equation for β and the defining equation for T,

1/T = kβ = k d(log Ω(E))/dE = dS/dE.

Putting all of this together, the condition of maximum probability can be written as a condition of maximum total entropy:

S + S' is maximized

Which leads to the condition that the temperatures are equal:

T = T'

From all of these definitions, the basic laws of thermodynamics can be derived using only statistical calculations.

The derivation of the three laws of thermodynamics

1. The first law is really not derived, it's more of a statement of the conservation of energy. The total energy of an isolated system must remain constant, and if a system is put in contact with another system, the change in its energy is equal to the work done on the system plus the heat it absorbs; the energy can't be created or destroyed. The heat is usually denoted by Q, and refers to the energy being used to increase the entropy of a system.

ΔE = W + Q

One might consider this first law as a definition of heat.

2. The second law is derived from the two equations we wrote above:

S + S' = maximum and 1/T = dS/dE.

The entropy of an isolated system (like our system of two boxes) always approaches a maximum, because this is the point of extremely sharply peaked maximum probability. Since our definition of heat implies it is the energy used to increase entropy, we can rewrite the differential equation:

dS = dQ/T

This is the equation for a system which is not isolated (e.g. looking at one of our two boxes separately). When a system absorbs an amount of heat dQ, its entropy increases by the above equation.

3. The third law states that as T approaches its minimum (usually zero), the entropy approaches some constant value (it doesn't increase without bound or oscillate between two values). This is mostly a statement that there exists some minimum ground-state energy, E0, for which there exists some constant number of states. Thus, as E → E0, S → S0. The only additional piece of information comes with noting the connection between energy and temperature.

Using the equations above, it is a short exercise to show that dT/dE > 0, and therefore energy is a monotonically increasing function of temperature, implying that as the energy approaches its minimum, so must the temperature. Thus, as T approaches its minimum, S approaches a constant.

F. Reif's Fundamentals of Statistical and Thermal Physics was used often as a reference for this writeup. For further study, I highly recommend it as a clear, if dry, treatment of the subject.


*Apologies to any sailors reading this. I know you're not all drunks, so please don't get belligerent.
**I use the notation "log" for logarithm, where most people use "ln". In math and physics, I feel there's really no need to distinguish between natural and decimal logs, since we never use the latter. Hence, I use the one that looks more like the word "log".

Sodium thiosulfate (Na2S2O3), also known as sodium hyposulfite, is a sodium salt which sees common use in photography. Its natural form is a crystalline solid similar to common table salt (sodium chloride), though most of its uses occur in solution. Most of its useful chemical properties are consequences of its status as a reducing agent, able to alter the structure of other chemicals in a number of useful ways.

In photography, a solution of sodium thiosulfate is often called 'hypo', and is used as a fixer in the process of developing film. A fixer's purpose is to wash all the light-sensitive chemicals off of the film while leaving the developed silver compounds which make up the image. It is so named because the film is no longer light-sensitive after treatment in the fixer, and thus the image is 'fixed' and can no longer be changed. The thiosulfate ion S2O3^2- is the component of sodium thiosulfate responsible for this effect, so other similar compounds, like ammonium thiosulfate, also act as fixers. The fixer solution in a darkroom can be identified by its strong smell of sulfur.

Sodium thiosulfate is useful for other things besides photography, however. In solution, it reduces iodine and chlorine molecules to ions, removing many of their original chemical properties. In the case of iodine, this is useful both for chemistry experiments and for removing iodine stains from surfaces. For chlorine, this is useful for de-chlorinating water, as chloride ions are innocuous in solution and do not affect taste or micro-organisms. Water dechlorinated with sodium thiosulfate is safe to drink, as sodium thiosulfate is non-toxic in relatively low concentrations. In fact, it is even used medically as part of a treatment for cyanide poisoning. The reducing properties of thiosulfate are also used in leather tanning, in combination with acids based on chromium.

Though it is largely non-toxic, the strong solutions of sodium thiosulfate used in darkrooms can cause chemical burns on skin or eye contact. Thus, when handling darkroom fixer, one should always wear gloves.

Produced as a by-product of sulfur dye manufacture, sodium thiosulfate has found a number of uses both in and out of the world of photography.


For BQ2K6. Copyright 2006 under the usual Creative Commons BY-NC-ND licence.

Novum Organum

or

The New Organon:
Directions for the Interpretation of Nature


or

Where Did All This Neat Stuff Come From?

It is tempting to view history through the lens of Great Men, or Great Ideas. It is probably more correct to see historical progress in terms of people and broad-based movements. But there are occasional examples of milestones in history so profound and meaningful that they assume a stature so elevated that one must look at them in awe, and remark with a trembling awareness of accomplishment -- "That changed everything." Francis Bacon's New Organon is one such milestone. Here we find nothing less than the birth of modern science.

What is the New Organon? It is a book, a philosophy, a rebellion against nearly two thousand years of established thought. The "old" Organon was Aristotle's body of work on deductive logic, as systematized by medieval scholastic philosophers. The New Organon, published in 1620, was an essential element in Lord Bacon's "Great Renewal" -- a blueprint for a new instrument upon which to base the scientific enterprise.

Death to the Syllogism!

Bacon's work is both destructive and creative. In order to create the new science, the old order had to be destroyed. And the old order was that of Aristotle and deductive logic, symbolized by that ubiquitous artifice, the syllogism. The syllogism is a logical form requiring three parts, a major premise, a minor premise, and a conclusion. For example:

Major Premise: Beer is Good
Minor Premise: Stag is Beer
Conclusion: Stag is Good
One can already begin to see the troubles with this, but the major problem is that the philosopher making the argument has no methodology for investigating the premises themselves. Rather than waxing philosophical and constructing argument after argument, the path of true knowledge lies in investigating the world itself. To this end, Bacon argues we ought to utilize induction, rather than deduction to understand the world around us. And understanding our world is important because it then allows us to control and manipulate our environment. Aphorism III:
Human knowledge and human power come to the same thing, because ignorance of cause frustrates effect. For Nature is conquered only by obedience; and that which in thought is a cause, is like a rule in practice.
Unlike the magicians and theoreticians before him, Bacon places humanity back into the natural world, which we can tame and control, but only if we play by nature's rules.

Be Crafty

While those priests and starry-eyed dreamers have been obsessing over Aristotle and producing nothing, there have been people intimately involved in exploring nature -- craftsmen. Blacksmiths, architects, cobblers, bakers, and the like have been slowly learning how to perfect their trades. It is here that philosophers must look for the origins of the new science. But these doers of deeds have their own problems, chief amongst them that they're illiterate savages. A man can work his trade for forty years, perfecting the baking process and producing the most wonderful bread that the world has ever seen... but it is all for nought, for when he dies, he takes his knowledge with him. The philosophers must leave their monasteries, towers, and well-upholstered dens, learn from the craftsmen, and then write it down. This, at least, is the first part of the new instrument.

Philosopher + Craftsman = Scientist

Bacon explains the middle path between the two in Aphorism XCV (95, for those of you unhip to Roman Numerals), the ant, the spider, and the bee:

Those who have treated of the sciences have been either empiricists or dogmatists. Empiricists, like ants, simply accumulate and use; rationalists, like spiders, spin webs from themselves; the way of the bee is in between: it takes material from the flowers of the garden and the field, but it has the ability to convert and digest them. This is not unlike the true working of philosophy, which does not rely solely or mainly on mental power, and does not store the material provided by natural history and mechanical experiments in its memory untouched, but altered and adapted in the intellect. Therefore much is to be hoped from a closer and more binding alliance (which has never yet been made) between these faculties (i.e. the experimental and the rational).
If people could have easily understood and accepted this truth, the rest of the book would be superfluous. This is science: the synthesis of philosophy and practicality.

The Four Idols

Perhaps the most famous portion of the work is Bacon's deconstruction of the illusions present in the human mind: the four idols of thought.

Idols of the Tribe are problems with our sensory perception. These are illusions shared by all humankind due to the inherent fallibility of our senses. Just because the world looks flat does not mean that it is. Just because you hear your own echo does not mean that someone is answering you. Escaping from this idol is a matter of knowing our shared limitations, and augmenting our senses with artifacts (rulers, telescopes, particle colliders, etc.).

Idols of the Cave are problems for the individual. Quite apart from our shared (mis)perceptions, each of us has his or her own prejudices, false beliefs, vanities, and other baggage that color our view of the world. Know this, and try not to be a prisoner to your own preconceptions. This idol also argues for the creation of a scientific community -- one man's prior beliefs may cause error in his theories, but not in everyone else's.

Idols of the Marketplace are the opposite -- mistakes made through the agreement of men. The problem here is fundamentally one of language. The words we use may say things we don't mean, or that we have no evidence for. Scientists should strive for clarity and exactitude in their language. Remember this the next time you're stuck reading something overly technical -- scientific jargon is a feature, not a bug.

Idols of the Theatre are false dogmas or theories, presented almost as a form of art. Bacon's recurring example is the uselessness of the syllogism, considered in his time to be the highest form of philosophy. But what has the syllogism ever accomplished? Nothing! I don't care how revered or famous a theory is; if it can't stand up to rigorous scrutiny, throw it out!

The Project

In addition to destroying the old order, laying out a new direction for how to do science (in addition to induction, Bacon prescribed making hypotheses, falsifying them, and submitting them to peer review), and cataloguing the common mistakes men make when attempting to understand the world; Bacon inaugurated a project of Natural History. His new breed of scientists were to build a library of "Natural History", what we would now call a Database of Scientific Knowledge. Scientist Philosophers were to take the knowledge of craftsmen, use the new method of true induction to distill this practical knowledge to scientific knowledge, and record it. This record could then be used in other practical applications as a glorious project for the future of humankind, so that we could advance industry, cure diseases, fly to the moon, and share in the joy of indoor plumbing.

I have to admit it's getting better
A little better all the time
(It couldn't get any worse)
1


Is Thomas Kuhn's theory of Scientific Revolutions correct? Is the progress of science a gradual and additive enterprise? Or is it a series of extraordinary breakthroughs completely different from the “normal science” in between them? Answering this question in the negative is problematic. Although Kuhn's theory is intended to apply only to the sciences, one can draw parallels to other areas of human intellectual development. Anyone who would claim that intellectual development is a fundamentally additive process cannot, therefore, agree or disagree with Kuhn completely. One can imagine a conversation between intellectual historians:

A: Thomas Kuhn is wrong. Scientific Knowledge is additive.
B: All right then, what did he add?
The most fundamentally important addition Kuhn made was the emphasis he placed upon social aspects of scientific behavior. Scientific paradigms are not ethereal forms or mechanistic rules of conduct, but shared theories, rules, and values held by communities of individual scientists. Kuhn's view of scientific progress turns our attention to its human nature. And with our attention thus turned, we begin to see the uncertainty of the enterprise – the glorious mistakes, jealous rivalries, neuroses, and pure genius speculation. Science cannot be viewed as a sort of computer or pocket watch, wound up in the Renaissance and moving steadily forward since then. At the same time, Kuhn's observations and theories are limited, and do not seem to accurately describe the progression of all of the sciences. On pages 171-2 of The Structure of Scientific Revolutions, Kuhn explicitly endorses an analogous reading of his work, comparing scientific "progress" with biological "progress", almost going so far as to suggest that the scientific process is Darwinian. Kuhn's mistake is that he did not go far enough. The relationship is not analogous – human intellectual progress, by virtue of being done by humans, is biological in nature. Science is an extended phenotype of the human species. In one sense, in relation to "the universe" or "being" or whatever else you may set up as an objective observer, science has no purpose, no progression beyond being an amusing diversion that we little ape-creatures indulge in. But from a human perspective, science has a progressive character – and the result is our greater ability to explain and manipulate the world around us.


I don't trust Paradigms, they're shifty.

Kuhn's The Structure of Scientific Revolutions provides a description of science sharply at odds with previous notions regarding historical progress, but that was over forty years ago. Rather than being the daring innovation it once was, Kuhn's theory is now the closest thing to conventional wisdom in studies of the History of Science, and has been broadly applied to other disciplines. Our place, in the first decade of the twenty-first century, cannot be to resist his arguments, but to go beyond them.

Kuhn describes previous theories of scientific development as gradual and iterative – scientists proposing hypotheses based upon observational data, building experiments to test these hypotheses, discarding bad ideas, constructing theories out of good ideas, and then using these theories to propose new hypotheses. Lather, rinse, repeat. Kuhn partially agrees with this conception of science, calling it "normal science" working within an established "scientific paradigm". For Kuhn, normal science is the act of solving puzzles within an established framework of scientific thought. For example, the Copernican revolution established that the planets all revolve around the sun in uniform motion. This paradigm of thought was accepted (after much initial resistance) because it predicted the solar year more accurately than previous geocentric models. But the Copernican heliocentric model was slightly off when predicting the orbits of the other planets around the sun, and it thus fell to normal science to bring the known facts about the positions of the planets in line with the Copernican theory. This is the normal function of science – basic puzzle solving, with no revolutionary changes. Indeed, normal science does not react well to revolutionary ideas, and the scientific community usually greets them with skepticism, if not outright hostility. Only after a significant amount of time and energy has been spent on debate does a scientific community accept a new paradigm en masse, with a theory going from controversial to common wisdom in the blink of an eye.

The previous example presents an immediate problem for Kuhn's theory. What sort of science was Kepler performing when solving the problem of planetary motion? Before he began his calculations, he had accepted the Copernican paradigm, but most others in the scientific community had not – it was his explanations of planetary motion that led to the paradigm's acceptance. Kepler's astronomical calculations certainly have the feel of normal science (from Kepler's perspective) and the feel of a paradigm shift (to people after him). Kuhn indicates that Kepler's discovery of the laws of planetary motion is archetypical of a paradigm shift: scientists working under the Copernican theory found anomalies that could not be explained sufficiently with the existing rules; this provoked a crisis of faith in the system; the crisis was resolved by Kepler's new theory; and the resolution was a new Copernican-Keplerian paradigm that more accurately accounted for planetary motion. What is this new paradigm? While one can interpret it as a refutation of Copernicus, Kepler didn't seem to think so, viewing his theories as a refinement of the Copernican theory. Refining theories is what normal science is supposed to do, according to Kuhn, and refined theories are not supposed to create new paradigms. Except for when they do.

Since Kuhn took great care to analyze the sociological and psychological backgrounds of scientists, I do not think he would begrudge me for attempting to analyze his. He readily admitted that the examples drawn upon in the book came from a few limited fields of study that he knew well. One might point out that in the history of science there are two major scientific revolutions that follow Kuhn's description of paradigm shifts almost exactly – those of Newton and Einstein. It should not be ignored that Thomas Kuhn received his undergraduate degree in physics. It is tempting to set these two cases aside and concentrate on murkier scientific progress, but that temptation should be resisted.

Newton appears to us, in historical hindsight, to be a towering figure of genius, carving a new understanding of physics that stood unchallenged for two centuries. At a time when his contemporaries were engaged in explaining the actions of the universe only through matter and motion, he had the vision (perhaps inspired by his intense religiosity) to imagine forces, and the genius to create an entirely new branch of mathematics to explain them. Newton himself (in a letter to Robert Hooke) would disagree, “If I have been able to see farther, it was only because I stood on the shoulders of giants.” Comparing Newton's Principia Mathematica to Descartes' Principles of Material Things (an example of the prevailing mechanistic physics that existed before Newtonian physics) one is struck not by their differences, but the degree to which they agree. Newton's laws of motion are almost identical to Descartes'; there is an argument to be made that Newton merely revised mechanistic physics – a major revision, to be sure, but an additive one. Indeed, one can also see hints of Aristotle's physics in Newton's – he rehabilitated the lost idea that an object can have within it qualities that compel motion with no other object acting upon it (although Aristotle's causes of motion differ greatly from Newton's, they share the idea of elementary forces compelling action). Perhaps a measure of Newton's genius was that he still read Aristotle2.

In Einstein we have another example of what a major paradigm shift looks like: a Swiss patent clerk working on the outskirts of a community ensconced within a paradigm provided revolutionary theories which were later proved to the satisfaction of the scientific community to such a degree as to inaugurate a new way of thinking about the universe itself. Indeed, his contemporaries were fond of suggesting that Einstein's papers from his “Miracle Year” advanced physics twenty years. And while Einstein was still busy arguing over the details of his theories, younger physicists quickly adopted them and created the study of quantum mechanics. Was it normal science to solve the puzzles created by Einstein? Or was it extraordinary science to create a new field of physics that even Einstein resisted at first?

Kuhn's view of science is compelling: when one looks at a broad outline of the history of science, one can see a pattern of "normal science" and "extraordinary science" operating in tandem. When one looks closer, one sees more and more instances of "extraordinary science" and paradigm shifts operating within "normal science". But we can also see long periods of "normal science", with no obvious "extraordinary science", that nonetheless seem to have undergone a "paradigm shift" over a period of decades or centuries. Even in the archetypical scientific revolutions of Newton and Einstein, one gets the feeling that the science they did is characterized not by its nature, but by its speed. They are conspicuous because of the speed with which they reached conclusions that had eluded others – but the difference between "normal science" and "extraordinary science" seems to be one of degree, not of kind. And if Kuhn's view of science is supported most by the progress of physics, it doesn't seem to correlate at all with the biological sciences.


On the Origin of Theories

In trying to apply Kuhn's theory to modern biology, the only really sensible analysis seems to be that before Darwin (and Wallace), biology existed in a pre-paradigmatic state driven by theology, metaphysics, vestigial Aristotelianism, and some stamp-collecting (i.e. species classification). After Darwin and Wallace published their theories, the scientific community fought it out, and Darwinian evolution has since served as the primary paradigm for all of the life sciences. Or, in the words of Theodosius Dobzhansky, “Nothing in biology makes sense except in the light of evolution.”

The problem with this reading of the history of biology is that it doesn't match up with historical reality – at least not in the same way that the history of physics or chemistry does. While Darwin and Wallace's theory maintained a place of prominence in biology through the end of the nineteenth century and the early part of the twentieth, it was not until R. A. Fisher synthesized natural selection with Gregor Mendel's rediscovered genetic research, in the 1930s, that evolution became the dominant driving force in biology, occupying a place easily identifiable as a scientific paradigm. It was even later, with Gould and Eldredge's Punctuated Equilibrium, that paleontologists and geologists came on board (in 1972, over a century after Darwin published The Origin of Species). This seems like an awfully long time for a paradigm to take effect; moreover, each of the above authors was certainly writing within a Darwinian framework. There are problems on the other side of the historical timeline as well, as Kuhn himself notes:

When Darwin first published his theory of evolution by natural selection in 1859, what most bothered many professionals was neither the notion of species change nor the possible descent of man from apes. The evidence pointing to evolution, including the evolution of man, had been accumulating for decades, and the idea of evolution had been suggested and widely disseminated before. (Kuhn, 171)

Darwin's contribution was the idea of natural selection: unguided evolution based upon a species' ability to reproduce itself, rather than some goal-oriented process. It was this idea that caused his theory to be treated with skepticism by so many, but in many respects it was just another stepping stone towards a better understanding of the biological process. If Darwin had presented his idea to a group of cynical existentialists, they probably would have merely yawned. This suggests a wholly different critique of Kuhn: his utter disregard for the social and political climate within which science must operate. But that is a subject for another paper.

The life sciences of the early nineteenth century explored the idea of evolution (in particular, Lamarck proposed a different sort of evolution that relied upon vague ideas of teleological change) and the diversification of species. Darwin and Wallace's work explained how randomly varying traits, inherited by successive generations that managed to reproduce, resulted in diversification and non-teleological evolution. Gregor Mendel explained how traits were inherited, although not why. R. A. Fisher explained how genetics and natural selection were partners and not competing theories. Watson and Crick unraveled the double helix, explaining Mendel's why. Eldredge and Gould explained how species were generally stable populations, but that isolated populations could rapidly diversify, thus solving a number of problems with the fossil record. Other scientists have shown natural selection acting in the wild on a time scale within a human lifespan, explored our own convoluted evolutionary history (in and out of Africa again, etc.), decoded the human genome, or done any number of things which both confirm evolution and widen our awareness of its applications.

Any number of these things could be described as a Kuhnian “paradigm shift” in that they broaden the understanding of the scientific community, solve certain anomalies, and provide more puzzles for future research. But none of them contradicted the fundamentals of Darwin's theory. And most of them prompted the same reaction that T. H. Huxley had when he first read The Origin of Species: “How extremely stupid not to have thought of that!” To be sure, Kuhn's view of scientific behavior is not completely absent from the history of biology. Any group of people has the capacity for groupthink, and most of us see what we expect to see when we examine something. The above post-Darwinian biologists found Darwinian answers to problems when they examined those problems through a Darwinian mental lens. But their discoveries seem to be mostly “normal science”, with only Fisher prompted by a “crisis”. And yet if one is to say that there have been any paradigm shifts in biology other than Darwin/Wallace, almost all of the above would seem to fit. There is a better theory to explain the history of biological development – indeed, of all intellectual development – and it is an extension of the modern neo-Darwinian synthesis.


Extend the Phenotype, Meet the Meme.

In normal discussions regarding genetics and biology, organisms are described as having genotypes and phenotypes. Encoded within my DNA is a string of chemicals that determines eye color; it manifests itself in a phenotype usually described as hazel. This is a fairly easy concept to grasp when an individual considers the lines of code that determine (along with environment) most of our physical characteristics. What is harder to grasp, but no less true, is that our brain is a phenotype, and that it contains a chemical soup, a morass of electrical impulses, that we perceive as ideas. Our abilities to recognize things, to remember them, and to critically evaluate them are all expressions of our DNA. The ideas we have are interactions between our phenotypical reason and our environment.

Richard Dawkins put forth the proposition in The Selfish Gene that the ideas humans have behave in a manner similar to genes with regard to their propagation throughout a population. He coined the term “meme” to describe this phenomenon. Kuhn comes very close to this idea when he describes science as analogous to Darwinian evolution – in that it has no teleological goal. But memes, unlike paradigms, are atomized (like genes). As we have seen, scientific frameworks are not monolithic paradigms as Kuhn describes, but rather convenient collections of ideas that look like overarching structures. Rather than completely overturning Descartes, Newton subtracted some bad ideas (the plenum), added some good ideas (gravity) and created a new collection of memes that was accepted by his contemporaries and became a new framework for the physical sciences. Anything that can be called science includes a crucial meme first espoused in the western world (as far as we can tell) by Thales of Miletus – that the world has natural explanations (this is a crucial separation between the first philosophers and Greek mythology, as typified by Hesiod's Theogony). Viewed in this way, science can be seen to be additive, and yet include everyone that we would like to consider a scientist, from Aristotle to Hawking.

Kuhn’s view of paradigms does seem to accurately reflect some scientific behavior, and it is not difficult to see where “paradigms” might fit into a memetic account of intellectual development – just as genes express themselves in individuals, and memes express themselves in complex ideas – individuals comprise species, and complex sets of memetic propositions can be seen to comprise paradigms. Kuhn describes a process of paradigmatic evolution analogous to punctuated equilibrium in biological speciation. New “species” of thought (what Kuhn calls paradigms) tend to arise outside the stable centers of scientific thought, but then propagate and replace their predecessors when they show they are able to better survive in the scientific environment (solve more puzzles).
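The replacement dynamic described above – a new “species” of thought spreading because it solves more puzzles – can be caricatured with a simple discrete replicator equation. This sketch is purely illustrative and not drawn from Kuhn or Dawkins: the fitness values and starting frequency are invented, with “fitness” standing in for puzzle-solving success.

```python
# Toy replicator dynamics: a new idea ("meme") starts rare but solves
# more puzzles than the incumbent, so its share of adherents grows.
# All numbers here are hypothetical, chosen only for illustration.

w_new, w_old = 3.0, 1.0   # invented puzzle-solving "fitness" of each idea
p = 0.01                  # the new idea begins at the fringe of the community

for generation in range(15):
    # Discrete replicator step: an idea's share in the next generation is
    # its current share weighted by its fitness, renormalized.
    p = (w_new * p) / (w_new * p + w_old * (1 - p))

# After a few "generations" the better puzzle-solver has all but taken over,
# even though no central authority ever decreed a paradigm shift.
print(round(p, 3))
```

The point of the sketch is only that differential success, iterated, produces the wholesale replacement Kuhn describes – no individual conversion experience is required at the level of the population.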

Science can also be seen to be progressive, although Kuhn does not disagree with the idea of a progressive science as much as some claim:

Later scientific theories are better than earlier ones for solving puzzles in the often quite different environments to which they are applied. That is not a relativist's position, and it displays the sense in which I am a convinced believer in scientific progress. (Kuhn, 206)

What is key here is recognizing the environment of which Kuhn speaks. In biological evolution, the environment is a host of factors for each organism – every other organism in the shared environment, and natural events. The environment within which scientific memes propagate is somewhat dependent on the natural world (insofar as the business of science is in interacting with it) but the puzzles of science are determined by humans – either colleagues sharing memes or the public at large availing itself of scientific discoveries. In this sense, scientific progress is not only possible, but it is the most successful enterprise in human history.


1. Paul is dead.

2. Although I don't have space for it in this paper, this idea can help resolve the Aristotle dilemma that seems to be the “first mover” for Kuhn's ideas. Namely, was Aristotle science? The earliest modern scientists battled Aristotle's science until the consensus was that Aristotle was either “Not science” or “Bad science”. Kuhn argues that he was “Different science”. But given his continuing relevance (to rare actors such as Newton and Heisenberg), I would argue that he was “Early science” – which is by definition (in a progressive view of history) both different and largely wrong, but still occasionally relevant.

Ampulex compressa, or the jewel wasp, is a solitary parasitoid which uses cockroaches as the host for its larva. Its method of reproduction is astounding: it administers neurotoxins first to the thoracic ganglia of a roach to induce a temporary paralysis, then to the subesophageal ganglion, completely disabling the roach's escape reflex. The wasp's stinger is thought to be directed towards this specific region of the roach's brain by extremely sensitive detectors along the stinger. The effect is permanent, unlike the effect of the same venom injected into the roach's motor neurons.

Normal cockroaches are amazing evaders of potential predators, running at 70–80 cm/s while changing direction based on shifting air currents over their anal cerci, but the placid state the cockroach enters after the second sting leaves it so open to suggestion that the wasp can actually hold onto its antennae and ride it, directing its movement into a burrow. The wasp is too small to carry a paralyzed cockroach without the roach's help.

Once the hapless roach is situated in the burrow, the wasp proceeds to lay an egg on its belly. The wasp leaves the cockroach, which is completely immobile without the wasp's direction, and fills in the end of the burrow with pebbles. It then leaves to resume its normal life.

The roach remains docile while the egg hatches, and the larva behaves as the larvae of other parasitoid wasps do, entering through the abdomen of the cockroach. It eats the internal organs in an order that keeps the cockroach alive for the duration of the gestation period. The larva then weaves a cocoon within the cockroach's abdomen, eventually emerging from the cockroach as a fully developed wasp.

The venom the wasp uses to produce this amazing change in cockroach behavior has not yet been fully characterized, and it matches no known class of neurotoxin, presumably because it evolved for the specific purpose of deactivating roach brains.

This is the first fully documented case of a parasitoid injecting venom directly into a host's brain, though there are other species suspected of using similar methods, such as Liris nigra's relationship with crickets.

sources:
Direct Injection of Venom by a Predatory Wasp into Cockroach Brain, Gal Haspel, http://www.bgu.ac.il/life/Faculty/Libersat/pdf/JNB.2003b.pdf
Escape Behavior in the American Cockroach, Joseph Sullivan, http://soma.npa.uiuc.edu/courses/physl490b/models/cockroach_escape/roach_escape.html
The venom of Ampulex compressa – effects on behaviour and synaptic transmission of cockroaches, Piek T. et al.