This is Everything2's science writing group, existing to encourage, facilitate and organise the writing and discussion of science on this site. Members are usually willing to give feedback on any writing on scientific topics, answer questions and correct mistakes.

The E2_Science joint user is here to make it easier to collectively edit and maintain the various indexes of science topics: Scientists and all its sub-indexes, physics, astronomy, biochemistry and protein. More may follow. It also collects various useful links on its home node.

Note that there is also a separate e^2 usergroup for the discussion of specifically mathematical topics.


Venerable members of this group:

Oolong@+, CapnTrippy, enth, Professor Pi, RainDropUp, Razhumikin, Anark, The Alchemist, tom f, charlie_b, ariels, esapersona, Siobhan, Tiefling, rdude, liveforever, Catchpole, Blush Response, Serjeant's Muse, pimephalis, BaronWR, abiessu, melknia, IWhoSawTheFace, 10998521, sloebertje, getha, siren, pjd, dgrnx, flyingroc, althorrat, elem_125, DoctorX, RPGeek, redbaker, unperson, Iguanaonastick, Taliesin's Muse, Zarkonnen, SharQ, Calast, idan, heppigirl, The Lush, ncc05, Lifix, Akchizar, Palpz, Two Sheds, Gorgonzola, SciPhi, SyntaxVorlon, Redalien, Berek, fallensparks, GunpowderGreen, dichotomyboi, sehrgut, cordyceps, maverickmath, eien_meru, museman, cpt_ahab, mcd, Pandeism Fish, corvus, decoy hunches, Stuart$+, raincomplex, Tem42@
This group of 71 members is led by Oolong@+

As an experimentalist, I feel bound to let experiment guide me into any train of thought which it may justify; being satisfied that experiment, like analysis, must lead to strict truth if rightly interpreted; and believing also, that it is in its nature far more suggestive of new trains of thought and new conditions of natural power.

The F Man

So. E = mc². Pretty cool huh? Here's how it works.

Einstein's proof is very short, and in the grand scheme of things is rather simple. Let there be two systems of coordinates, K and κ, of values K = (x, y, z, t) and κ = (ξ, η, ζ, τ), such that the x and ξ axes line up, and let them move relative to one another at some speed v, such that a point at rest in κ (which we'll call the 'moving' system) has its coordinate ξ related to its coordinate x in K (the 'resting' system) by ξ = x - vt.

Get that? It's complicated at first. Basically:

         K      κ
        |y     | η
        |      | v ->
        |      | 
        |      |
________|______|___.___
        |    x |       ξ
        |      |
        |      |
        |      |
        |      |

You can see (I hope) the κ system is moving with the velocity v in the direction of increase along the X axis (following Einstein, lowercase letters, except for K, will denote the "resting" frame, Greek letters the "moving" frame, and capital X, Y, Z the axial directions). So a point at rest in κ will be moving in K with the velocity v. Because of this, the value of the x coordinate will always be increasing. However, the value of ξ stays the same, since the object is at rest in κ. Thus, the transformation equation from K to κ along the X axes is ξ = x - vt. This is known as a Galilean transformation equation, since it doesn't take Special Relativity into account.
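
If it helps to see the bookkeeping laid out, here is a minimal sketch in Python (my own illustration, with arbitrary numbers) of the Galilean rule just described: a point at rest in κ keeps the same ξ, while its x coordinate in K grows as vt.

# Galilean transformation: a point at rest in the 'moving' system kappa.
# All the numbers here are illustrative, not taken from the writeup.
v = 3.0     # relative speed of kappa along the X axis (arbitrary units)
xi = 5.0    # the point's fixed coordinate in kappa

for t in range(4):
    x = xi + v * t                                    # where the point sits in K at time t
    print(f"t={t}  x={x:5.1f}  xi={x - v * t:5.1f}")  # xi = x - vt stays constant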

This is the general setup of the thought experiment. It turns out that the equation for ξ is really a little different, but with no loss of functionality we can assume that a body is at the origin of both systems for an instant, and we can set the clocks of both systems at that instant, so that in that instant t = τ = x = ξ = 0.

Now, in that instant, let that body, which we'll consider at rest in K, emit a sudden burst of energetic light in some direction, and a burst of the same amount of energy (as measured in K) in the opposite direction.

          K, κ
         | y, η
         |   /
         |  / <-- light emitted
         | / 
_________./φ       (pronounced, whatever your math professors say, 'fee')
       φ/|      x, ξ
       / |
      /  |
     /   |
         |

Now, in the resting system, the light emitted in each burst will have some energy 0.5L, so the total energy lost by the body, as measured in the resting system K, will be L. What about an observer in κ? Does he measure the same drop in energy? No. The reasons for this will consume the body of this writeup, but the observer moving relative to the body will measure, for the burst of light moving in the direction of the body's motion,

             1 - (v/c)cos(φ)
Ef = 0.5L * -----------------,
               √(1 - v²/c²)

and, in the other direction,

             1 + (v/c)cos(φ)
Er = 0.5L * -----------------.
               √(1 - v²/c²)

Fortunately, their sum simplifies to

                     L
Et = Ef + Er = --------------.
                √(1 - v²/c²)

Now this is what I want to explore. Those of you who just want the rest of the proof can skip to the end. For the rest of you, who I hope are the majority of you, I present:

A Natural History of Light

by Kurin

A philosophic inquiry into where the poop did √(1 - v²/c²) come from?

The term √(1 - v²/c²), or its inverse 1 / √(1 - v²/c²), or γ as the latter is shortened to, is called the Lorentz contraction factor. It turns out to be the amount by which time dilates, length contracts, and mass increases, as a function of the velocity. Actually, the proper equation is not E = mc², but E = γmc², because the relativistic mass of an object increases with the relative velocity.
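
For a feel of how quickly γ departs from 1, here is a short sketch (my own, with arbitrarily chosen speeds):

import math

c = 299_792_458.0   # speed of light, m/s

def gamma(v):
    """Lorentz factor 1/sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

for frac in (0.01, 0.1, 0.5, 0.9, 0.99):
    g = gamma(frac * c)
    print(f"v = {frac:4.2f} c   gamma = {g:7.4f}   "
          f"moving lengths contract to {1 / g:6.4f} of their rest size")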

But where does this come from? Einstein didn't pull this relation out of his butt (though at times I think Lorentz did), so where in nature was it suggested to him? To answer this question, I think, you'll have to go all the way back to

Michael Faraday

Actually, I lied. This section isn't really about the history of light. A lot of people were trying to get into the theory of light, including such notables as Newton (who was convinced light must be made of particles) and Huygens (who thought maybe light was made of waves), and I don't talk about any of them here. Further, Faraday wasn't really investigating light. He was looking at electricity. But I still think it is in his researches that the modern (non-quantum) theory of light is founded.

I'm really not sure how much of a mad scientist Faraday was. These guys who worked with electricity were, to put it bluntly, nuts. They have passages like "and then such a potential was collected in the Leyden jar, that when touched to the tongue or to the hand, a violent spark was produced, and the senses for many tens of minutes were rendered altogether inoperative" and stuff like that. It's insane. Actually most of the old-time "natural philosophers" were like that; drinking acids and shocking assistants. Faraday, I'm sure must have shared in their spirit.

However, there is also an unimpeachable scrupulousness to Faraday that one wouldn't think compatible with the mad scientist archetype. It is his quote which opens this writeup. He is such an honest scientist. He refuses to speculate more than is necessary, or to bind himself to a theory that hasn't been completely suggested by physical observation. He didn't keep his mind closed; on the contrary he gave more to the subject than any other scientist. But he kept his eyes open, and refused to let his reason run ahead of his senses, so that what his reason did ultimately take up and examine were not half-shadows of speculative theory, but unassailable pieces of the physical world.

Anyway, I like the guy.

So Faraday wrote this three-volume set called Experimental Researches in Electricity, which is a collection of 29 series (think chapters) and some papers (think appendices), each dealing with an aspect of electric phenomena.

Faraday begins the Researches by taking up electromagnetic induction, a line of inquiry opened by Örsted's then-recent discovery that an electric current acts on a nearby magnet. Electromagnetic induction is where an electric current in one wire seems, under certain conditions, to produce an electric current in another wire.

Faraday tries all sorts of amazing and bizarre experiments to get to the heart of the phenomenon. He has solenoids of many hundreds of feet of copper, and some of the currents he uses are pretty intense. In one example, he has two intertwining helices of wire, and through one he is sustaining a fairly severe current. In the other, he says, "I could obtain no evidence by the tongue, by spark, or by heating the fine wire or charcoal, of the electricity passing through the wire under induction."

All who have studied basic physics in high school or college know the law that Faraday is heading (not stumbling!) for: that it is change in the electric current that causes a current in an adjacent wire. So he gives the law: if a wire pass near a magnet in such a way as to cut the magnetic curves, then the current will flow in a predictable direction.

This talk of 'magnetic curves' warrants closer attention.

When Faraday was writing, Newton was the God of natural philosophy. The Principia had come out about a hundred and fifty years before. His method of centers of force, and his mathematics, the calculus, had proven themselves brilliantly, shedding light on many other aspects of the world, and their applicability to nearly every worldly problem was well known.

So there were many scientists, such as I think Ampère, looking at magnets, and at electricity, and trying to apply a centers-of-force model. This seemed, at first, pretty straightforward. It had been shown that the magnetic and electrostatic forces of repulsion and attraction obeyed the inverse square law, just like gravity. In fact, if you assume that each pole of a magnet is a center of force, and that a body is attracted to one pole, and repulsed by the other, in an inverse square relationship, you can exactly explain the phenomena. Then, it looks as if both electrostatic action and magnetic action are of the same kind as Newton's gravity.

So what's a magnetic curve?

The footnote where the term "magnetic curve" is first used reads: "By magnetic curves I mean the lines of magnetic forces, however modified by the juxtaposition of poles, which would be depicted by iron filings; or those to which a very small magnetic needle would form a tangent." He does not mean to say that magnetic curves are real, that they have being. He is just saying that if you traced an imaginary line in space, along which, say, a compass needle would point, then you have an imaginary line which, when cut by a copper wire, will give the law according to which current will arise in the wire.

The point is that it's not real. Yet. But as Faraday kept experimenting, in that gentle juggernaut way of his, he became more and more convinced of the physical being of the magnetic curves. Since he can never find incontrovertible proof that they are real, and are not some phantom shadows of other phenomena, he never says in the Researches that they do in fact exist, but only treats them as tokens of actual magnetic force.

After the Researches, though, he wrote a more speculative paper called "On the Physical Character of the Lines of Magnetic Force." In the opening paragraph, where he explains his treatment of lines of force in the Researches, he says,

The definition then given had no reference to the physical nature of the force at the place of action, and will apply with equal accuracy whatever that may be; and this being very thoroughly understood, I am now about to leave the strict line of reasoning for a time, and enter upon a few speculations respecting the physical character of the lines of force, and the manner in which they may be supposed to be continued through space.

So Faraday believed that lines of magnetic and electric force were real (and was able to give an account of the electric), but left the matter undecided. Lines of force, real or not, were good ways to give an accounting of the investigated phenomena.

Faraday's Experimental Researches are fairly exhaustive regarding phenomena, but they are not at all quantitative. Ever since Newton and Descartes, people had really loved to put the world under the lens of geometry and calculus, and here is this wonderful collection of investigative analysis that doesn't do that at all. Probably some men tried to fit this or that Series into mathematical form, but the real mind that went to work on this matter belonged to

James Clerk Maxwell

What did Maxwell do? Why, he spoke Faraday in the language of mathematics. But how did he do this?

Like the Researches above, I'm going to refrain as much as possible from talking about the content of Maxwell's "A Treatise on Electricity and Magnetism" in favor of discussing how I think that content got there in the first place. However, for some reason (probably for a very, very interesting reason) as I go from Faraday through Einstein, it gets more and more difficult for me to separate the content from the method. So I apologize if I get pointlessly technical.

The heart of Maxwell's Treatise is, of course, the mathematics which he brought to the subject. He gives Faraday to us in terms of what today we would call the vector calculus. This calculus fits almost exactly on top of Faraday's lines of force.

When Maxwell defines a line of potential, and shows that the integral along it in such-and-such a manner is equal to such-and-such a quantity, it is plain that he is integrating along a line of force, magnetic or electric. And when he shows that it is possible to move from point A to B without changing potential, we have Maxwell's equipotential surfaces, which play such a prominent role in his theory.

If Faraday saw a world full of lines of magnetic and electric force, Maxwell saw a world full of energy and potentialities. This way of thinking lends itself more or less directly into the language of calculus.

Now, and this is where I think the kicker kicks the kicked, Maxwell took phenomena that were known to Faraday and mathematized them, coming up with equations for the curl of the electric and magnetic fields. Of course, Maxwell was able to give accounts of all of the Maxwell-Hertz equations, but these are the two we're specifically concerned with. Those of you familiar with this story will recognize these as Ampère's Law and Faraday's Law of Induction.

These equations are derived, remember, from the content in Faraday's Researches. In the Treatise, they are derived in and around Articles 531 and 585 (although Maxwell does add a term to complete them).

Maxwell is able to put these two equations together in such a way that out falls the wave equation. This is the equation that says light travels at the speed c, where c is the ratio of the electromagnetic units to the electrostatic units.

What are the electrostatic and electromagnetic units? If you're examining an electric system, such as charged pith balls suspended from wires, and you want to quantify the situation, you will probably apply numbers to the relative charges on the pith balls. Then you can get the forces exerted on the balls, and work from there. These are electrostatic units. If, instead, you are looking at currents in wire, you might want to assign a value to that current, and work from that. These are electromagnetic units.

If you do the former, your fundamental unit is 'charge', and current is 'charges per second.' If you do the latter, your fundamental unit is current and charge is 'current times time.' It turns out that the two quantities of charge are numerically different, and that the units on them are different. The same is true, and in the same way, for the quantities of current and all other effects. The electrostatic unit of charge, say, is called the statcoulomb, while the electromagnetic unit of charge is the abcoulomb. If you divide the abcoulomb by the statcoulomb, or any ab-unit with its corresponding stat-unit, you get a constant, which just happens to be a speed, and is equal to about 3.0 x 10^8 m/s. Physicists usually call this value 'c'.

Maxwell has, I swear to God this is so cool, a thought experiment explaining how this ratio is a speed, and what that means. Imagine two plates in space, separated by some distance, and both positively or negatively charged. Now, two plates of the same charge will repel each other. BUT if the plates are moving, then they can be treated as two currents moving in the same direction. Two currents moving in the same direction attract each other. So at some speed the electromagnetic attraction will exactly balance the electrostatic repulsion. According to Maxwell, the speed at which this happens is 'c'.
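
Maxwell worked in his own system of units, but in modern SI language the same constant appears as 1/√(μ0 ε0). A quick sketch of that arithmetic (my own restatement, using present-day values that Maxwell did not have):

import math

mu_0 = 4 * math.pi * 1e-7        # magnetic constant, N/A^2
epsilon_0 = 8.8541878128e-12     # electric constant, F/m

c = 1.0 / math.sqrt(mu_0 * epsilon_0)
print(f"{c:.3e} m/s")            # about 2.998e8 m/s -- the speed of light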

You may have some objections to this experiment. You might think, "But what if there is an observer on those plates, and that observer considers himself at rest?" It is clear that Maxwell did not believe such an observer could do that. For Maxwell, there was an absolute rest in the ocean of the ether, and experiments such as this one measured speed relative to that.

Another man who believed in absolute rest was

Hendrik Antoon Lorentz

Lorentz came into the party on the heels of the Michelson-Morley experiment, in which it was shown that Maxwell's equations conflicted with reality. Specifically, the speed at which the ether should be streaming past the earth ought to be measurable, but it is not. Lorentz postulates that moving bodies may shrink in the ratio 1:√(1 - v²/c²). This is where the term first pops up. He shows that if his postulate is true, the phenomena are explained. I could say more, but this writeup's getting real long now and he's not central to its story.

The next and final character is, of course, the man himself,

Albert Einstein

Now, I'm not going to run wild with mindless praise for the big man, I think that as far as Special Relativity goes, he wasn't the first to see the entire picture. But 1905 was a big year for him, and he did put it all together.

Einstein took Maxwell's equations and said, "They predict light to travel at one speed, and one speed only. That means they can only be valid when light actually travels that speed. But we've done experiments, and it looks like light always travels that speed. What if the Maxwell-Hertz equations are always valid?"

And those are the ONLY two postulates in the Special Theory of Relativity:

  1. The speed of light is a constant, and
  2. The laws of physics are covariant in any unaccelerated frame of motion.

What this means is that Maxwell's equations are good for any observer who feels no acceleration.

This requires a change of the familiar Galilean transformation equations. Whereas before we had τ = t, now we have τ = γ(t - vx/c²); where we had ξ = x - vt, now it is ξ = γ(x - vt), where γ = 1/√(1 - v²/c²).

Where do these equations come from? They are not too difficult to derive, but I will still only touch the surface. Given that light goes the same speed all the time, if you are at a point A and want to know what is going on at B (at rest relative to A), at a given time, you can do so easily by observing the light that B gives off, and then calculating what was happening at B at the time in question. That is, if the distance from A to B is r, then if you send a signal from A to B and it bounces back, you can tell how long the trip took with the easy formula,

t = 2 * r/c.

So you know that it took half the time to get there, and half the time to get back. This can be done with any two points in a given system of reference, K. So we say that the clocks in K are synchronized, because if you give me a value for a time at a point in K, it is very easy for me to tell you what the face of any other clock in K reads. The speed of light may limit how soon after the event we can do this, but eventually it is possible. For example, I can't tell you what is going on at the sun right now at t = 0, but in eight minutes at t = 8 I will be able to say what was happening on the sun and right here at t = 0.

What if a clock is not in our system of coordinates? Well, it turns out that the formula has to change, and

t = r/(c + v) + r/(c - v).

Since c is constant for us, we see the light take more time to go the same distance when the distance itself is moving in the direction of light's propagation, and similarly it takes less time to go the other way. When light goes orthogonal to the direction of motion, the time taken is

t = 2 * r/√(c² - v²),

because the light is moving along a diagonal.
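
A quick numerical check (my own, with arbitrary numbers) that this orthogonal round trip 2r/√(c² - v²) is just the resting round trip 2r/c stretched by the factor 1/√(1 - v²/c²):

import math

c = 299_792_458.0   # m/s
r = 1.0             # one metre between the two points
v = 0.8 * c         # speed of the moving system (arbitrary)

t_rest = 2 * r / c
t_orth = 2 * r / math.sqrt(c ** 2 - v ** 2)
gamma = 1.0 / math.sqrt(1 - (v / c) ** 2)

print(t_orth)           # ~1.11e-8 s
print(gamma * t_rest)   # the same number: the round trip is dilated by gamma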

What Einstein does is make τ a function of x, y, z, and t. Having found this, ξ is equal to cτ (for a ray of light sent out along the ξ axis from the common origin), and similarly for η and ζ. The final transformation equations, as we have seen above, are

τ = γ(t - vx/c²)
ξ = γ(x - vt)
η = y
ζ = z,
where γ = 1/√(1 - v²/c²).
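
Here is a minimal sketch of those transformation equations as code (my own illustration; the event and the speed are arbitrary), including a check that a light flash leaving the common origin moves at c in both systems:

import math

c = 299_792_458.0   # m/s

def lorentz(x, y, z, t, v):
    """Transform an event from K into the system kappa moving at speed v along X."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    tau = gamma * (t - v * x / c ** 2)
    xi = gamma * (x - v * t)
    return xi, y, z, tau

# A light flash leaving the common origin reaches x = ct in K;
# in kappa it should likewise satisfy xi = c * tau.
t = 1e-6
x = c * t
xi, eta, zeta, tau = lorentz(x, 0.0, 0.0, t, 0.6 * c)
print(xi, c * tau)   # equal: both systems see the flash travel at c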

Finally, given these he is able to show the equations for Ef, Er, and Et from (way, way) above. So now we can complete

The proof

So we have the expression for the energy of the emitted light as measured in κ (remember, K(x, y, z, t) is the "resting" system, with the body at rest in it, and κ(ξ, η, ζ, τ) the "moving" system, so that the body is moving relative to an observer at rest in κ). Now what? Well! Let the energy the body started with, as measured in K, be denoted by E0 and the energy it ended up with be E1, and similarly for κ, H0 and H1. So

E0 = E1 + L, and H0 = H1 + γL.

Then,

(H0 - E0) - (H1 - E1) = L(γ - 1).

But H and E are measures of the energy of the same body; the only difference between them is the relative motion of K and κ. So the only way the quantity (H - E) can differ from the kinetic energy of the body, as measured in κ, is by some constant C, which could depend on, say, the body's spin or heat, but not on anything that the emission of light would affect. So, since C is constant, and writing V for the kinetic energy,

H0 - E0 = V0 + C
H1 - E1 = V1 + C

But how could the kinetic energy of the body change? K and κ have not changed their relative motion, and the body is still at rest in K. Subtracting the two equations above gives V0 - V1 = L(γ - 1), and expanding γ for speeds small compared with c gives γ - 1 ≈ v²/2c², so V0 - V1 ≈ 1/2 (L/c²)v². Since V = 1/2 mv², and v has not changed, only a change in mass can account for the difference V0 - V1. Thus, if a body gives off the energy E (our L above), measured in its own system, its mass decreases by E/c². That is, E = mc².
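
To see that last step in numbers, here is a small check (my own, with arbitrary values) that for everyday speeds L(γ - 1) is indistinguishable from 1/2 (L/c²)v², i.e. the kinetic energy of a mass L/c² moving at v:

import math

c = 299_792_458.0   # m/s
L = 1.0e6           # energy radiated away, in joules (arbitrary)
v = 3.0e4           # 30 km/s, roughly the Earth's orbital speed

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
exact = L * (gamma - 1.0)             # the change in kinetic energy from the proof
approx = 0.5 * (L / c ** 2) * v ** 2  # (1/2) m v^2 with m = L / c^2

print(exact, approx)                  # essentially identical at this speed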

So there you go.

Much thanks to Swap for a prepost reading! He caught some painful typos, and helped me clarify myself. Also wrinkly for postpost error catching. Also! jrn helped find many links, typos and such.

The Syntonic Comma: The Rift within the Lute

Definition and discussion

The syntonic comma is the ratio 81/80, equal to 1.0125 or (approximately) 21.51 cents. What does this have to do with lutes? Well, the fact that the comma exists means that it is impossible to tune a musical instrument that produces fixed pitches, in such a way that you have perfectly tuned intervals -- even within a single key or mode. Such instruments include the organ, the piano, the harpsichord and its ilk, and fretted instruments like guitars and viols (although a certain bending of pitch is possible here). Even if the only notes you use are the white keys on the piano, some intervals are inevitably out of tune by a small amount. (The Greek for a small subdivision or piece is komma.)
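
The numbers quoted above are easy to reproduce; a quick check of my own:

import math

comma = 81 / 80
cents = 1200 * math.log2(comma)
print(comma, round(cents, 2))   # 1.0125 and about 21.51 cents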

Conversely the fact that the comma is so close to unity (or its logarithm is so small) means that it is possible to play music in a given key quite well on fixed-pitch instruments -- at least well enough that most people don't notice any problem. This is due to the use of temperament, a system of sharing out the comma among different intervals so that none of them sounds too unpleasant. Still, such instruments can never produce pure chords, thus the distinction between consonance and dissonance is slightly obscured. By contrast, singers and instruments producing a continuously adjustable pitch (for example the violin or trombone) are able to achieve pure intonation within chords, at the cost of slight changes of pitch on what should be the same note.

The syntonic comma is not to be confused with the Pythagorean comma, which crops up when you try to tune all 12 chromatic notes of the scale so that each of them can be used as the root of a chord, producing a circle of fifths -- although by a mathematical coincidence the two commas are nearly the same size.

The syntonic comma arises from the fact that four perfectly tuned perfect fifths placed one on top of another results in an interval that is nearly, but not quite, two octaves and a perfectly tuned major third. For example the fifths c-g-d'-a'-e'' and the major seventeenth c-e''. (See pitch notation.) Mathematically, an octave is two notes whose fundamental frequencies are in the ratio 1:2; a perfect fifth perfectly in tune has the ratio 2:3; and a major third perfectly in tune is 4:5.

So the pure fifths have frequencies in the ratio 16:24:36:54:81 and the two octaves and a third have frequencies 16:32:64:80 (if we choose to fix the lowest note). Hence the discrepancy is the ratio 80:81. To play all of these intervals in tune your instrument should have keys or frets with both intervals (16:80 and 16:81) -- but actually, they turn out to be the same note. The syntonic comma also arises if we require a major sixth (ratio 3:5) to be consistent with three perfect fifths on top of one another. One example uses the notes g-d'-a'-e'' with ratios 24:36:54:81 and g-g'-e'' with ratios 24:48:80. (However, the major sixth, or its inversion the minor third, is not an interval where mistuning is very noticeable.)

The easiest way to hear the syntonic comma is in tuning a guitar. The strings are arranged in perfect fourths E-A-D-G and B-E (which are just perfect fifths minus an octave) and one major third G-B. You can hear if the intervals are in tune by listening to beats -- the subject of another node. If we work out the frequency ratios from the bottom string to the top with perfectly tuned intervals we find 81:108:144:192:240:320 - but now the top E is not two octaves above the bottom E! Again the discrepancy is 80:81.
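
Both discrepancies can be reproduced in a few lines of exact arithmetic; here is a sketch of my own using Python's fractions:

from fractions import Fraction

fifth, fourth, third, octave = (Fraction(3, 2), Fraction(4, 3),
                                Fraction(5, 4), Fraction(2, 1))

# Four pure fifths against two octaves plus a pure major third:
print(fifth ** 4 / (octave ** 2 * third))          # 81/80

# Guitar strings E-A-D-G-B-E: four pure fourths and one pure major third,
# against the two octaves the outer E strings ought to span:
print(fourth ** 3 * third * fourth / octave ** 2)  # 80/81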

How wide is the rift?

Can you deal summarily with the syntonic comma and adjust just one single interval to be out of tune, leaving the rest pure? There are obviously two ways to achieve this (assuming that octaves are in tune): having one flat perfect fifth, or a sharp major third. For example the fifth d-a could be tuned as 2/3 x 81/80 = 162:240, so that the notes c-g-d'-a'-e'' are now in the ratio 36:54:81:120:180. This results in just intonation. Or we could have a Pythagorean scale with the major third tuned to be 64:81 rather than the pure 64:80. However, both sound too awful to be useful, except in a very restricted range of music. The reason is simple: these out-of-tune intervals are too far out of tune and have rapid beats.

For example we could start with a=440 Hz and tune d=297 Hz for the perfect fifth flattened by a syntonic comma. The beat frequency would be 11 Hz, which means that even if you played the notes for a small fraction of a second you could hear the mistuning. We would be restricted to music that does not use chords with root D. Alternatively, the Pythagorean major third sounds too discordant to be the final chord of a section, so one would have to treat the major third as a discord. This may be one reason why medieval music used so many open fifths and fourths, particularly at cadences: the Pythagorean tuning simply did not have a consonant major triad.
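
The 11 Hz figure comes from the clash between the third harmonic of the flattened d and the second harmonic of a; a quick check of my own using the frequencies quoted above:

a = 440.0              # Hz
d_pure = a * 2 / 3     # about 293.33 Hz, a pure fifth below a
d_flat = a * 27 / 40   # 297 Hz, that fifth flattened by a syntonic comma

# The fifth d-a beats where its notes' harmonics nearly coincide:
beat = abs(3 * d_flat - 2 * a)
print(round(d_pure, 2), d_flat, beat)   # 293.33, 297.0 and an 11.0 Hz beat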

Read on: meantone tuning

Botany, the scientific study of plants, is a branch of biology. Botany is a broad subject that covers several different scientific disciplines including plant evolution, disease, development, growth, metabolism and reproduction. A botanist may choose to spend his or her time studying plants on the microscopic and genetic levels, or may choose to study the function of a specific plant and its place in the environment. Botanists today are responsible not only for developing new medicines, but for keeping the world fed. More than two thirds of the world relies on plants as the staple of their diet. Without dedicated botanists, the global food crop would be in jeopardy. By experimenting with plants, botanists have brought us alcohol, wine, aspirin, coffee, tobacco, cotton, paper and rubber. Botany is one of the broadest fields of scientific study and can be enjoyed both as a profession and as a hobby.

Botany in History:
  • Historical botanical works exist from as early as 500 BC. Descriptions of plants have been found carved in tablets.
  • In the fourth century BC, Theophrastus wrote volumes about the classification, morphology and reproduction of plants.
  • Ancient Egyptians studied plants to find the best times to plant them; records indicate that the Egyptians cultivated olives, grapes and fig trees, but knew that the wood of a fig tree was bad for construction.
  • In 1665 a man named Robert Hooke looked through a microscope at a piece of cork, and saw what he coined 'cells'. The idea of cells was new, and changed the way that we viewed ourselves as organic constructs.
  • In 1838 the German Matthias Schleiden studied the cellular nature of plants, and proposed that all plant tissue is made of cells. This implied a basic "sameness" in the structure of all living things.
  • In 1863, Gregor Mendel cultivated more than 30,000 pea plants and is regarded as the father of genetic inheritance. His work has allowed scientists to breed mice to target certain traits that are useful in scientific experiments.


Botany today:
  • Recently, Barbara McClintock discovered transposons, or 'jumping genes', and was awarded the Nobel Prize for her work. It was a breakthrough, and redefined the way we thought about genetics.
  • A newer field, paleobotany, has recently become popular; paleobotanists study fossils of plants and help track the evolution of major plant groups.
  • Plants have been in the news recently with companies genetically engineering plants to be resistant to disease. There is evidence that genetically engineered genes are showing up in native corn that was never modified.
  • New plants are discovered every day. Pharmaceutical companies actively hunt down new plants to find new medicines.
  • The Australian National University and the University of Melbourne are regarded as two of the best schools for the study of Botany.

Famous Botanists:
  • Lucy Braun: Preserved 10,000 acres of land in Ohio
  • Sacagawea: Guided Lewis and Clark
  • Gregor Mendel: Studied Pea Plants and discovered inheritance
  • Robert Hooke: Discovered Cells
  • Barbara McClintock: Discovered Transposons

To learn more: You can visit any of the sites below, and there are many quality books on the subject. You can pick up botany guide books at your local book store, or a guide to identifying plants at specialty stores in your area. Most community colleges also teach classes on botany.

Tools of a Botanist: Magnifying Glass | Notebook n' Pencil | Compass | Pruner | Trowel | Collecting Bag | Field Pack | Camera.

Sources:
- Botany.com. 2005. Tarragon Lane Ltd. 10 Mar. 2005 {http://www.botany.com}.
- AIBS: Home. 2005. American Institute of Biological Sciences. 10 Mar. 2005 {http://www.aibs.org/core/index.html}.
- Crosby, Marshal R. "Botany." Encarta. 2005. Microsoft. 11 Mar. 2005 {http://encarta.msn.com/encyclopedia_761573574/Botany.html}.

Oceanography is the diverse branch of science dealing with the study of the oceans using chemistry, geology, physics, biology and technology to analyze this unique environment.

History of Oceanography

Formal study of the world's oceans began during the late 15th and early 16th centuries and continued in the period that followed. Christopher Columbus' discovery of the Americas in October of 1492 and Ferdinand Magellan's circumnavigation of the globe from 1519 to 1522 were two important voyages that encouraged further systematic study of the oceans. In 1588, when Sir Francis Drake's British fleet defeated the Spanish Armada, Spain's long dominance of the seas ended. James Cook, a British navigator, went on three trips to map the seas and to explore them further. His primary goal was to find a huge continent to the south that had been sighted by the French during various expeditions.

The HMS Endeavour took to the seas in 1768, and during this voyage Captain Cook succeeded in mapping most of New Zealand's coast and the eastern coast of Australia. On his later voyages he used a copy of John Harrison's chronometer, which enabled him to keep accurate time at sea and therefore determine longitude to a greater degree of accuracy. On his second voyage, Cook took HMS Adventure and HMS Resolution around the Cape of Good Hope, and on January 17, 1773 he sailed beyond the Antarctic Circle looking for the southern continent he had heard about. Instead, he found South Georgia and the South Sandwich Islands and was unable to travel further southward due to thick ice blocking his way.

Cook went on his last expedition from 1776 to 1779 and discovered the Hawaiian Islands. He travelled northwards to the Bering Sea, but heavy pack ice barred his way further northward. On his return trip he went to Hawaii again and was killed in an argument over a stolen boat. His various explorations yielded important maps, detailed charts and notes describing coastal conditions, as well as observations from the naturalists and biologists he had on board with him.

After his discoveries were published, exploration of the southern oceans and trade through those areas increased. Because of this, the American Matthew Fontaine Maury realized the importance of gathering data on ocean winds and currents. He was the director of the US Navy's Depot of Charts and Instruments, and in 1855 he published The Physical Geography of the Sea, an important milestone of oceanographic study. Its value was recognized immediately, and the work was prized by sea captains.

The voyage of the HMS Beagle was another important voyage in the history of oceanography. Charles Darwin signed on to this voyage as a naturalist under Captain Robert Fitzroy. The expedition lasted from 1831 to 1836, and after it Darwin published The Structure and Distribution of Coral Reefs in 1842 and The Origin of Species in 1859. His work relied heavily on the observations he made while sailing on the HMS Beagle.

Modern oceanographic study began in the early 1900s with the work of Johannes Schmidt and, from 1910, Johan Hjort of Norway. Hjort studied primarily the Mediterranean and the North Atlantic, and in 1912 he published The Depths of the Ocean to present his findings. Schmidt took the Dana I and Dana II on voyages through the 1920s, exploring three oceans and gathering enough data and observations to write the Dana Reports, which are still in use in today's study.

From 1925 to 1927, the German ship Meteor used advanced equipment to collect data on temperature and salinity at 310 different points across the ocean. This analysis suggested that the deep ocean is perhaps the most stable environment on the planet. Using acoustics, water circulation was also analyzed by Georg Wüst, who formulated a theory of circulation which has proved to be accurate and is still recognized as such.

After World War II, many nations began to develop programs of oceanographic study, such as those at the Scripps Institution of Oceanography and the Woods Hole Oceanographic Institution in the United States. Since the 1960s, a large portion of oceanographic research has been devoted to mineral research and the search for oil and natural gas.

Study of Oceanography

The oceans cover 71% of the earth's surface area, equalling approximately 139 million square miles. The oceans' average depth is 12,200 feet, and the deepest point, 36,200 feet below sea level, lies in the Mariana Trench in the Pacific Ocean. The oceans contain nearly 300 million cubic miles of water, which rose to the earth's surface as the planet cooled billions of years ago.

Water is formed by two hydrogen atoms and one oxygen atom. The two hydrogen atoms bond to the oxygen atom by each sharing their single electron with the oxygen atom in an asymmetrical fashion. The two electrons supplied by the hydrogen atoms are pulled closer to the oxygen's nucleus, creating a polarity within the molecule: the hydrogen end of the molecule carries a slight positive charge and the oxygen end a slight negative charge. This molecular polarity creates weak hydrogen bonds between water molecules, with the positively charged hydrogen ends of each water molecule attracting the negatively charged oxygen ends of other water molecules.

These bonds are relatively weak but are responsible for many of the physical properties of water. They give water its ability to hold heat energy well; otherwise it would be in gaseous form at room temperature and would not exist as a liquid on the earth's surface, making life impossible. Another effect is that water acts as a stabilizing force on climate, absorbing and releasing heat as it evaporates and freezes.

It is water's properties as a solvent that allow seawater to exist; water can dissolve many substances into their component ions, such as the salts present in seawater.


Ocean currents are another important aspect of oceanographic study. The sun and the rotation of the earth are the primary causes of ocean currents, but there are many other factors that affect them as well. Winds create fluctuations in temperature along the ocean's surface; these, coupled with changes in salinity from the addition or removal of freshwater, create thermohaline circulation, which is driven by shifts in water density.

Winds are themselves created by unequal solar heating: there are ascending and descending columns of heated air across the globe, which spread out across the earth's surface once they have cooled enough to sink back to it. The rotation of the planet causes this air to be deflected as it moves away from the equator, a phenomenon known as the Coriolis effect. This is what creates the trade winds and westerlies in the northern and southern hemispheres. These prevailing winds have a great effect on the ocean: they make the surface layer of the ocean move around the continents. Between the 60° lines of latitude there are large pairs of oval-shaped patterns of circulation known as "gyres", which flow in a clockwise manner in the northern hemisphere. In the southern hemisphere, the prevailing winds create a "ring" of water that moves around the earth between latitude lines 45° and 60° south. This ring doesn't exist in the northern hemisphere because land masses prevent it from being created.

This is just the most basic overview of oceanography as a science; the occurrence of tides, wave patterns, subsurface currents, and ocean chemistry are all vital to its study as well. Much of what I wrote about in my writeup in the ocean node applies here and vice versa. Oceanography is fascinating in how it draws from so many branches of science as it examines one of the most beautiful environments on earth.


Sources

http://www.odysseyexpeditions.org/oceanography.htm
http://www.personal.kent.edu/~dwitter/oce/s2005/study-questions-mt1.htm
H.V. Thurman. Introductory Oceanography. Macmillan, New York, 1994.
D. Wilson. The Circumnavigators. Constable, London, 1989.

Summary

Recent advances in neuroimaging through functional magnetic resonance imaging (fMRI) have offered the hope of uncovering the physical mechanisms behind seemingly metaphysical cognitive processes that shape emotion, sexuality, social interaction, and consciousness—topics that have fascinated and frustrated scientists and philosophers for millennia. Among these perennial puzzles of human behavior is deception, the understanding of which could profoundly affect law enforcement, anti-terrorism, medicine, and even the day-to-day lives of average individuals. Yet the usage of fMRI to reveal the complex patterns of neural activation underlying even the simplest lie is still in its nascence. Technical and ethical hurdles abound between research into deception in the lab and practical results for the real world. In light of these difficulties, the application of fMRI to define and detect deception currently remains infeasible.

Introduction

The deliberate negation of truth by a human being, the communication of beliefs that the individual considers untrue, has remained one of the great philosophical, religious, and scientific fascinations of our species. From the writings of the Egyptian sage Ptahhotep of more than four millennia past (Chinweizu 2001, cited in Spence 2004a) to the Ten Commandments of the Hebrew Scriptures to the theories of Sigmund Freud to the most recent updates of the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders, evidence attests to the omnipresence of lies amid human communities. Until recently, discussion about the signs of deception and the workings behind it could only rely upon external evidence.

For example, the polygraph machine, more commonly known as the 'lie detector,' can only monitor changes in pulse and breathing rate, perspiration, and blood pressure that correlate with anxiety (Knight 2004), a supposed consequence of the emotion of 'guilt' considered to accompany deception. This method of detecting and evaluating deception lacks fundamental credibility, as it relies upon physiological changes that only correlate with the telling of lies, allowing no conclusions to be made about deception's source itself: the brain.

Without reliable access to the cells of a nervous system whose staggeringly complex patterns of firing underlie every human behavior, the sharp delineation of 'lie' from 'truth' remains beyond grasp. Only in the past decade, with the introduction of fMRI, has bridging the chasm between our selves and our minds in order to learn the true nature of deception entered the realm of possibility. Theorizing a bridge and building a bridge, however, remain separate tasks. Current research cannot yet support the clinical definition or detection of deception with fMRI.

Principles of functional magnetic resonance imaging

The primary advantage of fMRI that has permitted its success is its ability to non-invasively observe patterns of neuron activation in the entire brain with unprecedented resolution and accuracy. Unlike previous dynamic imaging techniques, which required the injection of a radioactive contrast agent and were thus limited in the number of scans they could perform without harmful overexposure to radionuclides for the subject (DiGirolamo 2001), fMRI collects data from entirely natural bodily processes. When a neural region becomes relatively more active due to sensory or cognitive stimulation, the neurovascular system increases oxygenated blood flow to the region in question to aid the neurons' heightened metabolism.

Oxygenated blood possesses different magnetic properties from deoxygenated blood. When the spins of hydrogen nuclei aligned to the scanner's magnetic field are disturbed by a burst of radio waves, they release measurable electromagnetic signals as they relax back toward their original alignments. Deoxygenated hemoglobin is paramagnetic while oxygenated hemoglobin is not, so the two alter the local magnetic field, and hence the measured signal, in different ways. Thus, fMRI dynamically identifies neuronal activity by detecting regional changes in the balance of oxygenated and deoxygenated blood—blood oxygen level-dependent (BOLD) measurement (DiGirolamo 2001).

Knowing only the locations of activity in the brain during stimulation, though interesting, is insufficient for the study of complex behaviors such as deception. Skeptical scientists have challenged the utility of locational data derived from fMRI, with the extremes of criticism accusing fMRI of being a mere sophisticated rehashing of the pseudo-scientific observations of phrenology popular in the late 19th and early 20th centuries, which purported to measure the bumps of the skull associated with various 'regions' of the mind that supposedly indicated certain personal and social qualities (Donaldson 2004).

A properly designed experiment using fMRI, however, does not just identify 'where' activity is occurring. It can also answer 'why' the activity is occurring. Proper temporal measurement of activity can identify functional differences in the role an activated region plays in a cognitive process. Separating trial-related processing, represented by a 'spike' in activity, from task-related processing, represented by a sustained elevation of activity, allows experimenters to 'parse' the patterns of regional activation they identify, giving a multi-dimensional structure to their observations. By both 'mapping' and 'parsing' the brain, reliable conclusions about the role of a particular region of the brain in a cognitive process may be drawn, and even expanded on to include the interactions and comparisons of multiple regions that are the standard for complex behaviors such as deception (Donaldson 2004).

fMRI investigation into 'truthful' and 'deceptive' neural activation patterns

Mixed measurements combining locational, temporal, and intensity data allowed the first glimpses into the mechanics of deception. A study conducted by Spence and others (2001) examined the inhibition of 'truthful' responses theoretically necessary for lying. Twenty-three subjects were asked questions by a computer in the presence of an investigator to which they responded with an affirmative or negative answer by pressing the respective computer key. In addition to the questions, the computer presented a color code unknown to the investigator that informed the subjects whether they should lie or tell the truth. Each question was presented twice, in order to compare neural activity for both truthful response and deception.

The results of the experiment demonstrated greater activity during the telling of 'lies' in the bilateral ventrolateral prefrontal and anterior cingulate cortices of the brain, areas implicated in the inhibition of immediate responses, with no comparable greater activation during the telling of 'truths.' In addition, Spence and others identified a ca. 200 ms delay for a deceptive response in comparison to a truthful response for subjects questioned both within an fMRI scanner and without. This implies that truth telling is a 'baseline' activity, with the telling of lies associated with greater activation, suggesting the involvement of executive function, a category of cognitive function that includes behavior initiation and inhibition, problem solving, and the manipulation of useful data stored in conscious working memory.

A subsequent study by Langleben and others (2002) confirmed truth-telling as a baseline activity and noted specific patterns of neural activation identifiable with deception. The team hypothesized that fMRI could detect neural correlates specific to deception by increased activity in the anterior cingulate cortex, the superior frontal gyrus, and the left premotor, motor, and anterior parietal cortex—areas associated with executive control.

The experiment made use of the Guilty Knowledge Test (GKT), a method of interrogation by polygraph to detect whether a suspect possesses incriminating knowledge of crime scene details that has been adapted to psychophysiological research in the lab. Subjects were presented with a series of cards and asked questions about their content, with 'Truth,' 'Non-Target,' and 'Lie' cards asking whether the subject possessed the card shown (Non-Target response was negative), and one 'Control' card (the Ten of Spades) asking whether it was the Ten of Spades, in order to combat habituation.

Two clusters of significant BOLD signal increases were identified with deception in the results, located in the areas of the right superior frontal gyrus and the prefrontal to dorsal premotor cortex, including the anterior parietal cortex. No areas showed significant decrease in activity. Once again, no comparable increases were identified with truthful responses, providing further evidence for truth-telling as a baseline state. The regions identified with deception confirmed an fMRI detectable neurophysiological difference between deception and truth at the brain activation level.

Mechanics and limitations of identifying deception with fMRI

Although the 2002 study of Langleben and others identified where activation during deception occurs, and thus demonstrated an observable difference between telling a lie and telling a truth, it did not address how this difference arises. Another review led by Spence and others (2004a) sought to clarify this point, theorizing that deception is an executive function. Executive functions may access awareness, but they do not necessarily have to be conscious. Nonetheless, executive functions are associated with increased activation detectable by fMRI, traceable to the prefrontal cortex region of the brain.

The correlation between deception and BOLD signal detections within the prefrontal cortex across multiple studies (Spence 2001, Langleben 2002) suggests that deception belongs to the category of executive function, requiring among other tasks the inhibition of truthful response.

The discovery and confirmation of specific regions within the brain activated by deception might tempt interpretations of the data as means to root out lies from truths, but important distinctions must be drawn between identifying and detecting deception in the human mind. In a study seeking to replicate and improve upon the neural correlates of deception identified by Langleben and others (Kozel 2004), researchers found both discouraging and encouraging results.

Though a higher MRI field strength of 3.0 teslas revealed heightened activation correlates through statistical analysis of the study group as a whole, the findings were not significant enough to isolate deception at the level of the individual. In other words, the results could positively identify five regions of the brain associated with deception across a range of ten subjects, but no single subject showed an activation pattern consistent enough to allow researchers to draw conclusions about the neural correlates of deception at the individual level.

Researchers were also unable to identify a specific cause for the variation across subjects. Unless this extensive individual variability were reduced, the activation patterns observed at the group level could not be used to detect deception. Advances in the fMRI study of basic deception, though considerable, do not yet offer a new 'lie detector' for practical use.

Individual variability is not the only factor complicating the application of fMRI to the identification and detection of deception. A study by Ganis and others (2003) sought to challenge the assumption that there is only one type of deception associated with one set of neural correlates. For this purpose, the experimenters directed participants to answer questions about matters like work experience or a family vacation deceptively, not only with spontaneously produced answers, but also with previously-memorized content. They theorized that lies memorized in advance would rely upon brain regions associated with episodic memory, the recollection of specific experiences, while spontaneous lies would rely upon regions associated with semantic memory, the recollection of abstract knowledge.

Their results confirmed that the neural activation patterns were modulated by the type of lie subjects were telling, with spontaneous lies accessing the bilateral superior and cerebellum region in addition to the right cuneus (associated with the retrieval of visual imagery). The only significantly greater activation associated with pre-memorized scenario lies occurred in the right inferior Brodmann area 10, associated with the retrieval of episodic memories.

Conclusion

The preponderance of fMRI data analyzing complex cognitive processes over the past half-decade has confirmed an identifiable pattern of neural activation associated with deception in a laboratory situation. Significant limitations and deficits of understanding still stand between these discoveries and their application to 'lie detection.' Importantly, studies thus far have not detected any specific pattern of activation signifying 'truth,' only that higher activation correlated with certain regions of the brain accompanies deception. Statistical power cannot currently isolate this data to the individual level, nor can it reveal any information beyond the scope of current experimental conditions, which have drawn a simplistic and somewhat artificial distinction between 'lie' and 'truth' that may not hold in a real world situation (Spence 2004a).

Practical concerns, such as the relatively high cost of obtaining fMRI measurements and the delicate calibrations necessary to ensure that no outside factor such as excess movement contaminates a scan, further ensure that the use of fMRI as a tool to detect deception remains currently infeasible, though future research may eventually allow the possibility (Robinson 2004).

A further clinical usage of fMRI has been suggested in the diagnosis of hysteria disorders, in which subjects feel their limbs to be paralyzed though no physiological damage hinders movement (Spence 2004b); however, this too is currently beyond reach. No study has yet taken into account the factors that could distinguish feigned physical symptoms from those associated with the psychological condition of conversion disorder—for example, the added factors of anxiety, or neurological changes that might be associated with pathological lying or other mental conditions that could encourage malingering (Spence 2004a).

Worth final consideration concerning the use of fMRI to define or detect deception is the issue of neuroethics. Deception, as a behavior with complex moral implications, is a territory fraught with hazards in defining what counts as a 'normal' or 'abnormal' brain pattern and what these physiological signs imply about the individual in question (Illes 2003). Being asked questions of a moral or judgemental nature while monitored by a machine that 'reads your thoughts' could be a significantly distressing experience for many.

Detecting deception in the court of law invites complex questions of legality. Is forced testimony analyzed by an 'infallible' lie detector a violation of the Fifth Amendment, which prohibits coerced self-incrimination (Robinson 2004)? And what would be the standards of admissibility for fMRI evidence in court, taking into consideration the individual variation inherent to patterns of neural activation? With these concerns in mind, it seems that recognizing and publicizing the current weaknesses of fMRI analysis of deception as well as its successes is important for maintaining an accurate public awareness of the capabilities of fMRI to explain the complex workings of the human mind.


Works Cited

DiGirolamo G, Clegg B. 2001. Brain Imaging: Observing Ongoing Neural Activity. Encyclopedia of Life Sciences. http://www.els.net

Donaldson, D. 2004. Parsing brain activity with fMRI and mixed designs: what kind of a state is neuroimaging in?. Trends in Neuroscience 27(8):442-444.

Ganis G, Kosslyn S, Stose S, Thompson W, Yurgelun-Todd D. 2003. Neural Correlates of Different Types of Deception: An fMRI Investigation. Cerebral Cortex 13:830-836.

Illes J. 2003. Neuroethics in a New Era of Neuroimaging. American Journal of Neuroradiology 24(9):1739-1741.

Knight J. 2004. The truth about lying. Nature 428:692-694.

Kozel F, Padget T, George M. 2004. A Replication Study of the Neural Correlates of Deception. Behav. Neurosci. 118(4):852-856.

Langleben D, Schroeder L, Maldjian J, Gur R, McDonald S, Ragland J, O'Brien C, Childress A. 2002. Brain Activity during Simulated Deception: An Event-Related Functional Magnetic Resonance Study. NeuroImage 15:727-732.

Robinson R. 2004. fMRI Beyond the Clinic: Will It Ever Be Ready for Prime Time?. Public Library of Science Biology 2(6):0715-0717.

Spence S, Farrow T, Herford A, Wilkinson I, Zheng Y, Woodruff P. 2001. Behavioural and functional anatomical correlates of deception in humans. NeuroReport 12:2849-2853.

Spence S, Hunter M, Farrow T, Green R, Leung D, Hughes C, Ganesan V. 2004a. A cognitive neurobiological account of deception: evidence from functional neuroimaging. Phil Trans R Soc Lond B 359:1755-1762.

Spence S. 2004b. The deceptive brain. J R Soc Med 97:6-9.