This is Everything2's science writing group, existing to encourage, facilitate and organise the writing and discussion of science on this site. Members are usually willing to give feedback on any writing on scientific topics, answer questions and correct mistakes.

The E2_Science joint user is here to make it easier to collectively edit and maintain the various indexes of science topics: Scientists and all its sub-indexes, physics, astronomy, biochemistry and protein. More may follow. It also collects various useful links on its home node.

Note that there is also a separate e^2 usergroup for the discussion of specifically mathematical topics.


Venerable members of this group:

Oolong@+, CapnTrippy, enth, Professor Pi, RainDropUp, Razhumikin, Anark, The Alchemist, tom f, charlie_b, ariels, esapersona, Siobhan, Tiefling, rdude, liveforever, Catchpole, Blush Response, Serjeant's Muse, pimephalis, BaronWR, abiessu, melknia, IWhoSawTheFace, 10998521, sloebertje, getha, siren, pjd, dgrnx, flyingroc, althorrat, elem_125, DoctorX, RPGeek, redbaker, unperson, Iguanaonastick, Taliesin's Muse, Zarkonnen, SharQ, Calast, idan, heppigirl, The Lush, ncc05, Lifix, Akchizar, Palpz, Two Sheds, Gorgonzola, SciPhi, SyntaxVorlon, Redalien, Berek, fallensparks, GunpowderGreen, dichotomyboi, sehrgut, cordyceps, maverickmath, eien_meru, museman, cpt_ahab, mcd, Pandeism Fish, corvus, decoy hunches, Stuart$+, raincomplex, Tem42@
This group of 71 members is led by Oolong@+


...brute force attacks against 256-bit keys will be infeasible until computers are built from something other than matter and occupy something other than space".

---Bruce Schneier in
Applied Cryptography

As we all know, it has proven very foolish to make predictions about a system's security (see also my wu "Never assume your system is undefeatable"). But within this statement Bruce Schneier is implying that some inherent property of matter, something like a law of nature, is what makes brute force attacks against 256-bit keys infeasible, and not anything about the technology of the devices we use today or the ones we will be using 350 years from now.

One may find clever and sly ways to brute-force keys. One known scheme is the Chinese Lottery which, though never actually put into practice, would work perfectly well: the Chinese government instructs every Chinese TV factory to integrate a brute-force, million-tests-per-second cracking chip into every TV. When the government wants to break a key, it broadcasts the problem over the airwaves to all TV sets, and the numbers say that after about 20 hours the key would have been recovered (with very moderate assumptions about the percentage of people who own a TV).
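To get a feel for those numbers, here is a minimal back-of-envelope sketch. The key size, the number of participating TV sets and the per-chip test rate are my own assumptions (a 56-bit, DES-sized key, one million TVs, one million tests per second each); the write-up itself doesn't pin them down.

# Rough Chinese Lottery arithmetic; all figures below are assumptions.
key_bits = 56                     # assumed DES-sized key
tvs = 1_000_000                   # assumed number of participating TV sets
tests_per_second = 1_000_000      # assumed rate of each cracking chip

keyspace = 2 ** key_bits
total_rate = tvs * tests_per_second             # keys tested per second, overall
worst_case_hours = keyspace / total_rate / 3600
print(f"{worst_case_hours:.1f} hours to sweep the whole keyspace")
# -> about 20 hours; on average the key turns up in half that time.

With those assumptions the sweep takes roughly 20 hours, and every extra key bit doubles it.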

But whatever one may come up with, there are some things that seem very unlikely to change. Such are the laws of thermodynamics, and especially the second one. One of the implications of the second law of thermodynamics is that information requires energy in order to be represented. So, if we have a system (a "computer" in our language) and we want it to record the value of a single 'bit', we need energy of at least kT, where 'k' is the Boltzmann constant (k = 1.38×10⁻²³ Joule/K) and 'T' is the system's temperature in Kelvin. This comes from the relation which connects the energy of a system with its temperature (E = kT). Supposing that our ideal computer works at the average temperature of the universe (3.2 K), so that it requires no cooling, the energy required to record a bit would be around 4.416×10⁻²³ Joule.

Now, to make things clear, let's play a bit (pun intended ;-) with numbers. The amount of energy our sun radiates in a year is about 1.21×10³⁴ Joule. Dividing by the per-bit-change energy, we find that the whole annual output of our sun would power our ideal computer through about 2.74×10⁵⁶ bit changes. That is just enough to run a 187-bit counter through all of its states. Just think... an ideal computer, working at the freezing temperature of 3.2 Kelvin, consuming all the energy of our sun for a year... and all that merely to cycle a 187-bit counter through its states, let alone to perform the intense computations that each of those states requires to actually test a key...
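The figures in that paragraph are easy to reproduce. A minimal sketch, using only the constants quoted above (k = 1.38×10⁻²³ J/K, T = 3.2 K, 1.21×10³⁴ J of solar output per year):

import math

k = 1.38e-23          # Boltzmann constant, J/K
T = 3.2               # assumed operating temperature, K
sun_year = 1.21e34    # energy our sun radiates in one year, J

energy_per_bit = k * T                    # ~4.416e-23 J per bit change
bit_changes = sun_year / energy_per_bit   # ~2.74e56 bit changes
counter_bits = math.log2(bit_changes)     # ~187 bits

print(f"{energy_per_bit:.3e} J per bit change")
print(f"{bit_changes:.2e} bit changes -> a {counter_bits:.0f}-bit counter")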

Even sucking all the energy out of a supernova would be just enough to step through all the states of a 219-bit counter... So it is clear that a 256-bit key (which, merely to be counted through while we brute-force it on our ideal computer, would require the energy that about 4×10²⁰ suns like ours radiate in a year...) seems errrr... kinda difficult to brute-force...
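Extending the same sketch to the supernova and to the 256-bit requirement: the supernova energy is my own assumed order-of-magnitude figure (about 10⁴⁴ Joule), which the write-up doesn't quote, so the counter size it gives lands near, rather than exactly on, the 219 bits mentioned above.

import math

k, T = 1.38e-23, 3.2
energy_per_bit = k * T
sun_year = 1.21e34

supernova = 1e44      # J, assumed order-of-magnitude energy of a supernova
print(f"a supernova drives a ~{math.log2(supernova / energy_per_bit):.0f}-bit counter")

suns_needed = 2**256 * energy_per_bit / sun_year
print(f"a 256-bit counter needs ~{suns_needed:.1e} sun-years of energy")   # ~4e20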

And the amazing thing to keep in mind is that all of the above calculations are completely independent of contemporary and future advances in technology... As long as computers are made of matter, 256-bit keys will be secure against brute force. Except, of course... if we break the second law of thermodynamics :-). But even then, who would give a damn about 256-bit keys being rendered insecure, when we would have defeated the most persistent law of nature?

Clues taken from:
Applied Cryptography 2nd edition, Bruce Schneier
The Universal Constants, Gilles Cohen-Tannoudji

The Superposition Principle

In Physics, the principle of superposition says roughly that if two particular sorts of behavior are allowed in a physical system separately, then if you try to cause both simultaneously, the result will be the sum of the two individual behaviors (where "sum" is defined in a mathematically sensible way), called the superposition of the two. This principle is not true in all (or perhaps even most) physical systems, but it is often at least approximately true¹. The "principle" as I've just stated it undoubtedly sounds vague. That is because it is a very general property that exhibits itself in many different sorts of systems, which are then said to "obey the superposition principle". We can make the definition more exact by relating it to the mathematical idea of linearity, which I discuss in the last section.

When it applies, the superposition principle is extremely useful, because it allows us to take a complex situation and think of it as the sum of many simpler parts. By being able to break down a complex problem into many simple problems, we are able to understand a lot more than we otherwise would. In fact, most of our understanding of the physical world concerns situations where the superposition principle is true, and we are only recently starting to understand systems without superposition, in the fields of nonlinear dynamics and chaos.

Now let's talk about a specific example of superposition to make this idea more concrete.

An Example of Superposition

The best example I can think of to illustrate the idea of superposition here is that of waves traveling on an idealized string. If this string obeys the superposition principle, and you can make two different wave packets (meaning two waves of certain shapes), A and B, on the string, then the principle tells you that when the wave packets overlap the result will just be as though one wave packet were stacked on top of the other; the height of the result is the sum of the heights of the individual waveforms, in other words. At that point the two wave packets are said to be "superposed"². Furthermore, once they no longer overlap, each wave packet will continue on as if the other were never there. Here is a crude picture of what this would look like using a series of "freeze frames" at successive times:


  A  ______                                       B
    |      | --->                        <---  /\
    |      |                                  /  \
____|      |_________________________________/    \_____



                      _____/|
                     |      | /\
                     |      |/  \
_____________________|           \_______________________



                            /|
                           / |
                       ___/  |
                      |       \
                      |        \
______________________|         \________________________



                          /\
                         /  \
                        /    \ 
                       |      |
                       |      |
_______________________|      |__________________________



                          |\
                          | \ 
                          |  \____
                         /       |
                        /        |
_______________________/         |_______________________



                           |\_____
                        /\ |      |
                       /  \|      |
______________________/           |______________________



    B                                           ______  A
 <--- /\                                       |      | --->
     /  \                                      |      |
____/    \_____________________________________|      |____

I hope that diagram is clear. Please /msg me if it isn't and try to be as specific as possible about what confuses you. The first and last diagrams are supposed to be long before and after the two wave packets cross.
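If you would like to play with this numerically, here is a minimal sketch of my own (not part of the original illustration): two wave packets on an ideal string, one moving right and one moving left, with the total displacement defined as their sum.

import numpy as np

def packet_a(x):
    # square-ish pulse, loosely like wave packet A in the figure
    return np.where(np.abs(x) < 1.0, 1.0, 0.0)

def packet_b(x):
    # triangular pulse, loosely like wave packet B
    return np.clip(1.0 - np.abs(x), 0.0, None)

def string_displacement(x, t, v=1.0):
    """Superposition: A moves right from x = -10, B moves left from x = +10."""
    return packet_a(x - (-10.0) - v * t) + packet_b(x - 10.0 + v * t)

x = np.linspace(-15.0, 15.0, 601)
for t in (0.0, 10.0, 20.0):                   # before, during, after the overlap
    print(f"t = {t:4.0f}: max displacement = {string_displacement(x, t).max():.2f}")
# While the packets overlap they simply stack (max ~2); afterwards each one
# continues on unchanged, just as in the freeze frames above.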

Applicability of Superposition

Perhaps one of the most important examples of where the principle of superposition is exactly true is the theory of electromagnetic fields in vacuum (the classical theory, anyway³). If two different electric and magnetic fields can exist in a certain region, then the sum of those fields is also a possible behavior. Also, if one configuration of electric charges and currents causes one field, and a second set of charges and currents causes a different field, then when both sets of charges and currents are present they will cause the sum of the two fields. It is this last property that makes the superposition principle particularly useful in studying electromagnetism. When you have a complicated source (some set of currents and charges), you can think of that source as the sum of many simpler sources. You can then figure out the effect of each of those sources separately. Afterward, you can add up the resulting fields to get the total field that the total source will cause. The simpler sources can be point charges, sheets of charge, or other simple charge distributions, and you can think of many similarly simple current sources. This technique can make the situation much easier to figure out. Another approach using the same idea is a multipole expansion. Even if the source can't be described exactly by a few simple sources, this will often give a good approximate description.
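As a concrete, simple-minded illustration of that decomposition (my own sketch, with made-up charges and positions), here is the field of a two-charge source computed by adding the fields of the individual point charges:

import numpy as np

K = 8.99e9   # Coulomb constant, N*m^2/C^2

def e_field(q, x_charge, x):
    """x-component of the field of a point charge q sitting at x_charge (1-D)."""
    r = x - x_charge
    return K * q * np.sign(r) / r**2

q1, x1 = 1e-9, -0.2      # +1 nC at x = -0.2 m (assumed)
q2, x2 = -2e-9, 0.3      # -2 nC at x = +0.3 m (assumed)

field_points = np.array([-0.45, -0.05, 0.10, 0.45])   # chosen away from the charges
total_field = e_field(q1, x1, field_points) + e_field(q2, x2, field_points)
print(total_field)       # N/C at each field point: just the sum of the two pieces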

In many other cases, the superposition principle holds for a good approximate description of the system in question. Most systems⁴ that have a stable equilibrium configuration will behave linearly for small deviations from that equilibrium[2], and disturbances from equilibrium should stay small (since it's a stable equilibrium), so you can describe them approximately with a theory that obeys the superposition principle as long as the motion does not start out too large. What is "too large" is defined by how far the system can go from equilibrium before nonlinear behavior becomes important, which depends on the details of the system in question. This sort of approximate validity is usually the case for electromagnetic fields inside a medium, like glass or water. The superposition principle will be true for small field strengths, but for strong fields it will not hold.

One example of a situation in which the superposition principle holds approximately is an electromagnet with an iron core. Putting electrical current into the electromagnet causes a magnetic field in the core. The core, which is often a ferromagnetic material, is there because the field caused in it by the current is stronger than the field that would be caused in empty space. If you add two small currents in the electromagnet, the resulting field will be approximately the sum of the fields that would have been caused by each current alone, obeying the superposition principle; however, if you add a large current, the increase in magnetic field within the core will not be as large as you would expect from the superposition principle, because eventually the iron core begins to reach "saturation". This is the point where nonlinearities in the behavior of the core have become important.⁵
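To see what that only-approximately-linear behavior looks like, here is a toy model of my own: a tanh curve standing in for a real magnetization curve (an assumption for illustration, not the behavior of any particular core material).

import numpy as np

def core_field(current, b_sat=1.5, slope=2.0):
    """Toy magnetization curve: linear for small currents, saturating at b_sat tesla."""
    return b_sat * np.tanh(slope * current / b_sat)

for i1, i2 in [(0.01, 0.02), (1.0, 2.0)]:
    predicted = core_field(i1) + core_field(i2)   # what superposition would give
    actual = core_field(i1 + i2)                  # what the toy core actually gives
    print(f"currents {i1} + {i2} A: superposition {predicted:.3f} T, actual {actual:.3f} T")
# For the small currents the two numbers agree closely; for the large currents
# the actual field falls well short of the superposition prediction.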

The most common context in which to discuss superposition is the study of waves. A linear wave theory is one that describes waves in a system in which the superposition principle holds exactly, or at least approximately. Systems described by a linear wave theory include electromagnetic waves (a.k.a. light), gravitational waves, sound waves, and mechanical vibrations. Waves occur in a system that is close to stable equilibrium, so, as I discussed before, this often means that the superposition principle will hold as long as the waves are small enough. When we have the superposition principle to rely on, even the motion of complicated wave packets can be understood, because we can understand them as nothing more than the sum of many simple, periodic waves. As long as we understand how the simple waves behave, we can add up the results to understand how the complex wave packet behaves. When two different waves come together, the result of superposing the two waves is usually called interference. In systems that only approximately obey the superposition principle, when the waves get too large and we no longer have simple superpositions, we enter the realm of waves in nonlinear media, which are much more difficult to understand and predict. Most of the time at a high school or undergraduate level you will only ever talk about waves in systems that obey the superposition principle, because those can be understood so much more simply and completely.
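The "complicated packet = sum of simple periodic waves" idea can also be made concrete in a few lines. This is again my own sketch, with an arbitrarily chosen smooth spectrum: a localized pulse built up from nothing but cosines.

import numpy as np

x = np.linspace(-np.pi, np.pi, 2001)

packet = np.zeros_like(x)
for n in range(1, 201):
    amplitude = np.exp(-(n / 40.0) ** 2)   # assumed smooth spectrum of mode amplitudes
    packet += amplitude * np.cos(n * x)    # each term is a simple periodic wave

packet /= packet.max()
width = np.ptp(x[packet > 0.5])            # rough width of the central pulse
print(f"the 200 cosines sum to a narrow pulse about {width:.2f} radians wide")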

The superposition principle is useful for understanding a huge variety of systems. The reason it's so useful is that it allows us to break down a complex problem into many simple problems and solve it piece by piece, since we can get the result just by adding up the results of the individual pieces. Practically all widely studied, well understood systems obey the superposition principle; the concept of superposition is central to most of the study of Physics. Systems that don't are said to be "nonlinear" and are studied in nonlinear dynamics and chaos, but usually you can't say nearly as much about them as you can about linear systems with superposition. Superposition is the consequence of a mathematical property called linearity, which I'll finish up by describing.

Mathematically Defining Superposition

I've said that systems that obey the superposition principle are called linear systems. One way this is often defined mathematically is to say that the quantities of the system you're interested in behave according to an equation of motion that is a linear differential equation:

aₙ(t)*(dⁿy/dtⁿ) + … + a₂(t)*(d²y/dt²) + a₁(t)*(dy/dt) + a₀(t)*y = 0

where dⁿy/dtⁿ is the nth derivative of y(t) written in Leibniz notation. The important feature of that equation is that if a function y_B(t) is a solution and a function y_C(t) is a solution, then the function y_D(t) = y_B(t) + y_C(t) is also a solution. y_D is the superposition of y_B and y_C. While the above is an ODE, it could just as well have been a PDE. To put it more generally and formally, a linear system is one in which the solutions (the possible behaviors) form a vector space. This is true of the solutions of the differential equation above.
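A quick numerical check of that statement, using the simple harmonic oscillator y'' + y = 0 as a stand-in linear equation (my choice of example; any linear ODE would do): sin(t) and cos(t) are solutions, and so is their sum.

import numpy as np

t = np.linspace(0.0, 10.0, 201)

def residual(y_func, t, h=1e-4):
    """Numerically evaluate y'' + y for a candidate solution of y'' + y = 0."""
    y = y_func(t)
    y2 = (y_func(t + h) - 2.0 * y + y_func(t - h)) / h**2   # central-difference y''
    return y2 + y

y_B = np.sin
y_C = np.cos
y_D = lambda s: np.sin(s) + np.cos(s)    # the superposition of the two

for name, f in [("y_B", y_B), ("y_C", y_C), ("y_D = y_B + y_C", y_D)]:
    print(f"{name}: max |y'' + y| = {np.max(np.abs(residual(f, t))):.1e}")
# All three residuals are ~0 (up to finite-difference error), so the sum of two
# solutions is again a solution, which is exactly the superposition principle.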


  1. How can something be approximately true? Isn't truth a binary thing? What I mean here is that you can use a model which obeys the superposition principle as a quantitatively good approximate description of the system.
  2. We say "superpose" to mean adding one to the other in this certain way, as opposed to if they were "superimposed", which would just be drawing one on top of the other.
  3. In Quantum Electrodynamics, higher order loop corrections add photon-photon interactions which make even the vacuum nonlinear[1]. So, for very strong fields, we would no longer have the superposition principle for EM fields. Still, it is a very good approximation of the true behavior in most situations. Also, we still would have the superposition principle for the Hilbert space of the quantum system.
  4. Generally here I'm speaking of a conservative system where the second derivatives of the potential energy at equilibrium are non-vanishing. A system where the quadratic part of the energy vanishes at equilibrium will be approximately quartic near equilibrium and, thus, have different behavior.
  5. In actuality, nonlinearities may be important well before saturation, unless the magnetic susceptibility curve is very straight. But in all cases, the core will reach saturation eventually and nonlinear effects are definitely important there.

Sources:

  1. Phys. Rev. D 2, 2341 (1970)
  2. Goldstein, Poole, and Safko, Classical Mechanics 3rd Ed. (Addison Wesley, San Francisco, 2002)
  3. Howard Georgi, The Physics of Waves (Prentice Hall, Englewood Cliffs, New Jersey, 1993)
  4. Serway, Physics (Saunders College Publishing, Philadelphia, 1996)

Avogadro's number (N_A) is a fundamental physical constant used to convert moles of a substance into the number of particles which make up that substance. The best measured value is 6.022 141 5×10²³ mol⁻¹ (with an uncertainty of 0.000 001 0×10²³).

Its most common application is in finding the number of atoms in n grams of an element whose atomic mass is n atomic mass units.

                                     grams
Number of atoms of an element = --------------- x Avogadro's Number
                                 atomic weight

Number of atoms of an element = moles x Avogadro's Number

Of course, its use is not limited to elements. To find the number of molecules of a compound, use the relative formula mass (add up the atomic masses of all the elements that make up the molecule). Avogadro's number is the number of molecules (or atoms) contained in an amount of a substance whose mass in grams is numerically equal to that substance's formula (or atomic) weight. More generally, it is the number of particles in one mole of anything. A mole of pennies, for example, is enough money to buy everyone in the world 50 million cars.
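Both conversions, and that last claim about the pennies, are easy to check. A minimal sketch; the sample substance, world population and car price are my own assumed figures:

N_A = 6.0221415e23       # Avogadro's number, mol^-1

# Molecules in 36 g of water (formula mass of H2O is about 18 g/mol).
grams, formula_mass = 36.0, 18.0
print(f"{grams / formula_mass * N_A:.2e} molecules of water")     # ~1.2e24

# Sanity check on the pennies claim, with assumed figures:
world_population = 6.5e9                  # assumed
car_price = 20_000.0                      # assumed, dollars
dollars = N_A * 0.01                      # a mole of pennies, in dollars
cars_each = dollars / world_population / car_price
print(f"about {cars_each / 1e6:.0f} million cars for everyone on Earth")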

Formal Definition

Specifically, Avogadro's number is defined as the number of carbon-12 (¹²C) atoms which make up 0.012 kilograms of carbon-12. If this sounds like a circuitous way of arriving at a value, it is. The more direct quantity, the number of hydrogen-1 (¹H) atoms which make up one gram of hydrogen, was deemed inappropriate because of the relatively large amount of deuterium (²H) contaminating hydrogen samples. Carbon has a more constant isotopic composition, and very pure samples of carbon-12 are relatively easy to produce. Carbon-12 is the standard by which all other atomic masses are measured, being defined as exactly 12 amu.

It should be noted, however, that a mole of carbon-12 weighs less than six moles of protons plus six moles of neutrons. For an explanation of why, see mass defect.

Avogadro's Law

Count Lorenzo Romano Amedeo Carlo Avogadro di Quaregna e Cerreto (1776–1856) actually had little to do directly with the number which bears his name. However, the work he did in chemistry was important enough to its eventual calculation that the number was named in his honor. It was first calculated after Avogadro's death, in 1865, by Johann Josef Loschmidt using kinetic gas theory, which is based on Avogadro's Law: "equal volumes of all [ideal] gases at the same temperature and pressure contain the same number of molecules".

Calculating Avogadro's Number

The process of calculating Avogadro's number is complex. Fortunately many of these steps have already been done for us, such as building the Periodic Table of the Elements.

  1. Find the volumes of elemental gases necessary to combine to form various known compounds. For example, N₂O is made from 2 parts N to 1 part O by volume. NO is made from 1 part N to 1 part O by volume. Joseph Louis Gay-Lussac (1778–1850) has done this step for us. No, I don't know how he knew it was N₂O before they built the periodic table.

  2. Realize that this may mean that equal volumes of gases at the same temperature and pressure contain equal numbers of particles. This was Avogadro's Hypothesis. It was later confirmed for ideal gases and is now known as Avogadro's Law.

  3. Use Avogadro's Law to determine the atomic mass ratio of two equal-volume samples of elemental gases at STP. For example, 1 L of hydrogen masses 0.08 g and 1 L of nitrogen masses 1.12 g. Because Avogadro's Law states that 1 L of hydrogen and 1 L of nitrogen contain an equal number of molecules at STP, we know that a nitrogen molecule is 14 times heavier (1.12 g/0.08 g) than a hydrogen molecule; and since both gases are diatomic, a nitrogen atom is likewise 14 times heavier than a hydrogen atom.

  4. Since hydrogen is the lightest substance known, assign it a value of 1 and assign every other element a value equal to that element's mass relative to hydrogen. We now have our periodic table of the elements.

  5. Define a "mole" as the quantity of carbon-12 necessary to obtain 0.012 kg. Define Avogadro's number as the number of atoms of carbon-12 this is.

  6. Use X-ray crystallography to calculate the number of titanium atoms in one mole (kids, get your parents to help you with this step).

Titanium's crystalline structure is made of body-centered cubic unit cells having an edge length of 330.6 pm. The density of titanium is 4.401 g/cm³. There are 47.88 grams of titanium in one mole. Therefore:

 2 atoms Ti       1 unit cell        47.88 g Ti       1 cm³
------------ × ------------------ × ------------ × ------------ = 6.02×10²³ atoms of Ti/mole
 unit cell      (3.306×10⁻⁸ cm)³        mole         4.401 g Ti
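Here is the same arithmetic as a short sketch, with the numbers taken straight from the write-up above:

atoms_per_cell = 2        # body-centered cubic: 8 corners x 1/8 + 1 center
edge_cm = 3.306e-8        # 330.6 pm unit-cell edge, in cm
density = 4.401           # g/cm^3
molar_mass = 47.88        # g of Ti per mole

cell_volume = edge_cm ** 3                        # cm^3 per unit cell
atoms_per_cm3 = atoms_per_cell / cell_volume
avogadro = atoms_per_cm3 * molar_mass / density   # atoms per mole
print(f"{avogadro:.2e} atoms of Ti per mole")     # ~6.02e23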

The National Institute of Standards and Technology (NIST) used this procedure with silicon crystals to obtain the current standard value of Avogadro's number used today.

Sources:
http://gemini.tntech.edu/~tfurtsch/scihist/avogadro.htm
Herron, Kukla, Schrader, Erickson, and DiSpezio. Chemistry. Lexington, Massachusetts: Heath and Company, 1987.

† Original definition by datagirl.

Thomas Kuhn challenged the positivist ideal that science is an objective means of understanding the world around us. Science, in Kuhn's view, is intrinsically conservative, and reflects the vested interest of established scientists in maintaining the status quo, so that their ideas do not become obsolete. Therefore, like anybody else, scientists inherit what are seen as "common sense" views of the world (what Alvin Gouldner referred to as "domain assumptions"). For Kuhn, scientific belief is made up of socially constructed paradigms, which govern any research that takes place. Scientists, particularly young, unknown ones, are expected to conform to the dominant paradigm. This is why, when a child in a school chemistry laboratory finds the boiling point of water to be something other than 100°C, they are informed by their teacher that they are wrong. Of course, to anyone living in the current paradigm, the notion that the child might be right seems ridiculous, and their findings are dismissed as bad science. Let us consider some of the dominant paradigms throughout history:

  • The Earth is a flat plane.
  • The Earth is the object around which the Universe rotates.
  • People with black skin colour are genetically less evolved than people with white skin colour.
  • People can never leave the Earth.

Therefore, it is conceivable that years from now, people will be chortling at the memory of those noble savages who thought that people could not travel at the speed of light, that Jupiter is the largest planet in our solar system, that water boils at 100°C. Kuhn argues that paradigms only break down when anomalies (evidence that contradicts the paradigm) become too obvious to overlook. For example, some may have felt they could ignore Galileo Galilei and his telescope, but when someone circumnavigated the globe, people had to take notice. This is a paradigm shift.

Length contraction, also known as Lorentz contraction, is a physical consequence of Einstein's special theory of relativity. The basic interpretation is that moving objects are observed to be contracted in length. In order to understand why this is the case, we'll need to understand a few premises:

1) The speed of light is the same in all reference frames.

2) The progression of time is relative from one reference frame to another, but we can calculate the exact ratio by which time is dilated (I recommend reading about time dilation for details).

Given these two premises, we can devise a system for measuring length which transforms properly from one reference frame to another. This method is similar to the light-clock from the time dilation node, but essentially works "backwards". By measuring the travel time for a beam of light, we should be able to accurately calculate the distance it travels. Since we know the speed of light is the same in different reference frames, and we know exactly how time transforms from one reference frame to another, we should be able to easily calculate how length transforms from one reference frame to another:

|                     |
|<<<<<<<<<<<<<<<<<<<<<|
|>>>>>>>>>>>>>>>>>>>>>|
|                     |
This is a crude model for our "ruler", which is really nothing more than two mirrors. The length between the two mirrors is determined by measuring the amount of time a beam of light takes to make the round trip:

2d₀ = cΔt₀
Or, Δt₀ = 2d₀/c

Now that we have a system for measuring distance, let's see what happens when we look at it from a different reference frame. If we put our "ruler" on a train moving along the ground (to the right), we notice from the ground that because the train is moving, the far side of the car (right side) will have changed its position by the time the light reaches it, lengthening the travel time. Likewise, on the return trip, the near side of the car (left side) will move, shortening the travel time.

 |            |                |
 |            |<<<<<<<<<<<<<<<<|
 |>>>>>>>>>>>>|>>>>>>>>>>>>>>>>|
 |            |                |
What we need now is to mathematically relate our previous d₀ to our new measured distance, d, using what we know about the changes in travel time.

The new travel times Δt₁ and Δt₂ can be calculated by simply adding the motion of the train to the distance in the equation above:

Δt₁ = (d + vΔt₁)/c
Δt₂ = (d - vΔt₂)/c

Δt₁ = d/(c - v)
Δt₂ = d/(c + v)

The total time for the round trip, Δt:

Δt = Δt₁ + Δt₂
   = d(1/(c - v) + 1/(c + v))
   = d(2c/(c² - v²))
Δt = (d/c)(2/(1 - v²/c²))

Now, we need a formula which relates Δt₀ to Δt. This we can determine from time dilation, the calculation of which has been done in the time dilation node. The net result of this was:

Δt = Δt₀/√(1 - v²/c²)

Solving this equation for Δt₀ and substituting the expression for Δt found above, we see that

Δt₀ = (2d/c)(1/√(1 - v²/c²))

Finally, we put it all together with our first equation for Δt₀, and find that

d₀ = d/√(1 - v²/c²).

Distances in moving reference frames are contracted along the direction of motion. Thus, a meterstick on the train parallel to the track would be shorter than a meterstick on the ground, from the perspective of an observer on the ground.
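To put a number on the contraction, here is a minimal sketch; the train speed is an assumed figure. It also cross-checks the result against the light-clock round-trip times used in the derivation.

import math

c = 3.0e8        # speed of light, m/s
v = 0.6 * c      # assumed train speed
d0 = 1.0         # proper length of the meterstick on the train, m

gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)
d = d0 / gamma                                      # length measured from the ground
print(f"the ground observer measures {d:.3f} m")    # 0.800 m at 0.6c

dt0 = 2.0 * d0 / c                             # round trip measured on the train
dt = d / (c - v) + d / (c + v)                 # round trip measured from the ground
print(math.isclose(dt, gamma * dt0))           # True: consistent with time dilation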

Now we see a paradox analogous to the twin paradox of time dilation. If all observers are on equal standing, we could do this same experiment, but reversing the roles of the observers, and the observer on the train should observe that the ruler on the ground is shorter than his own. Each observer measures that the other's meterstick is shorter. This seems to be contradictory information. How is this resolved?

As in the case of time dilation, the paradox is resolved by the relativity of simultaneity. Let's imagine we have a regular meterstick and want to measure the length of another object with it. How exactly is this measurement carried out? Presumably, one reads off the values on the meterstick at both ends of the object, and subtracts to find the difference. If the object you're measuring is moving, however, one must be sure to measure both ends of the object simultaneously, or else the object will have changed its position on the meterstick between measurements. Here is where the ambiguity comes in. Since simultaneity is relative, the observer on the ground watching the observer on the train carry out his measurement will see him take his two readings at different times, invalidating his measurement of length, from the ground observer's perspective. It's no wonder to the grounded observer that the train observer's measurement says the ground meterstick is shorter, since he measured improperly, from the ground frame's point of view. Likewise, the observer on the train sees a similar story while watching the ground observer carry out his measurement. Both observers properly calculate that the other's ruler is shorter, and furthermore, each observes that the other carries out his measurement improperly. The moral of the story: all of these observations are relative. For a very interesting and more thorough discussion, read about The Barn And The Pole: A Relativity Paradox.

One final interesting point regarding length contraction revolves around the difference between what one sees and what one observes. In a moving reference frame, an observer with all the laws of physics at his disposal can calculate properly that the length of an object becomes contracted. However, this is only after this observer takes into account the physics of how light rays can get from the object to his field of vision. Such a calculation with all of these concepts taken into account is what we can call an observation. What the observer sees is an entirely different picture. Specifically, instead of seeing an object contracted, the observer sees an object rotated. Here's why:

          ______
-------   |     |
 -------  |     |
-------   |_____|

A cube moves through space at a velocity close to the speed of light. Imagine there is an observer directly beneath the cube, with respect to this picture. The cube's length is contracted:
          _____
-------   |    |
 -------  |    |
-------   |____|

Now, this cube is moving so fast that light from the top right corner of the cube will not be able to get past the bottom corner unless it is moving at an angle with respect to the right face of the cube. The exact angle would be such that tan(θ) = v/c. From the perspective of an observer at this angle, the face appears to be pointed along this trajectory. For the right edge, the opposite happens. Light rays from the top left corner can be pointed into the cube, at this same angle with respect to the face, and it can still be seen by an observer. Thus an observer at right angles with the cube's velocity can easily see all of the left face, but the right face remains hidden. The picture now looks something like this:
          _____
-------   \    \
 -------   \    \
-------     \____\

Now, if we assume that the observer does not have an acute sense of depth perception (we could assume that the cube was too far away to tell the difference between distances on the cube), we can distort the distances so that our picture now looks like:
              _
          __-- \
-------   \     \
 -------   \    _\
-------     \_--

After trudging through all of this ASCII art, I hope that I have shown that it is at least plausible that what one sees is in fact a rotation, even though the actual physical effect is a contraction. The difference is that the rotation is illusory, while the contraction is physical.

In principle, when one takes into account time dilation, length contraction, and the relativity of simultaneity, one unfolds the entirety of Einstein's Special Theory of Relativity.