Suppose that Z represents the number of writeups on E2 in a typical day, and that it is a function of Y, representing the number of noders, and X, representing the number of editors with nuke powers. (This is a deliberate oversimplification of how E2 works, but if you think about it, it's not entirely without merit.) If the number of noders increases while the number of editors remains constant, then the number of writeups increases; if the number of editors increases while the number of noders remains constant, then the number of writeups decreases.

If the number of noders is relatively low, then the number of writeups will decrease more or less linearly as the number of editors increases:

Z  ___                             
|     \___                         
|         \__                      
|            \__                   
|               \__                
|                  \_              
|                    \__           
|                       \__        
|                          \___    
|                              \___
|___________________________________ X

As the number of noders increases, however, something happens. Possibly the average quality of writeups decreases, so as the number of editors increases, they'll find more stuff to nuke and the number of writeups will drop off more quickly:

Z  _______                         
|         \___                     
|             \__                  
|                \_                
|                  \               
|                   |              
|                   |              
|                    \_            
|                      \__         
|                         \___     
|                             \____
|___________________________________ X

At a certain number of editors, the graph becomes unstable. This might be because the larger pool of editors agrees that higher standards of quality need to be enforced, for instance. The result of the higher standards is that fewer editors can justifiably nuke more writeups. However, even with the higher standards in place, there is still a minimum number of editors needed to perform a certain number of nukes. Thus a continued increase in the number of nukes (decrease in the number of writeups) requires an increase in the number of editors again. The result is that the graph now folds over on itself:

Z  _________                       
|           \_____                 
|                 \____            
|                      \           
|                       |          
|                     _/           
|                   _/             
|                 _/               
|                /                 
|               |                  
|                \____             
|                     \_____       
|                           \______
|___________________________________ X

We just explained what happens if you want to gradually increase the number of nukes: you need to decrease the number of editors, then increase it again. But what happens if you continually increase the number of editors, moving from left to right across the graph? Then you'll reach a point where the number of writeups suddenly and unexpectedly plummets due to the unstable nature of the graph. This sudden drop is what René Thom called a catastrophe. (Similarly, if you were to steadily decrease the number of editors, you'd reach a point where the number of writeups would suddenly increase as the number of nukes suddenly drops.)
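
Those sudden jumps can be sketched numerically. Here's a minimal sketch, assuming the standard cusp potential V(z) = z^4/4 + a*z^2/2 - b*z in place of any real E2 data -- the variables and numbers are purely illustrative:

```python
import numpy as np

def stable_equilibria(a, b):
    """Stable equilibria: real roots of z^3 + a*z - b = 0 with V''(z) > 0."""
    roots = np.roots([1.0, 0.0, a, -b])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return sorted(z for z in real if 3*z*z + a > 0)   # V''(z) = 3z^2 + a

def sweep(a, b_values, z_start):
    """Follow the nearest stable branch as b sweeps, like a sticky ball."""
    path, z = [], z_start
    for b in b_values:
        z = min(stable_equilibria(a, b), key=lambda s: abs(s - z))
        path.append(z)
    return path

a = -3.0                                      # deep enough in for two sheets
bs = np.linspace(-3.0, 3.0, 601)
forward = sweep(a, bs, z_start=-2.0)          # increasing the control...
backward = sweep(a, bs[::-1], z_start=2.0)    # ...and then decreasing it
jump_fwd = max(abs(np.diff(forward)))         # the sudden, large drop
jump_bwd = max(abs(np.diff(backward)))
print(jump_fwd > 1.0, jump_bwd > 1.0)         # both sweeps hit a catastrophe
```

The jump in the forward sweep happens at a different value of b than in the backward sweep, which is exactly the one-way behavior described above.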

If you graphed all three variables in three dimensions, it would look something like this:

             /     \______
            /             \_______
           /                     /
          /      cusp point     /
         /         /|          /
        /         / /         /
       /___      / |         /
           \____/ /         /
             /  \ fold point
            /___/         /
fold point /             /
           \____        /
                           |____ X

The fold points are where catastrophes occur, representing local maxima and minima, and they form two curved lines along the graph. The cusp point is the point where both fold lines intersect. Envision the current state of all three variables not as a point on the graph, but as a slightly sticky ball rolling along the surface of it. The ball will adhere to the top of the surface as you roll it up and down the graph, but when it reaches a fold point, gravity will overcome its stickiness and it will fall to the lower part of the surface.
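
The picture above is the standard cusp surface. Assuming the usual textbook equation z^3 + a*z - b = 0 for the surface (with a and b standing in for the two input axes), the fold lines and the cusp point can be located exactly; this sketch just checks the standard formula:

```python
import numpy as np

def fold_point(z):
    """The surface z^3 + a*z - b = 0 folds where it is vertical: 3z^2 + a = 0."""
    a = -3.0 * z * z          # from 3z^2 + a = 0
    b = z**3 + a * z          # substitute back into the surface equation
    return a, b

# Every fold point lies on the cusp discriminant 4a^3 + 27b^2 = 0 ...
fold_points = [fold_point(z) for z in np.linspace(-2.0, 2.0, 9)]
discriminants = [4*a**3 + 27*b**2 for a, b in fold_points]
print(max(abs(d) for d in discriminants))   # ~0: all sit on the fold set
# ... and the two fold lines meet at the cusp point, a = b = 0:
print(fold_point(0.0))
```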

René Thom developed the mathematics of catastrophe theory (not to be confused with the religious doctrine of catastrophism) in the 1960s and early 1970s as a solution, of all things, to problems in developmental biology. It was (and still is) a mystery exactly how an embryo of identical, undifferentiated cells can turn into an organism with differentiated lung cells, blood cells, heart cells, brain cells, skin cells and muscle cells. Morphogens, chemicals that activate or deactivate certain parts of the cell's DNA, are believed to be responsible, but it's unknown how the morphogens appear in specific parts of the embryo during development, let alone how they produce such radically different cells side-by-side.

Thom believed that catastrophe theory could present a solution. By representing the locations of morphogens as input variables for a multidimensional function (many more than the three in the graph above), the state of each cell could be represented as a stable part on a folding graph. The catastrophes on the graph would represent places in the embryo where, for instance, muscle cells became nerve cells or skin cells. However, catastrophe theory is just a mathematical model for any type of system. Eventually, Thom and other mathematicians saw implications in understanding the sudden unexpected changes that occur in engineering and physics, and even sociology and politics.

In January 1976, Newsweek magazine presented a rosy full-page article on some of these implications of catastrophe theory. A backlash followed within the mathematical community, which derided the theory as unable to predict catastrophes -- especially social and political ones -- nearly as well as its proponents claimed. But those models involve simplifying an extremely complex environment down to a handful of variables, something that cannot be done if prediction is the goal. Catastrophe theory can, however, be used to model those environments and explain sudden changes after they've happened.

In order to predict catastrophes instead of just modelling them, you would need a very simple environment with a limited number of variables -- such as you might have in physics or engineering. A horizontal load-bearing beam, for instance, can be modelled according to the amount of horizontal force placed on either end as well as the amount of vertical force on the middle of the beam. The upward or downward displacement of this beam would be the output of a function with two inputs (the horizontal and vertical forces). If you increase the horizontal force on the beam with no vertical force, the beam will buckle upwards; if you add vertical force by placing a load on the beam, it will bend downwards. (You can represent this at your desk with a strip of paper and some paperclips.) However, if the beam is buckled very far upwards and the load is gradually increased, the beam will eventually collapse under the rising weight. Catastrophe theory can be used to model when these collapses will occur and help prevent them.
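
Here's a sketch of when that collapse occurs, using the cusp potential as a stand-in for the beam (this follows Zeeman's Euler-strut treatment in spirit; x is the midpoint deflection, a stands in for the critical load minus the horizontal compression, F for the downward load, and all numbers are illustrative rather than engineering values):

```python
import numpy as np

def stable_deflections(a, F):
    """Stable equilibria of V(x) = x^4/4 + a*x^2/2 + F*x: x^3 + a*x + F = 0."""
    roots = np.roots([1.0, 0.0, a, F])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return [x for x in real if 3*x*x + a > 0]    # V''(x) > 0

a = -2.0        # compressed past the critical load, so the beam is buckled
collapse_F = None
for F in np.arange(0.0, 3.0, 0.01):
    up_branch = [x for x in stable_deflections(a, F) if x > 0]
    if not up_branch:       # the buckled-up state no longer exists...
        collapse_F = F      # ...so the beam snaps through: a catastrophe
        break
print(round(collapse_F, 2))   # analytically, collapse is at F = 4*sqrt(6)/9
```

Before that load, the upward-buckled state is stable; past it, the branch simply vanishes and the beam has nowhere to go but down.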

The core of catastrophe theory is the Classification Theorem, which says that if you have a stable system with no more than six inputs and no more than two outputs, its graph must be equivalent to one of a small number of standard classes. If you have two inputs and one output, then the graph of the system must resemble the "cusp catastrophe" illustrated above. If the system does not have a cusp, then that system is unstable and it will, eventually, change into the stable cusp catastrophe. The points in that cusp graph are not all stable, of course; we just pointed out where the unstable catastrophes lie. But the system that produces that graph is stable. (Philosophically, this means that there will always be a little instability in nature and in life. But that doesn't really surprise anyone, does it?)

If you change the number of inputs or outputs, the geometry of the graph changes as well. For instance, if there is only one input (instead of two) and one output, the graph contains only one fold. Two inputs produces the cusp, three generates a "swallowtail", and four yields a "butterfly." (Adding more inputs or outputs gets more complicated, not to mention harder to visualize, and the cute nicknames must be abandoned.) Each one of these systems will be represented by some known class of graph, as long as it's stable, which means that we know where we can find the stable and unstable values of that system.
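
For reference, these are the textbook potentials behind those nicknames (one output x; the inputs appear as the coefficients a through d -- the list is standard, though normalizations vary from book to book):

```python
import re

# Thom's first four elementary catastrophes as potentials V(x):
elementary = {
    "fold":        "x**3/3 + a*x",
    "cusp":        "x**4/4 + a*x**2/2 + b*x",
    "swallowtail": "x**5/5 + a*x**3/3 + b*x**2/2 + c*x",
    "butterfly":   "x**6/6 + a*x**4/4 + b*x**3/3 + c*x**2/2 + d*x",
}
for name, V in elementary.items():
    n_inputs = len(set(re.findall(r"[a-d]", V)))   # count distinct inputs
    print(f"{name}: {n_inputs} input(s), V(x) = {V}")
```

Each extra input adds one more term to the potential, which is where the extra folds in the graph come from.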

As mentioned above, catastrophe theory isn't any good in a system where you can't clearly define the input and output variables for your mathematical model. Moreover, the Classification Theorem only says that the graph of the system will conform to a certain class of graph; it doesn't say exactly what values that graph will take on. Areas of the graph very far from the catastrophic folds and cusps may take on any number of stable shapes, so unless the system's state is somewhere near a catastrophe, it's impossible to say what changes might occur. Finally, catastrophe graphs are only useful if the system tends toward stable fixed points. If it possesses stable periodic cycles or strange attractors, as many natural systems do, catastrophe theory is unable to model those.

So it's not useful for every type of system. But for those where it is useful, catastrophe theory can provide powerful and immediate insights about their stability and instabilities.