This is Everything2's science writing group, existing to encourage, facilitate and organise the writing and discussion of science on this site. Members are usually willing to give feedback on any writing on scientific topics, answer questions and correct mistakes.

The E2_Science joint user is here to make it easier to collectively edit and maintain the various indexes of science topics: Scientists and all its sub-indexes, physics, astronomy, biochemistry and protein. More may follow. It also collects various useful links on its home node.

Note that there is also a separate e^2 usergroup for the discussion of specifically mathematical topics.


Venerable members of this group:

Oolong@+, CapnTrippy, enth, Professor Pi, RainDropUp, Razhumikin, Anark, The Alchemist, tom f, charlie_b, ariels, esapersona, Siobhan, Tiefling, rdude, liveforever, Catchpole, Blush Response, Serjeant's Muse, pimephalis, BaronWR, abiessu, melknia, IWhoSawTheFace, 10998521, sloebertje, getha, siren, pjd, dgrnx, flyingroc, althorrat, elem_125, DoctorX, RPGeek, redbaker, unperson, Iguanaonastick, Taliesin's Muse, Zarkonnen, SharQ, Calast, idan, heppigirl, The Lush, ncc05, Lifix, Akchizar, Palpz, Two Sheds, Gorgonzola, SciPhi, SyntaxVorlon, Redalien, Berek, fallensparks, GunpowderGreen, dichotomyboi, sehrgut, cordyceps, maverickmath, eien_meru, museman, cpt_ahab, mcd, Pandeism Fish, corvus, decoy hunches, Stuart$+, raincomplex, Tem42@
This group of 71 members is led by Oolong@+

The now-classical Hindmarsh & Rose mathematical model of neuronal bursting was first published in peer-reviewed form back in 1984, in the scientific periodical Proceedings of the Royal Society of London. Series B, Biological Sciences. (This seminal paper is now only available online via a JSTOR subscription, probably an institutional one, so check with your university. Alternatively, go to your university's medical school library periodicals section.)

Hindmarsh & Rose's work was initiated by the discovery of a neuronal cell in the brain of the pond snail Lymnaea, which was initially "silent" (the molluscan burst neuron had previously been hyperpolarized to stop the bursting), but which, when depolarized by a short current pulse, generated a burst that greatly outlasted the stimulus - i.e., an action potential followed by a slow depolarizing after-potential.

In seeking an explanation for this phenomenon, which is also observed in crustaceans and vertebrates, the research collaborators devised a system of three coupled first-order differential equations (in three variables) of the following form:

dx/dt = y - f(x) - z + I,
dy/dt = g(x) - y,
dz/dt = r(h1(x) - z),

where

(the first 3 variables listed below are coordinates which represent the states of the dynamical system - a single neuron in this case - varying over time)

x: (neuron) membrane potential,
y: potential of the ionic channels subserving accommodation,
z: the slow adaptation current which moves the voltage in and out of the inherent bistable regime and which terminates spike discharges,
r: the time scale of the slow adaptation current,
I: the applied current,
h1(x): the scale of the influence of the slow dynamics on membrane potential, which determines whether the neuron fires in a tonic or in a burst mode (when exposed to a sustained current input),
f(x): a cubic function,
g(x): a non-linear function.

Depending on the values of the above parameters, neurons can sit in a steady state, generate periodic low-frequency repetitive firing, produce chaotic bursts, or fire high-frequency discharges of action potentials. Despite being inherently a single-cell description, Hindmarsh & Rose neurons can be linked by introducing equations accounting for the electrical and/or chemical junctions which underlie the synchronization observed experimentally when cells belonging to large assemblies are coupled with each other. Depending on the degree of coupling between the neurons, such a linkage can lead to in-phase or out-of-phase bursting, or to chaotic behavior.
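To make the three equations above concrete, here is a minimal numerical sketch in Python. The particular choices of f, g and h1 (a cubic, a quadratic and a linear function respectively) and the parameter values are the ones commonly quoted for the Hindmarsh & Rose model in the literature, not values taken from the text above, so treat them as illustrative assumptions.

# A minimal sketch of the Hindmarsh & Rose equations given above, integrated
# numerically.  f, g, h1 and the parameter values are commonly quoted forms,
# assumed here for illustration.
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, d = 1.0, 3.0, 1.0, 5.0      # shape of f(x) and g(x)
r, s, x_rest = 0.006, 4.0, -1.6      # slow adaptation current parameters
I = 3.25                             # applied current (bursting regime)

def hindmarsh_rose(t, state):
    x, y, z = state                  # membrane potential, fast and slow variables
    f = a * x**3 - b * x**2          # cubic function f(x)
    g = c - d * x**2                 # quadratic (non-linear) function g(x)
    h1 = s * (x - x_rest)            # scale of the slow dynamics
    return [y - f - z + I,           # dx/dt
            g - y,                   # dy/dt
            r * (h1 - z)]            # dz/dt

sol = solve_ivp(hindmarsh_rose, (0, 2000), [-1.6, 0.0, 2.0], max_step=0.05)
print(sol.y[0][-5:])                 # last few membrane-potential samples

Varying I and r moves the model between quiescence, tonic firing and bursting, as described above.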

Early attempts to recreate (micro-)life in silico began around 15 years ago. A useful model should suggest a hypothesis that forces the model builder to do an experiment. Take the early effort of Drew Endy of the University of California at Berkeley and John Yin of the University of Wisconsin-Madison as a landmark. Their computational model incorporates everything we know about the way the T7 bacteriophage virus infects the infamous Escherichia coli. The seemingly impressive simulation includes how all of T7's 56 genes translate into 59 proteins, how those proteins subvert the host cell, and even how the viruses would evolve resistance to various RNA-based drugs. Despite including measurements from nearly two decades of experiments, early models like this fail miserably in that they still have a huge number of degrees of freedom, so they can be tweaked to produce almost any behavior. Such models are just sketchy caricatures based on the traditional gene => RNA => protein sequence.

The billions of dollars initially invested in technologies such as sequencing, combinatorial chemistry and robotics haven't paid off as hoped because of the naive idea that you can redirect the cell in a desired way just by sending in a drug that inhibits only one protein. Indeed, you could draw a map of all the components of the simplest single-celled microorganism, put in all the connecting arrows, and still have absolutely no ability to predict anything.

For the past decade or so, some more mathematically minded biologists have been putting effort into using computer simulations to search for unifying principles that could order the facts, rather than searching for a single, pretentious all-inclusive model - i.e., rather than a purely reductionist approach to simulating cells. For instance, from the currently more sophisticated cell simulations one can argue that robustness is a good candidate to be one of these conjectured emergent universal properties. As we know, to survive and prosper (i.e., (self-)reproduce), cells must have backup systems and biological networks that tolerate interferences such as dramatic temperature swings, changes in food supply and assaults by toxic chemicals. In this all-important context, virtual experiments run with the Japanese E-Cell model - a single-celled "microbe" mostly built from genes borrowed from Mycoplasma genitalium, the smallest genome yet discovered in a self-reproducing life-form - indicate that even with drastic changes in the expression levels of various genes a cell's behavior can remain practically unchanged. Experiments in which researchers adjust virtual cells to reflect the activity of a specific drug are revealing that the resulting dramatic changes in cellular state can have very little efficacy against the underlying disease condition.

It has been vividly argued that what most strongly affects how a cell behaves in response to a drug or a disease is not any manipulation of a particular gene or protein, but how all the genes and proteins interact dynamically - i.e., the story emerges from the links, which shift over time. As we know hardly anything about most biochemical systems, some modellers are taking an engineering approach and trying to figure out the basic laws the cell's behavior must obey. Perhaps the most famous example of this approach, pioneered by computer science demigod John R. Koza, is a set of programs genetically evolving to match entire actual reaction networks. As measured data on how cells process chemicals over time pile up, this evolutionary approach could one day be used even to deduce the convoluted paths by which cells turn food into energy, growth and waste.
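As a flavour of what "genetically evolving programs to match a reaction network" can mean in practice, here is a toy sketch - emphatically not Koza's actual genetic-programming system - in which a simple evolutionary loop fits the single rate constant of a first-order reaction A -> B to a synthetic "measured" decay curve. Everything in it (the reaction, the data, the mutation scheme) is an illustrative assumption.

# Toy evolutionary fit of a one-parameter reaction model to synthetic data.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 50)
k_true = 0.8
measured = np.exp(-k_true * t)                # pretend these came from the lab

def fitness(k):
    """Mean squared mismatch between the model A(t) = exp(-k t) and the data."""
    return np.mean((np.exp(-k * t) - measured) ** 2)

population = rng.uniform(0.0, 5.0, size=20)   # initial random guesses for k
for generation in range(100):
    # keep the best half, refill with mutated copies of the survivors
    population = population[np.argsort([fitness(k) for k in population])]
    parents = population[:10]
    children = np.abs(parents + rng.normal(0.0, 0.1, size=10))
    population = np.concatenate([parents, children])

print(round(population[0], 3))                # converges towards k_true = 0.8

Real reaction networks have many coupled equations and unknown topology, which is exactly why the full genetic-programming machinery is needed, but the fitting principle is the same.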

Other modellers are mathematically reconstructing biochemical networks from first principles, subjecting them to the required mass, electrical and thermodynamic constraints, and then predicting optimal solutions among the remaining (physically viable) ones. For example, a research group at the University of California at San Diego has predicted that Escherichia coli is optimized for growth, not energy production.

The observation that many biochemical problems most likely have an optimal answer has led some modellers to predict a near future with quantitative models of cell function, organ function and eventually whole-animal function - in effect, perfect drug discovery engines.

Mathematics is just a method, and one of its main characteristics is abstractness. Abstraction is perhaps the single most important word for appreciating mathematical work, whether it has some natural phenomenon as an amorphous and rapidly abandoned starting point or not even that. Grammatically, the root form is the adjective, from the Latin abstractus = "drawn away". Some major (English) dictionary definitions of interest(1):

- "withdrawn/separated from matter/material embodiment/practice/particular examples;"

- "ideal, distilled to its essence."

Then, for our purposes, the adopted basic definition of abstract will be twofold: (a) (adjective) removed from or transcending concrete particularity and sensuous experience; (b) (verb) to abstract something = to reduce it (e.g., a natural or fabricated phenomenon - industrial, biological, sociological, meteorological, etc., in nature, at every conceivable level) to its absolute skeletal essence(2), frequently as the only practical way to produce some minimum understanding of an extremely complex phenomenon that is almost certainly (in)directly affected by a huge number of influences to varying degrees.

Mathematicians, and nowadays also a significant proportion of biologists, sociologists, economists et al., as well as, you know, philosophers, spend a lot of time thinking abstractly, or about abstractions, or both. Abstraction proceeds in levels. For instance, let's say man, meaning some particular man, is Level One; Man, meaning the species, is Level Two; something like humanity/humanness is Level Three; and so forth. Quoting Bertrand Russell: "The fact is that, in Algebra, the mind is first taught to consider general truths not asserted to hold only of this or that particular thing, but of any one of a whole group of things. It is in the power of understanding/discovering such truths that the mastery of the intellect over the whole world of things actual & possible resides; and ability to deal with the general as such is one of the gifts a mathematical education should bestow."

In a more purposeful instance, consider a simple pendulum (Figure 1), an idealized (physical) system composed of an object of m units of mass and no particularly relevant shape, plus a perfectly rigid, l-unit long stem of almost 0 units of mass (i.e., "no mass" for practical purposes), which moves freely along a single, perfectly vertical plane. Note that the general variables, or simply nouns, m for mass and l for length, are only mathematical entities entertained by the mind, general concepts divorced from particular instances. As is motion. What exactly do we mean when we talk about motion? At the particular level of abstraction embedded in our example, motion corresponds to the angular displacement, k, along a single (vertical) plane, of a general object of mass m connected to the end of a fixed, perfectly rigid stem of length l.

If we release the object from a particular (angular) position, the pendular motion starts. Consider that we have ideally assumed that this is a free motion, meaning that (supposed) air friction and every other external influence except gravitation were abstracted out. As time passes (and let's not begin, given our current more modest purposes, a discussion about the most important abstraction of all abstractions), the object of mass m (henceforth simply m) swings back and forth along the plane indefinitely, theoretically assuming all angular displacements up to its release angle, provided that we consider infinitesimally small time steps (i.e., almost 0).

Let's call the abstract "mode" in which we can monitor m's position (i.e., observe, perfectly measure and collect k) within almost-0 time steps a continuous mode, or simply continuous time. Reality dictates that we can only register those angular displacements at significant time steps determined by the technology we use to do the measurements, which effectively samples the first (idealized) set of collected k's. Let's call this "mode" discrete time(3)(4).

Given all such idealized conditions we can perhaps mathematically represent this specific, simple pendular movement by an equation such as

A: m x l^2 x (d^2k/dt^2) + m x g x l x sin(k) = 0

where k is the pendulum's angular displacement, d^2k/dt^2 is the second (temporal) derivative of k, and g is the well-known local acceleration due to gravity, or standard gravity (value = 9.80665 m/s^2). A derivative is defined as the ratio of the variation of one or more variables to the variation of another one or more variables. In its simplest form, we have the derivative of a single variable with respect to time - in our particular example, the variation of the single variable k with respect to the single variable time, in the ideal case where the time steps are infinitesimally small - an operation we usually represent symbolically by dk/dt, meaning the first temporal derivative of k. (Abstractly) repeating this same (mathematical) process n times, we have an n-th order temporal derivative of k.
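A small numerical illustration of the "infinitesimally small time step" idea, under the assumption that k happens to follow k(t) = sin(t): the ratio of variations approaches the true derivative, cos(t), as the step dt shrinks.

# The ratio (k(t+dt) - k(t)) / dt tends to the derivative dk/dt as dt -> 0.
import math

t = 1.0
for dt in (0.1, 0.01, 0.001, 1e-6):
    ratio = (math.sin(t + dt) - math.sin(t)) / dt
    print(dt, ratio)          # tends to cos(1.0) ~ 0.5403 as dt shrinks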

Now note the second term on the left-hand side of equation A: sin(k) is the sine of k, the well-known trigonometric function - i.e., for each value of k, sin(k) returns a value exactly equal to the sine of that angle. Note that we can usually also differentiate a function with respect to time or with respect to one (or more) of its constituent variables.

So A is a second-order Ordinary Differential Equation (henceforth O.D.E.), "ordinary" in the sense that it contains functions of only one independent variable, and one or more of their derivatives with respect to that variable. O.D.E.s are among the most common and simple building blocks for the mathematical representation of actual physical/biological/sociological/economic/... phenomena. In fact, there is an enormous number of these abstract representations available to the analyst. Let's call them simply mathematical models. The highest-level distinction between them is the following: models like A are constructed from first-principles knowledge of the system's underlying physical laws, whereas other models are pure, unspecific black boxes with several internal characteristics to be adjusted based on recorded information about the system's behavior.

Despite the very idealized conditions, there's no analytical way to solve A in terms of elementary functions(5). However, we can hopefully identify the main characteristics of those "unreachable" solutions by qualitatively scanning the possible movements inherent to this quite idealized representation of the actual physical system.
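As footnote (5) notes, once we arbitrate values for the constants we can still simulate A numerically. A minimal sketch, using the standard trick (spelled out further below) of rewriting A as the pair dk/dt = v, dv/dt = -(g/l) x sin(k) - note that the mass m cancels out - with an arbitrary stem length and release angle:

# Numerical simulation of the simple pendulum of equation A.
import numpy as np
from scipy.integrate import solve_ivp

g, l = 9.80665, 1.0                       # standard gravity, 1 m stem (illustrative)

def pendulum(t, state):
    k, v = state                          # angular displacement and angular velocity
    return [v, -(g / l) * np.sin(k)]

sol = solve_ivp(pendulum, (0.0, 10.0), [np.pi / 4, 0.0], max_step=0.01)
print(sol.y[0].max(), sol.y[0].min())     # oscillates between roughly +/- pi/4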

The simple pendulum as in A is an autonomous system, since there's no explicit dependence on time. In a non-autonomous system, behavior explicitly depends on time. Think of a pendulum that is periodically forced by an external stimulus, in which case we can identify a system output - the angular displacement - and a system input - the external force; i.e., there's a direct causal relation between the external stimulus and the angular displacement. A system is called causal if the output at any time depends only on values of the input at the present time and/or in the past. Embedded in this definition is another important one, that of a dynamical system: if a system's behavior - e.g., as represented by its identified output - also somehow depends on past values of this output and/or past values of the input, that system is not a static, memoryless one for which current inputs are all that's needed to determine the current outputs. Roughly speaking, the concept of dynamics in a system corresponds to the presence of an internal "mechanism" that retains/stores information about input and/or output values at times other than the current time. In some physical systems, memory is directly associated with the storage of some form of energy - e.g., think of a simple electrical circuit with a capacitor that stores energy by accumulating electrical charge. An automobile has memory stored in its kinetic energy. In computers, memory is typically directly associated with storage registers that retain values between clock pulses. At this point it is quite natural to recognize almost any conceivable system or phenomenon as dynamic by nature. However, there are some useful cases where this memory effect is small enough to make a simple static (mathematical) representation of the system all that's needed for most meaningful purposes.
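For concreteness, a hypothetical periodically forced version of A - the forcing amplitude F and frequency w are illustrative symbols, not taken from anything above - could read

m x l^2 x (d^2k/dt^2) + m x g x l x sin(k) = F x cos(w x t),

and it is the explicit appearance of t on the right-hand side that makes such a system non-autonomous.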

There's also an inherently nonlinear behavior embedded in the second term on the left-hand side of A, as k depends on its own sine. A linear system possesses the property of superposition: if an input consists of the weighted sum of several signals (i.e., recorded values of some variables evolving over time), then the output is the same weighted sum of the responses of the system to each of those signals. Nonlinear terms in the equations of a system can involve algebraic or more complicated functions of the variables, and these terms may have a physical counterpart, such as forces (e.g., air friction) that damp the oscillations of a pendulum, the viscosity of a fluid, or the limits on the growth of a biological population, to name a few.
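A quick arithmetic check of why the sin(k) term ruins superposition: sin(0.5 + 0.5) = sin(1.0) ≈ 0.841, whereas sin(0.5) + sin(0.5) ≈ 0.959. The response to the sum of two inputs is not the sum of the two responses, so no system containing such a term can be linear.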

For the sake of simplicity, we can (or must) break the second-order O.D.E. into a set of two coupled first-order O.D.E.s by introducing a new variable v = dk/dt. Actually, the first derivative (over time) of the variable "position" is (abstractly) another well-known variable called "velocity". So now we have a (still) autonomous, 2-variable, 2-state mathematical model(6). Usually, the state of a system is a pair (or triple, or whatever) of variables which somehow represents how the system's behavior evolves over time. As a two-dimensional representation, we can simply put k (position) and dk/dt (velocity) on the axes of a diagram in the plane and qualitatively sketch the phase paths, flow lines or orbits which represent A's possible solutions. Each pair (k, dk/dt) in the plane corresponds to a possible state of the system, and each curve through such pairs to a particular solution of A. Let's call the set of all these curves the system's phase portrait; see Figure 2. Note that the central point in the bottom part of the figure corresponds to A's trivial solution: k = 0 (and dk/dt = 0); physically, we put the object at k = 0 and the pendulum remains there indefinitely - what we usually call an equilibrium point (or fixed point). There's another notable set of equilibrium points in Figure 2: k = pi, 3 x pi, ... or k = -pi, -3 x pi, ... with dk/dt = 0 (the different values simply reflect whether the pendulum has rotated clockwise or anti-clockwise to get there), which physically corresponds to the object balanced above its stem, remaining there indefinitely - obviously a pretty counter-intuitive condition.
Let's take a more detailed look at the phase portrait in Figure 2. The set of closed curves around the fixed points represents all the periodic movements the simple pendulum can perform - i.e., as some time passes, the system returns to the same state. The points where each closed curve intersects the k axis correspond to the oscillation's amplitude. The wave-like curves at the top and at the bottom represent movements where k always increases or always decreases (conventionally, it's assumed that k increases clockwise) - i.e., movements where the pendulum rotates. The two curves which pass through the unstable equilibrium points (k = pi, -pi, ..., dk/dt = 0) spatially define two separate regions with quite different (qualitative) behaviors: inside these two curves, the movement is periodic and bounded; outside, the movement is unbounded. As we abstracted out air friction and any other possible external influence besides gravitation, the simple pendulum keeps its total energy indefinitely - i.e., this hyper-idealized but still somewhat useful (e.g., at least in an educational sense) mathematical representation of an actual physical system (i.e., a pendulum) can be called a conservative or Hamiltonian system.
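Since the idealized pendulum conserves its energy, each orbit in Figure 2 is a level curve of E(k, v) = (1/2) x v^2 - (g/l) x cos(k) (the energy per unit m x l^2), so a phase portrait of this kind can be sketched simply by plotting contours of E. A minimal sketch, with illustrative axis ranges:

# Phase portrait of the simple pendulum as energy level curves.
import numpy as np
import matplotlib.pyplot as plt

g, l = 9.80665, 1.0
k = np.linspace(-3 * np.pi, 3 * np.pi, 400)     # angular displacement axis
v = np.linspace(-10.0, 10.0, 400)               # angular velocity axis
K, V = np.meshgrid(k, v)
E = 0.5 * V**2 - (g / l) * np.cos(K)            # conserved energy per unit m*l^2

plt.contour(K, V, E, levels=25)                 # closed orbits, rotations, separatrix
plt.xlabel("k (angular displacement)")
plt.ylabel("dk/dt (angular velocity)")
plt.title("Phase portrait of the simple pendulum")
plt.show()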

Let's turn now to a more interesting, yet still idealized, system, naturally occurring or artificially built, known as the damped harmonic oscillator, which we can mathematically idealize by the following second-order, autonomous, linear O.D.E.

B: (d^2x/dt^2) + y x (dx/dt) + o^2 x x = 0

This oscillator is a dissipative system, and note that, according to its phase portrait shown in Figure 3, there's a single point towards which all the trajectories converge provided that a sufficiently long time has passed - i.e., the movement is damped and finally comes to a halt. This very important construct, only available for dissipative systems, is known as the attractor of the system; in this particular case it is just a point, because B is a linear equation, and it coincides with B's trivial solution, the system's trivial equilibrium point - i.e., (x, dx/dt) = (0,0). That is, movements approach a fixed/equilibrium point, which attracts the closest orbits. However, as we'll see in the next parts of this "essay", attractors may have any dimension, provided that this dimension is smaller than the dimension of the system's phase space (also known as the number of degrees of freedom).
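A minimal numerical sketch of B, with illustrative values for the damping coefficient (written y in B, gamma below) and the natural frequency (written o in B, omega below): whatever initial condition we pick, the trajectory ends up arbitrarily close to the attractor (x, dx/dt) = (0, 0).

# Damped harmonic oscillator of equation B, from several initial conditions.
import numpy as np
from scipy.integrate import solve_ivp

gamma, omega = 0.5, 2.0                            # illustrative damping and frequency

def damped_oscillator(t, state):
    x, v = state                                   # position and velocity
    return [v, -gamma * v - omega**2 * x]

for x0, v0 in [(1.0, 0.0), (-2.0, 1.0), (0.0, 3.0)]:
    sol = solve_ivp(damped_oscillator, (0.0, 40.0), [x0, v0], max_step=0.01)
    print(sol.y[0][-1], sol.y[1][-1])              # all end up very close to (0, 0)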


--
Footnotes:

(1) E.g., the O.E.D.
(2) As in the abstract of an article or research paper.
(3) Because of their speed, computational power and flexibility, modern digital processors are used to implement many practical systems, ranging from digital autopilots to digital audio systems. Such systems require the use of discrete-time sequences representing sampled versions of continuous-time variations - e.g., aircraft position and velocity for the autopilot, and speech and music for the audio system.
(4) However, discrete-time recorded data may represent a phenomenon for which the variables are inherently discrete, such as in demographic studies in which, e.g., average budget, crime rate, or pounds of fish caught are tabulated against family size, total population, or type of fishing vessel, respectively. Also, printed pictures actually consist of a very fine grid of points, each of which represents a sample of the brightness of the corresponding point in the original image.
(5) A solution of A comprises all values of k across time. Certainly, if we arbitrate specific values for m and l, there are a number of computational methods we can use to simulate the pendulum's behavior.

Alternative medical treatment

"They say it's un-Canadian and you must leave the hospital if you want to do it" - Tom Louie
"Is it, in effect, a human poop transplant?" - jwz on Livejournal


Poop transplant? Actually, yes, that's exactly what it is - a medical procedure which involves transferring the gut bacteria from a healthy donor into a patient who is, um, less healthy. Also known as faecal bacteriotherapy or human probiotic infusion, it's used in some cases of pseudomembranous colitis, ulcerative colitis and irritable bowel syndrome. The aim is simply to restore the normal balance of microorganisms in the bowel, and very effective it is, having been used for years to treat people affected by the colitis caused by the bacterium Clostridium difficile.

According to Wikipedia, the treatment is delivered, logically enough, by enema, although it can also be delivered via a nasal tube into the stomach. The donor is, naturally, screened for unpleasant parasites, bacteria, amoebae and other undesirable bowel flora and fauna. The stool poop faecal matter donation is subjected to a wide range of tests and treatments to render it suitable for, um, injection. A series of treatments is necessary.

Grandmother saved by daughter's poo

Does it work? Yes, according to Ethel McEwan. At 83 years of age, she contracted a potentially fatal case of C. difficile, and was only saved, in the words of the Daily Telegraph newspaper, "after a hospital fed her daughter's faeces to her". In this instance, Ethel was transfused nasally, with what the newspaper described as a liquidised sample of faeces. Each of the news reports I have read suggests that the delivery system was ingestion rather than colonic injection, which allows the effective agent, whatever it is, to work on the whole of the digestive system, from stomach through small intestine to colon.

Various studies have been undertaken, in hospitals from Australia through Scotland to the US. One hospital in Glasgow conducted a trial on twelve patients infected with C. difficile, after antibiotics had failed to control repeated infection. Nine of the twelve reported no further incidence, two had further infections which did respond to standard antibiotics, and one was initially cured but later reinfected. Many people both outside and inside the medical profession believe that antibiotics weaken the general healthy balance of organisms in the bowel, enabling the superbug to do its worst, and that only by attempting to redress the balance - in this case by faecal infusions - can the condition be controlled.

The Centre for Digestive Diseases in NorthWest Australia describes the treatment (however administered) as a "treatment of last resort", but they should know - they've been doing it a long time, and they claim a 90% success rate.

So, I ask myself, would I have such a poo transplant? Hell yes, if I had to. Just as long as it's not still warm.



And yes, it seems odd to talk of "bowel flora". What would it smell like, Odour Colon?
Oh, and someone pointed out that there's a supposed sexual practice known as "chunneling" which involves a direct transplant via a tube. It's at http://www.urbandictionary.com/define.php?term=chunnel if you want to try.

Rai Tai says re faecal transplant: This general thing is done in cattle very often, except we call it transfaunation, and it's movement of healthy "rumen juice" from one bovine to another. That more than anything can help save a cow's life, because they will starve if they don't have healthy flora/fauna. At the vet school, we have fistulated donor cattle to help save lives, the bacterial way!

http://en.wikipedia.org/wiki/Fecal_bacteriotherapy
http://www.telegraph.co.uk/news/uknews/1570515/Grandmother-saved-by-daughter%27s-poo.html
http://www.ocnus.net/artman2/publish/Research_11/Cure_for_Killer_Bug_-_but_There_s_a_Catch.shtml
A cartoon, just for laughs