This is Everything2's science writing group, existing to encourage, facilitate and organise the writing and discussion of science on this site. Members are usually willing to give feedback on any writing on scientific topics, answer questions and correct mistakes.

The E2_Science joint user is here to make it easier to collectively edit and maintain the various indexes of science topics: Scientists and all its sub-indexes, physics, astronomy, biochemistry and protein. More may follow. It also collects various useful links on its home node.

Note that there is also a separate e^2 usergroup for the discussion of specifically mathematical topics.


Venerable members of this group:

Oolong@+, CapnTrippy, enth, Professor Pi, RainDropUp, Razhumikin, Anark, The Alchemist, tom f, charlie_b, ariels, esapersona, Siobhan, Tiefling, rdude, liveforever, Catchpole, Blush Response, Serjeant's Muse, pimephalis, BaronWR, abiessu, melknia, IWhoSawTheFace, 10998521, sloebertje, getha, siren, pjd, dgrnx, flyingroc, althorrat, elem_125, DoctorX, RPGeek, redbaker, unperson, Iguanaonastick, Taliesin's Muse, Zarkonnen, SharQ, Calast, idan, heppigirl, The Lush, ncc05, Lifix, Akchizar, Palpz, Two Sheds, Gorgonzola, SciPhi, SyntaxVorlon, Redalien, Berek, fallensparks, GunpowderGreen, dichotomyboi, sehrgut, cordyceps, maverickmath, eien_meru, museman, cpt_ahab, mcd, Pandeism Fish, corvus, decoy hunches, Stuart$+, raincomplex, Tem42@
This group of 71 members is led by Oolong@+

Peak-to-Peak Dynamics (PPD) is a particular form of deterministic chaos in which, for an n-th order continuous-time dynamical system, the amplitude and time of occurrence of the next peak of its output variable can be predicted from information concerning at most two previous peaks.

Simple PPD were discovered by the famous meteorologist Edward Lorenz in his pioneering 1963 paper on chaos. Since then, PPD have been noticed by various researchers in a number of fields, including ecology, biochemistry and electrochemistry, physics and electronics.

E.g., remarkably irregular peaks characterize the dynamics of many plant and animal populations. As such peaks are often associated with undesirable consequences (e.g., pest outbreaks, epidemics, forest fires), forecasting the intensity of a forthcoming peak is a problem of major concern, and the next peak has been shown to be most often predictable from the previous peaks, through the analysis both of simulations of related mathematical models and of some of the longest and most celebrated ecological time series.

Basics

After a transient, any deterministic, dissipative, nonlinear dynamical system settles on an attractor (an equilibrium, a limit cycle, a torus or, most famously, a strange attractor) and remains there forever if it's not perturbed. Some insight into the attractor can be obtained, even in the absence of a formal mathematical model, if a single variable y has been recorded as a function of time t for a "sufficiently long" period, provided that the system was on the attractor. In particular, you can extract from the record all the peaks (i.e., local maxima) of the variable and plot each peak against the previous one, sequentially, thus obtaining a set of points called the Peak-to-Peak Map (PPM), sometimes also called the next-amplitude map, next-maximum map, or, originally, the Lorenz map.
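
As a concrete illustration, here is a minimal sketch (in Python, assuming numpy, scipy and matplotlib are available) of extracting a PPM from a simulated record; the Lorenz system and its z variable are used purely as an example, and the names and tolerances are illustrative, not a prescribed recipe.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.signal import find_peaks
import matplotlib.pyplot as plt

# Classical Lorenz system, whose z-peaks give the original "Lorenz map".
def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_span = (0.0, 200.0)
t_eval = np.linspace(*t_span, 200000)
sol = solve_ivp(lorenz, t_span, [1.0, 1.0, 1.0], t_eval=t_eval, rtol=1e-9, atol=1e-9)

z = sol.y[2][len(t_eval) // 5:]     # discard the initial transient

peak_idx, _ = find_peaks(z)         # extract all peaks (local maxima) of the recorded variable
peaks = z[peak_idx]

# The PPM: each peak plotted against the previous one.
plt.scatter(peaks[:-1], peaks[1:], s=2)
plt.xlabel("current peak of z")
plt.ylabel("next peak of z")
plt.show()

For the Lorenz system the resulting set of points is almost perfectly filiform, i.e. it lies close to a single curve.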

If the dynamical regime is periodic (i.e., the system's attractor is a cycle) and there are k peaks per period, the PPM is simply composed of k distinct points. By contrast, if the regime is quasi-periodic (i.e., the system's attractor is a torus) or chaotic (i.e., the system's attractor is a strange attractor), the points of the PPM are all distinct and sometimes display filiform geometries: the points lie on a closed regular curve when there's quasi-periodicity (corresponding to a slice of the torus), lie roughly on one or more curves1 when there's low-dimensional chaos, and form a cloud-like set when there's high-dimensional chaos.

When the PPM is filiform (i.e., when the system's attractor is low-dimensional), the intensity of the forthcoming peak and its time of occurrence can be predicted with remarkable accuracy, either from the current peak alone, in which case we say the PPD are simple, or from the current and the previous peak, in which case we say the PPD are complex (in other words, a forecast of the next peak based solely on the current peak would be ambiguous).

When the set of extracted peaks is filiform, it can be approximated by a set of curves called the Peak-to-Peak Skeleton (PPS), described by a k-valued function that can be interpreted as follows: given a certain peak, the next one is approximately one of the k values of the function. When k=1 the PPS is called simple, as are the underlying PPD. When the PPD are complex, the extra information needed to forecast the next peak is "surrogate" information about the previous peak (i.e., not the previous peak itself), namely whether the previous peak was "small" or "large". E.g., in a specific study of plankton-fish interactions it has been shown that the next peak of "young of the year" planktivorous fish, which systematically occurs every year during the summer, can be forecast on the basis of the current peak and the month during which the previous peak occurred (i.e., the exact date of the previous peak is not needed).
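
To make the simple (k=1) case concrete, here is a hedged sketch continuing from the previous example and reusing its peaks array; the nearest-neighbour averaging and the names forecast_next_peak and n_neighbors are illustrative choices, not a prescribed method.

import numpy as np

def forecast_next_peak(current_peak, ppm_x, ppm_y, n_neighbors=5):
    # Approximate the simple PPS locally: average the "next peak" values of
    # the n_neighbors PPM points whose "current peak" is closest to ours.
    idx = np.argsort(np.abs(ppm_x - current_peak))[:n_neighbors]
    return ppm_y[idx].mean()

ppm_x, ppm_y = peaks[:-1], peaks[1:]

# Leave-one-out check of how well the current peak alone predicts the next one.
errors = [abs(forecast_next_peak(ppm_x[i],
                                 np.delete(ppm_x, i), np.delete(ppm_y, i)) - ppm_y[i])
          for i in range(len(ppm_x))]
print("mean absolute forecast error:", np.mean(errors))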

Practical Uses

Once PPD have been identified, the underlying dynamical system (often described by a model composed of n differential equations) can be represented by a very simple reduced-order model involving only the peaks of the variable of concern or the corresponding occurrence times.

Identification of attractors

The identification of the attractor of a dynamical system from observations of a single variable is a problem of major concern, which is usually solved through relatively complex techniques. By contrast, peak-to-peak analysis (i.e., the determination of the PPM associated with a recorded time series) is an almost trivial task that can even be performed by hand. It is also an effective tool for discovering whether the dynamics on the attractor can be described by a one-dimensional map. I.e., before performing any analysis of a recorded time series, it's worth extracting its PPM to check whether it's really justified to proceed further with more sophisticated (i.e., costly) methods.

Next peak forecast

Although, in theory, forecasting the output peak and forecasting its time of occurrence rely on equivalent schemes, in practice there's no real equivalence, because the peak amplitude can be known with higher or lower precision than the time of occurrence. E.g., in many ecosystems the time of occurrence of recurrent extreme episodes is well known, while the severity of each episode can hardly be evaluated.

Controlling PPD

In many real-world problems, the peaks of the output variable are associated with high costs, so it's natural to refer to a reduced-order model and formulate an optimal control problem involving only the peaks, their times of occurrence, and the control efforts. Indeed, when the peaks are the most crucial episodes, the only available information on the past history of the system is often the times of occurrence of the last few peaks, which is fortunately exactly what's needed to forecast the next peak or to exert the control action suggested by the reduced-order model.

Challenges

The problem that certainly deserves more attention is the identifiability of PPD from real-world, "noisy" field/experimental data, since the structure of the PPM is sensitive to high-frequency measurement errors. Of course, this can be somewhat compensated for by "filtering" the data before extracting the peaks. General and effective countermeasures to this problem seem hard to discover, but the challenge is worth taking up. Alternatively, the study of particular cases has been illuminating and has suggested how to circumvent particular obstacles. Meanwhile, it seems appropriate to make combined use of data and models, where a naïve way of proceeding is to use the supporting model only for a qualitative consistency check and/or for a calibration of the simulated PPM against the data. This is certainly justified if the aim is to build an operational forecasting technique based only on information about past peaks.
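
As a rough illustration of the "filtering before extracting the peaks" idea, here is a hedged sketch; the Savitzky-Golay filter, the window length and the synthetic noise level are all assumptions chosen for illustration, not a recommended recipe.

import numpy as np
from scipy.signal import savgol_filter, find_peaks

def ppm_from_noisy_record(y, noise_std=0.0, window=51, polyorder=3):
    # Add synthetic measurement noise, low-pass the record, then extract peaks.
    y_noisy = y + np.random.normal(0.0, noise_std, size=len(y))
    y_smooth = savgol_filter(y_noisy, window_length=window, polyorder=polyorder)
    peaks = y_smooth[find_peaks(y_smooth)[0]]
    return peaks[:-1], peaks[1:]     # (current peak, next peak) pairs

# Comparing ppm_from_noisy_record(z, 0.0) with ppm_from_noisy_record(z, 0.5),
# with z the record from the first sketch, shows how much of the filiform
# structure survives the noise and the filter.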

1 These PPMs would be thinner if the measurement error, inevitably present during experiments/recordings, were smaller.

There's a persistent quasiscientific myth that glass is actually a supercooled, superviscous liquid or fluid. Tour guides in historic churches and other old buildings will often point out that the panes in the windows are thicker at the bottom than the top, "because they've flowed ever-so-slowly downwards over the years." But it turns out this isn't quite true.

Glass is actually an "amorphous solid". It has no crystalline structure, and it does not undergo a normal phase change as most materials (such as water and metals) do. (Another amorphous solid is rubber.) Nevertheless, the molecules in room-temperature glass are rigidly bound to each other, and there's no evidence that they flow, even after hundreds or thousands of years.

So what about those old windowpanes that are thicker at the bottom? Well, until recently, there wasn't a good way of making perfectly flat, smooth glass for windows. For example, the "crown glass" technique (used since medieval times) involved blowing, flattening, and then spinning what became a large disc of glass, slightly thicker at the outer edge due to centrifugal force. These irregularities obviously persisted even after smaller square panes were cut out of the large discs. Windowmakers tended to mount panes with the thicker edge down -- they looked better that way -- but this means the bottoms have always been thicker; they didn't sag over the years. And sometimes you can find old panes that are thicker at one side, or at the top, because they happened to be installed that way, which is pretty good proof that the thick-at-the-bottom panes aren't that way due to "flow".

Sources:
http://math.ucr.edu/home/baez/physics/General/Glass/glass.html
http://dwb.unl.edu/Teacher/NSF/C01/C01Links/www.ualberta.ca/~bderksen/windowpane.html

(P.S. There is at least one example of a superviscous "liquid" that takes years to flow -- pitch. If you do a web search for "pitch drop experiment" you can find pictures of a funnel full of pitch at the University of Queensland that has dribbled out just eight drops in over seventy years.)

A considerable amount of the world's (scarce) scientific research funds has been allocated to the search for meaningful chaotic patterns in many fields, from the hard sciences (e.g., the physical sciences, engineering) to socioeconomic studies, with a wide range of promising and practical results. It appears that some of the tools of the science of nonlinear dynamics (and chaos) are also well suited for studies of biological phenomena, neuroscience included. Indeed, such complex systems can give rise to collective behaviors which are not simply the sum of their components and involve huge conglomerations of related units constantly interacting with the environment. There's something of a consensus that the activities of neurons, neuronal assemblies and entire behavioral patterns (e.g., after epileptic seizures), the linkage between them, and their evolution over time cannot be understood in all their complexity and practical potential without these nonlinear techniques.

As an example, take the now-classic Hindmarsh & Rose mathematical model of neuronal bursting, built from 3 coupled first-order differential equations. A computer-simulated train of action potentials results in a pattern that would be interpreted as random on the basis of classical statistical methods, while a representation of interspike intervals reveals a well-ordered underlying generating mechanism (i.e., peak-to-peak dynamics). Naturally, identifying nonlinear dynamics and chaos in an experimental neuronal setup is a very difficult task at various levels, far from the "clean" low-dimensional chaos produced by computer/mathematical models. Firstly, the recorded signals lack stationarity, meaning that the "parameters" of the (biological) system rarely keep a constant mean and variance during measurements, which creates a not always satisfiable need for prolonged and stable periods of observation. Secondly, collected observations generally exhibit a complex mixture of fluctuations beyond the system itself, including those from the environment and those from the measurement equipment. For these purposes it's helpful to start investigations by constructing a phase-space description of the underlying phenomenon (i.e., phase-space reconstruction and embedding of a time series), usually by plotting the relationship between successive events or time intervals (i.e., a Poincaré map), as most of the relevant signals are discrete ones.
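
For readers who want to reproduce this qualitative picture, here is a minimal sketch of the Hindmarsh & Rose equations and the interspike interval (ISI) map; the parameter values are ones commonly quoted for chaotic bursting and, like the spike-detection threshold, should be treated as assumptions rather than the values used in any particular study.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.signal import find_peaks
import matplotlib.pyplot as plt

# Hindmarsh-Rose model: membrane potential x, fast recovery y, slow adaptation z.
def hindmarsh_rose(t, state, a=1.0, b=3.0, c=1.0, d=5.0,
                   r=0.006, s=4.0, x_rest=-1.6, I=3.25):
    x, y, z = state
    dx = y - a * x**3 + b * x**2 - z + I
    dy = c - d * x**2 - y
    dz = r * (s * (x - x_rest) - z)
    return [dx, dy, dz]

t_eval = np.linspace(0.0, 5000.0, 500000)
sol = solve_ivp(hindmarsh_rose, (0.0, 5000.0), [-1.0, 0.0, 2.0],
                t_eval=t_eval, rtol=1e-8, atol=1e-8)
x, t = sol.y[0][50000:], sol.t[50000:]      # discard the transient

# Spike times: local maxima of x above an (assumed) threshold of 0.
spike_idx, _ = find_peaks(x, height=0.0)
isi = np.diff(t[spike_idx])                 # interspike intervals

# Plotting ISI(n+1) against ISI(n) plays the role of a Poincaré map and
# reveals the well-ordered structure hidden in the apparently random train.
plt.scatter(isi[:-1], isi[1:], s=2)
plt.xlabel("ISI(n)")
plt.ylabel("ISI(n+1)")
plt.show()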

And so what? Is this just public/media curiosity? In light of the aforementioned technical difficulties, neurobiologists have become gradually more interested in practical issues such as comparing the dynamics of neuronal assemblies under various experimental conditions. With these less ambitious expectations, average (nonlinear) forecasting (e.g., of epileptic seizures) has been achieved in spike trains (demonstrating determinism as a byproduct). Alternatively, the search for Unstable Periodic Orbits (UPOs) in the reconstructed phase spaces has been fruitful, which (paradoxically) becomes an advantage if you want to control a neuronal system to explore a large region of its phase space using only a weak control signal. Recipe: apply a (weak) control signal to force the system to follow closely any one of the identified UPOs, obtaining large changes in the long-term behavior with minimal effort - i.e., you can select a given behavior from an "infinite" set and, if necessary, switch between behaviors. The potential is unequivocal: some abnormalities of neuronal systems, ranging from altered periodicities to irregular "noise-like" phenomena, could define a group of "dynamical diseases" of the brain.

Think of the more than 50 million epileptic people worldwide, roughly 20% of whom are not sufficiently helped by medications, with surgical removal of the seizure focus as the last resort. Implants that (electrically) stimulate the vagus nerve have also been used, but their mechanism of action is uncertain, they have several side effects, and they could potentially kindle new epileptic foci in the area. Chaos control techniques might be used instead, with the advantage of requiring relatively infrequent stimulation of the tissue.

In a mathematical space whose coordinates represent the state of a dynamical system (i.e., a state space), periodic orbits are the set of equilibrium states. If all of the periodic orbits in this abstract dynamical landscape are unstable, the system's temporal evolution will never settle down to any one of them. Instead, the system's behavior wanders incessantly through a sequence of close approaches to these orbits. The more unstable an orbit, the less time the system spends near it. Unstable Periodic Orbits (UPOs) form the "skeleton" of nonlinear dynamics, and one can build a model of a system by counting and characterizing its UPOs in a hierarchy of orbits with increasing periodicity. The model's accuracy can be improved by progressively adding longer-period orbits to the hierarchy. The dynamical landscape can then be tessellated into regions of the state space centered around these UPOs. Orbit locations and stability can also offer short-term predictions of the system's future states. This type of predictive model can be used for parametric control of nonlinear systems, whether they are chaotic or not. However, rigorous identification of UPOs from noisy experimental data is a difficult task.

There's a straightforward method for identifying UPOs which relies on the recurrence of patterns in state space, though such a recurrence (i.e., a state repeatedly returning near an orbit) is a very rare event in the reality of short and nonstationary datasets. There's another method based on a local transformation of the dynamics in the data, which acts as a dynamical lens so that the transformed datasets concentrate around distinct UPOs, helping to offset the usual scarcity of trajectories near UPOs. With the additional ability to identify complex higher-period orbits using only fragments of trajectories near those orbits, identification of UPOs has been successfully achieved in various experimental settings, including epileptiform activity from the human cortex.
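
As a very small illustration of the recurrence idea (and only of that idea, not of the transformation-based method), here is a hedged sketch; the function name, the tolerance and the use of a scalar series of successive events, such as the interspike intervals above, are all illustrative assumptions.

import numpy as np

def recurrence_candidates(series, period=1, tol=0.01):
    # Flag indices i where the event series nearly repeats after `period`
    # events, i.e. |series[i + period] - series[i]| < tol; such near
    # recurrences are the raw material for candidate period-`period` UPOs.
    series = np.asarray(series)
    return np.where(np.abs(series[period:] - series[:-period]) < tol)[0]

# Example: period-1 candidates in the interspike interval series of the
# previous sketch, with a tolerance tied to the spread of the data.
# candidates = recurrence_candidates(isi, period=1, tol=0.05 * isi.std())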

Tracking "parameter" changes from the inherently nonstationary data of, e.g., neurological systems, with UPOs has also been accomplished, as this is a strong requirement for UPO-based control of nonlinear systems. Furthermore, this tracking could be used to detect changes in system state due to intrinsic "parameter" variations (e.g., the transition to epileptic seizures), extrinsic effects (e.g., due to electromagnetic fields), or even perceptual discrimination.