Got math?

This is the E2 usergroup e^2, which was originally a proper subset of the e2science group (e^2 ⊂ e2science). At first, the group name was e^(πi) + 1 = 0, but some whiners thought that would make it too hard to send a message:

/msg e^πi_+_1_=_0 After typing that, I forgot what I was going to say.

So here we are instead with a simpler (but more boring) name, e2theipiplus1equalszero. Update: more complainers. Now we're just e^2. (Now does that mean e² or e XOR 2? That is my secret.) Tough luck for those without a caret key.

e^2 often erupts into long mathematical discussions, giving members more /msgs than they care to digest. So, you have a few other options if the math is going to get seriously hairy:

  • Send to only those members of the group currently online:

    /msg? e^2 Brouwer was a Communist!
     
  • Speak in the chatterbox. But be prepared to give non-math noders headaches.
  • Add the room Mathspeak to the list of rooms you monitor in squawkbox or Gab Central. Mathspeak died of loneliness.

You may want to read some of these while you are calculating ln π.


Venerable members of this group:

Wntrmute, cjeris, s19aw, Brontosaurus, TanisNikana, abiessu, Siobhan, nol, flyingroc, krimson, Iguanaonastick, Eclectic Scion, haggai, redbaker, wazroth, small, Karl von Johnson, Eidolos, Ryouga, SlackinWhileSleepin, ariels, quantumlemur, futilelord, Leucosia, RPGeek, Anark, ceylonbreakfast, fledy, Oolong@+, DutchDemon, jrn, allispaul, greth, chomps, JavaBean, waverider37, IWhoSawTheFace, DTal, not_crazy_yet, Singing Raven, pandorica, Gorgonzola, memplex, tubular, Tom Rook
This group of 45 members is led by Wntrmute

Before you get yourself too exercised here, be aware that there are two vastly different mathematical concepts that could take this moniker when given the status of a basic assumption of a logical system.

In the late 1800s, Cantor's theory of transfinite sets ignited fierce controversy. The mathematicians supporting him tended to be younger and more flexible in their thinking, while those opposed tended to be of the old guard. As a result, Cantor found it impossible to get a job more prestigious than a low-paying lectureship at the University of Halle. This frustrated him to no end, and he became obsessed with proving one result that he thought would vindicate his ideas: that the (infinite) cardinality of the real numbers is the next higher (infinite) cardinality after that of the integers.

This became known as the Continuum Hypothesis. As mathematicians are wont to do, the concept was eventually abandoned in favor of a broader, more general one: that the cardinality of any transfinite set's power set is the next higher cardinality after that of the set itself (the Generalized Continuum Hypothesis, or GCH).

It's easy to show that the power set is bigger, not easy to show that there's nothing in between or off to the side.
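The easy half is Cantor's diagonal argument; here is a standard sketch (my summary, not the author's), for an arbitrary map f from a set S to its power set P(S):

  % For any set S and any map f : S -> P(S), consider the diagonal set
  \[
    D \;=\; \{\, x \in S : x \notin f(x) \,\}.
  \]
  % If f(d) = D for some d in S, then d \in D \iff d \notin f(d) = D,
  % a contradiction; so no f is onto.  Since x \mapsto \{x\} injects S
  % into P(S), we get |S| < |P(S)| for every set S, finite or infinite.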

Well, actually, it's impossible. Mercifully, perhaps, Cantor did not live until 1963, when Paul Cohen showed that the Generalized Continuum Hypothesis cannot be proved from the "lesser" axioms of set theory; Gödel had already shown, in 1940, that it cannot be disproved from them either, so the GCH is independent of those axioms. This means that the GCH can be "true" if you want it to be true. It can also be "false" if you want it to be.

When such a situation arises, it's often useful to assume one or the other and see what the consequences are. Unless you're an intuitionist and consider the whole idea as so much wasted breath. For the rest of us, the GCH is one of the concepts that we might apotheosize into an axiom.


But there's a much more restricted idea, which is to simply assert that

the real numbers exist

as a set, or equivalently, that all of the sets of natural numbers can be collected into a set (the "power set" of the natural numbers). But the axiom only lets you do this with the natural numbers, not with any other set.

By necessity, this set has a larger (infinite) cardinality than the natural numbers, but this has nothing to do with the Continuum Hypothesis; we don't care whether there are larger cardinals, ones in between, or even any off to the side.

This is an optional axiom introduced by Paul Bernays in his expansion of Gödel-Von Neumann set theory. He labels it "AC", which is unfortunately easy to confuse with the axiom of choice. So we'll call it "ACo".

GCH and ACo have different ideas at their base, and interoperate with the other "exotic" axioms in different ways:

  • GCH requires the axiom of infinity to be true and implies ACo, the potency axiom, and the axiom of choice.
  • ACo implies the axiom of infinity. Although it is independent of the axiom of choice and the potency axiom, the axiom of choice is required for the full spectrum of analysis, and together with the axiom of choice ACo implies the potency axiom and hence the GCH.

The biggest consequence of ACo is the necessary existence of non-computable real numbers. With the axiom of infinity but not ACo, you are left with the rational numbers and the computable irrational numbers. But then you have lots of problems working with the convergence of arbitrary sequences of real numbers. Which suits the intuitionists just fine.
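To make "computable real number" concrete: a real is computable if some program can produce it to any requested accuracy. Here is a minimal sketch of my own (not anything from Bernays) that emits the square root of 2, a computable irrational, to as many decimal places as you like, using only integer arithmetic:

  def sqrt2_digits(n_digits):
      """Return sqrt(2) truncated to n_digits decimal places, as a string.

      Purely illustrative: a real number is 'computable' if a routine
      like this can approximate it to any requested precision.
      """
      target = 2 * 10 ** (2 * n_digits)
      # Integer Newton iteration for the floor of the square root.
      x = target
      y = (x + 1) // 2
      while y < x:
          x, y = y, (y + target // y) // 2
      return str(x)[0] + "." + str(x)[1:]

  print(sqrt2_digits(30))   # 1.414213562373095048801688724209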

I've always hated statistics. Nothing smells more like accountancy, rimmed glasses, bookkeeping, and horrible little news reports than statistics. A career in statistics conjured the image of compiling endless financial reports for a stony board of directors in an attempt to squeeze a few more dollars out of the public. It was the lowest. It was the selling of a mathematical mind to the machine and the end of all beauty and expanse. There was no doubt in my mind that statistics was simply evil.

So it was a mysterious change when it happened, and it all began with a search engines module at university. This was easily one of the best courses I took in my time there, and from the beginning of the course it became clear that making an effective search engine had nothing to do with understanding the English language, with extracting semantic meaning from queries or documents, or with logic, reason, or human experience. It was all to do with raw, unadulterated statistics.

And suddenly I saw the glint of gold. I saw a promise in statistics. Hiding beneath dusty logarithm lookup tables and hypothesis testing was the promise of an Oracle Machine. Something that could be queried and provide answers in milliseconds. This was knowledge like had never been seen before, and yet it had nothing to do with knowledge, logic, semantics, or meaning. It was just numbers, just data and statistics and a query box. Ultimately the question in my mind was "how can this be?", and secondly, against my better judgement, "how can I get it?".

An internet search engine relies on the systematic deconstruction and processing of text. The text is crippled, stripped of meaning until it is completely void and will fit into nice neat data structures for processing. Only then does the data shine through. And once the numbers are ready, the statistical algorithms can roll along and process the data. Finally the questions we all have can be answered in the blink of an eye. Building a search engine is not re-inventing the wheel; it is rediscovering the holy grail.

_______________________________________


The first thing to go is syntax. The hierarchy of language, which structures and subjugates words into a towering tree, is unimportant under statistics. All web pages, documents, and queries are reformed and stored as jumbled lists of words. Context is not truly lost: words which often occur together remain associated through their combined presence in a list. Everything is just a little more anarchic. The words have been freed of their sentences. There is no longer a primary verb, or a root pronoun. Under the statistical system all words are equal, and as you would expect, some are more equal than others.
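As a toy illustration (my own sketch, not any particular engine's code), reducing a document to a bag of words takes only a couple of lines; after this, order and syntax are gone and only counts remain:

  import re
  from collections import Counter

  def bag_of_words(text):
      """Flatten a document into a jumbled list of lower-cased words
      and their counts; sentence structure is thrown away entirely."""
      return Counter(re.findall(r"[a-z]+", text.lower()))

  print(bag_of_words("The cat sat on the mat. The mat was flat."))
  # Counter({'the': 3, 'mat': 2, 'cat': 1, 'sat': 1, 'on': 1, ...})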

The important words are those which do not occur often. "The" is largely a useless citizen; syntactic glue. No room is left in our system for such common words and where possible they are removed. The "aardvarks" and "armamentaria" are king, because you can be sure if they exist in a query then they must be key. So how are these statuses assigned? Not by some governing hand. We look toward the Laws of Text, Zipf's law and Heaps' law. These laws tell you, in beautiful fairness and balance, the relative importance of words in a language. Even the numbers and numerals can be governed using Benford's law. Nothing is left to chance, all is mathematical.

But all this raises the question: do we really need words in the first place? Is this bureaucracy? Can something smaller suffice - say, a symbol, a letter? In languages such as Japanese, with no spaces to separate words, we can simply assume that each overlapping pair of symbols, as well as each individual symbol, is a word in its own right. As we accumulate more and more web pages and documents, the pairs which are actually words will keep reappearing, while those which are not words will not. It soon becomes clear what is, and what isn't, a word. This system of taking N-grams - short runs of symbols - is effective. Even more effective than just splitting on words, even in European languages. The reason is that we can match "policeman" with "policemen" with no idea of the semantic relationship. Words are not required. A good search engine can use just symbols.
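A sketch of the same idea in code (hypothetical, not any production tokeniser): index overlapping pairs of symbols instead of words, and "policeman" and "policemen" share most of their bigrams without any notion of morphology:

  def char_bigrams(term):
      """Split a term into its overlapping pairs of symbols (2-grams)."""
      return {term[i:i + 2] for i in range(len(term) - 1)}

  a, b = char_bigrams("policeman"), char_bigrams("policemen")
  print(sorted(a & b))             # ['ce', 'em', 'ic', 'li', 'ol', 'po']
  print(len(a & b) / len(a | b))   # 0.6 -- plenty of overlap, no semantics needed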

We note that in using N-grams, the more documents you have the better. This is another devilish aspect of statistics. In statistics, more is more. You can never have too much data. The reason is simple: signal adds up and noise cancels out. More precisely, when you have more data, the probability of something becoming statistically significant by chance is lessened, while the probability of something becoming statistically significant through actuality is increased. In a logical formula, the smaller the formula the better. But in our search engine, the more websites scanned the better - even if what they contain is largely junk.

So now our documents are simply jumbled lists of words and their relative importance. We have spiders crawling the web and accumulating more data for us, and we have an index slowly ticking over and processing the document data. All that remains now is to design the statistical models via which we rate our documents for a given query. Because of our destruction of the text we can build effective data structures and feed them into a huge database. The final step is just to turn it on.
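Putting the pieces together, a bare-bones ranking function might look something like the sketch below: term frequency times inverse document frequency, summed over the query terms. This is only a caricature of real scoring functions, but it shows how far plain counting gets you:

  import math
  from collections import Counter

  def rank(query, docs):
      """Score documents against a query with a crude tf-idf sum."""
      bags = [Counter(d.lower().split()) for d in docs]
      n = len(docs)
      scores = []
      for bag in bags:
          score = 0.0
          for term in query.lower().split():
              tf = bag[term]
              df = sum(1 for b in bags if term in b)
              if tf:
                  # rare terms (low document frequency) carry more weight
                  score += tf * math.log(n / df)
          scores.append(score)
      return sorted(zip(scores, docs), reverse=True)

  docs = ["the aardvark ate the ants",
          "the the the filler page",
          "ants and aardvarks everywhere"]
  print(rank("aardvark ants", docs)[0][1])   # the aardvark ate the ants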

A human-free system. A system of knowledge automated by the cold clicking hands of a computer.

_______________________________________


The secret in statistics is rather simple. The power it provides is a concise mantra. In logic, deduction, and mathematical proof, one can derive true answers to precise questions. Statistics, on the other hand, can provide compelling answers to all questions. Statistics focuses on the question rather than the answer. As Douglas Adams revealed, if you wish to know the answer to life, the universe and everything, you must first know the question.

The difference can be shown with a trick. When queried with the question...

"How many legs does the average person have?"

  • A logical system will answer ~1.99
  • A statistical system will answer 2

A good logical system will know the answer to a question.

A good statistical system will know what question you are asking.
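In code the trick looks something like this - entirely made-up figures, purely to illustrate the mean/mode distinction:

  from statistics import mean, mode

  # A fictional population: 990 people with two legs, 10 with one.
  legs = [2] * 990 + [1] * 10

  print(mean(legs))   # 1.99 -- the literal, "logical" answer
  print(mode(legs))   # 2    -- the answer to the question you meant to ask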

_______________________________________


This is both the beauty and the danger of statistics. Much like a search engine, the goal of a statistical system is to tell you exactly what you want to hear. A statistical system does not answer a question with the precision and truth of a logical system, but it should capture the absolute intuition of what you are asking from it. It will know when "average" is translated to "mean" and when one really intends for the "mode". When a CEO asks his statistician "is the company doing good?", a good statistician will formalise and calculate the exact notion the CEO holds of "company doing good", and present it to the CEO.

The danger comes when the intuitive notion of "company doing good" differs from person to person. Perhaps the CEO is unconcerned with the variable counting toxic waste dumped, while a citizen rates this variable highly in their intuition of that evaluation. The power of statistics comes from its subjectiveness and lack of true meaning, but that is also its Achilles' heel.

What really is the "mean" or the "standard deviation" other than the formalisation of some human intuition? In exact terms the mean is not "the average" because as we discussed above, that is a subjective and relative notion. The mean is only itself - that is the sum of all data points divided by the count of data points. The same is true for search engines. If I search for "The Best Page In The Universe" Google does not return the best page in the universe. It returns the tf.idf weighted sum of my query terms against its index including user ranked weights, individual behaviour weights and pagerank.

Statistics is not boring. Far from it. At its heart it is the beautiful and twisted cousin of logic and reason. It is deceptively powerful. It gives you the chance to throw your pennies in the well and get an answer back. Most of all, statistics is agnostic, subjective and human. Unlike the godlike sentience of logic and reason, statistics is the devil inside. For that reason I love it.

The famous hobbyist mathematician Pierre de Fermat had an amusing habit back in his time. He'd make these private breakthroughs about number theory or physics or geometry. Then he'd invent a problem that could be solved with his discovery and tease intelligent friends of his with the challenges. Through this challenge, he engaged in enjoyable communication with researchers while providing edification to his friends/victims. Project Euler (http://projecteuler.net/) is a modern-day Fermat, and to a lover of puzzles it is a highly addictive substance.

Project Euler is composed of a small collection of problems, all of which require some math to solve, and most of which are best-solved with the aid of a programming language. Some patient and skilled solvers use pencil and paper exclusively. Others even use Excel, which some computer scientists might sneer at. In the tradition of the hobbyist mathematician, little previous knowledge is needed to work through the problems. They start out simple and build on each other. Further, each correct answer unlocks a forum thread which provides greater insight from other solvers.

Eponymous

The site's name came from Colin Hughes, who used the handle euler and started the project as a sub-section of a forum back in 2001. Now, many years later, the project has its own site, and several people work together to make new and interesting problems, with one posted about every two weeks.

Exponential Growth

If you have no programming experience before Project Euler, you will quickly become familiar with the concept of exponential growth. Almost all problems give a test case in their description, which can lure one into a false sense of early success. For instance, if you are asked to find all numbers under a million with a certain property, the test case might give the answer for all of the numbers under 100.

Often a naive solution that provides accurate results for the test case will run quickly, say in 10 seconds. But then, when run on numbers up to 1,000, it takes 100 seconds. And for numbers under 10,000 it takes 1,000 seconds. And then you may start wondering whether it's worth waiting over a day for your (hopefully correct) answer. Often this means that you have to think about multiple ways of solving the same problem before you find a solution which is both correct and fast enough.
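A hypothetical example of the trap (not an actual Project Euler problem): summing the primes below some limit by trial division sails through the test-sized case, while a sieve is what survives the real one. Both return the same answers; only one of them finishes before you lose interest:

  def primes_sum_naive(limit):
      """Trial division: fine for limit = 100, painful for limit = 1,000,000."""
      return sum(n for n in range(2, limit)
                 if all(n % d for d in range(2, int(n ** 0.5) + 1)))

  def primes_sum_sieve(limit):
      """Sieve of Eratosthenes: the same answer in a tiny fraction of the time."""
      is_prime = [True] * limit
      is_prime[:2] = [False, False]
      for p in range(2, int(limit ** 0.5) + 1):
          if is_prime[p]:
              for multiple in range(p * p, limit, p):
                  is_prime[multiple] = False
      return sum(n for n, flag in enumerate(is_prime) if flag)

  assert primes_sum_naive(100) == primes_sum_sieve(100) == 1060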

Ultra-Geeky tangent

Personally, I got drawn into playing with Project Euler by a friend of mine who was bemoaning a difficult problem with a recursive solution. It was running insanely slowly, and he was sure something was wrong with his algorithm, since he was using memoization to reduce the recursive calls. I thought I'd be clever and write the solution using C++ template meta-programming, since memoization is so trivial in that paradigm. Not only that, my program would be a single line of code, and the executable would run in a blink of an eye because it was printing out a constant. All of the calculation time would have already happened during compilation.

For the test case, my solution worked perfectly. But for the actual problem? gcc crashed after two minutes of compiling and told me to submit a bug report. Whoops.

I solved it in Python instead.
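For what it's worth, the memoization my friend wanted is nearly free in Python; something along these lines, using an illustrative counting recurrence rather than the actual problem:

  from functools import lru_cache

  @lru_cache(maxsize=None)
  def ways(total, largest):
      """Count the ways to write `total` as a sum of parts no bigger than
      `largest` -- a recursion that explodes without the cache above."""
      if total == 0:
          return 1
      if total < 0 or largest == 0:
          return 0
      return ways(total - largest, largest) + ways(total, largest - 1)

  print(ways(100, 100))   # 190569292 -- the number of partitions of 100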

Esoteric Language

As with any specialized field, otherwise-rare languages which are particularly well suited to the problem space are often over-represented. For code golf, GolfScript is king. For Project Euler, there's more of a senate of odd-bodies, but the best-performing solvers use Frink, PARI/GP, Magma, MUMPS, Mathematica, and J. If you have never heard of most of those, don't worry, neither had I. Frink, PARI/GP, Magma, and Mathematica are all software specifically intended to aid mathematical calculations, so they aren't so surprising.

J, however, is a different beast entirely. It is an array programming language in the same tradition as APL. And it is worth special mention for the solutions written in it.

The solution forums provide a few cool possibilities. For one, they allow people to expand upon the mathematical theory that underlies the problem, why it was chosen, and so on. Sometimes one might stumble upon these things researching a problem in order to solve it, but there's nothing like the directed insight of someone with a passion for the subject. For another, it is a place for people to post the code they used to solve the problem and to explain what it does. However, in some cases, the explanation is left off.

J programmers, in particular, seem to be fans of making punctuation-dense code which is hard to decipher. Here is an example solution to one problem by solver VrAbi:

+/?,/,/'(!10000;!1000){r*"123456789"~{x@

I would have warned you about spoilers if I had felt there was any risk a casual observer could decipher what was going on in that code.

Although I am vexed to a small degree by those who feel proud enough to display their code but not generous enough to explain it, I also delight in it just a little. They are providing the same sort of teasing puzzle that I just solved, wrapped up in that very same problem. Devious.

Easy Mode

As the Project notes itself, there are now solutions out on the web for most Project Euler problems, both code and answers. Part of the intent of having private forums is to prevent someone accidentally spoiling a problem with a Google search, and that's now lost. On the other hand, it's nice that if a problem really has you stumped, you can still learn something by reading through somebody else's thought process to solve it. It does take away part of the fun of the ladder climb knowing that the rankings are rife with people who didn't do the hard work to get there, but there are always going to be people who game an automated system.

Despite the drawbacks of the mystery-destroying aspects of the Internet, searching for theory and background on a problem can result in some fairly neat discoveries. Being able to do research as easily and quickly as writing a program or a proof has fundamentally changed the nature of learning about math.

In one of the many cases of exponential difficulties, I had programmed a correct solution. It worked beautifully for the test case of a few items, but slowed down to a crawl as it progressed. It was, as I calculated, likely to finish only a little while after I perished of old age.

Since this problem was based on finding elements in a series, I took the first 20 items of the series and searched for them in a mathematical database, the On-Line Encyclopedia of Integer Sequences. This gave a result which was exactly the series I needed, with references to multiple papers on related subjects, but only the first 100 elements. I was already generating that many terms on my own, so it didn't get me any closer to a solution. However, I then read about those subjects on MathWorld, and I used that knowledge to vastly improve my algorithm and solve the problem. A combination of independent analysis and Internet-aided study won the day in a very pleasing evolution.

Epilogue

Although lured in by the idea of doing a few neat math tricks to show off to my friend, I've continued to solve problems on Project Euler for a number of reasons, and I think they apply to anyone with an interest in solving math puzzles.

  • Firstly, it is a great way to learn more about a programming language. Each problem is small enough not to be daunting, but introduces some novelty which often requires using a new facility of the language. Thus far I've solved 85 problems, all using Python, a language I did not know when I found Project Euler, and with which I am now surprisingly comfortable.
  • Secondly, the problems themselves are intriguing, teaching things about not just mathematics but also programming tricks and optimizations in general. It is simply not possible to solve some of the problems without learning important lessons in algorithms. The problems did start out very easy for me due to some prior experience with similar challenges — I breezed through the first 49 in one day, at which point the site told me to chill out for a while. But almost all interested parties will eventually meet problems with difficulty matching or exceeding their current capabilities.
  • Thirdly, it's self-paced. You can race to solve the latest problems for prestige, casually pick out interesting looking items from the middle, or step your way through all 300+, learning along the way. You can pick up any problem at any time with only the time constraints you place on yourself.
  • Finally, there is this little bar on the site which slowly fills up as you get correct answers. Mmmmm, experience points.

If any of this sounded intriguing, I'd suggest you give Project Euler a shot. I find it's the most professionally-justifiable entertainment I engage in.

Ahh, Accretion Discs. These are astrophysical bodies which are interesting for several reasons - one being that they are incredibly persnickety about which equations they obey.

What are they?

As siren correctly says, they are a disc of matter falling onto a central body such as a star or a black hole. This matter was usually taken from another nearby star, and so is made up of hot plasma, and has some angular momentum relative to the fixed central star which it has to lose. So, we see the fluid heating itself up as it flows around the star, losing energy and falling inwards to lower orbits - but it has to conserve this angular momentum, which it can't do by falling inwards.

So it passes it outwards. Accretion discs (unless they hit something at the outer edge) will have most of their matter falling inwards, with an outer edge spreading away to carry off the unwanted angular momentum.

What's interesting?

Well, the most obvious reason is that these discs can release an amazing amount of energy. Every piece of matter has an energy associated with it - you've heard it before:

E = m c ²

This is a lot of energy for each little bit of mass. It's usually inaccessible, but in an accretion disc falling into a black hole, some back-of-the-envelope calculations show that you'd expect over half of this energy to be converted from matter into heat. This heat makes things hot (unsurprisingly) and so these discs are like furnaces - furnaces in which interesting physical processes take place. To understand the processes, we need to understand the furnaces first.

There's another reason, in that hard problems are inherently interesting. Occasionally the disc appears to just decide to stop being a disc and shoots jets of matter out of its sides - which is bizarre. They also have huge amounts of turbulence inside, which makes them even harder to model - along with the fact that they also have their own dynamos (magnetic fields which they generate themselves, and don't die away).

Essential details

The shape of an accretion disc, viewed from above, will be more or less circular. But we need to consider the thickness of the disc - or, if we're not that interested in the internal structure, we need to claim that the disc is thin, so that, if the thickness of the disc is H (or 2H, depending on your convention), a function of R and θ, we have

H << R

for all R, θ. This is not the whole story, since discs don't have to be symmetric top and bottom - indeed, they usually aren't. If the angular momentum in the disc isn't aligned with the angular momentum of the central body (in other words, if they aren't spinning in the same plane) then the inner edge of the disc will become warped. This warping will then propagate outwards as what are called warp waves (like on a Slinky that you've sharply tugged to the side at one end).

Where will the disc start and end? Well, starting is easy. It'll start wherever the mass is coming from - usually the distant sun in a binary system. What I said above about the outer edge spreading out isn't quite true, since any mass trying to spread out past the distant sun will (I believe) just get sucked back onto the distant sun via gravity. Where the innermost edge lies is discussed briefly in each section below.

Around Black Holes

When we're close to a black hole, we have to start worrying about general relativity. As I'm no general relativist, I have to take the following on trust: black holes can be characterised by three properties alone: their mass (M), their dimensionless angular momentum (a), and their charge (Q). The charge is believed to be zero, in general, so we just worry about the first two. As a good approximation we can write down a classical potential which takes account of both M and a.

Φ = - G M / (r + |a|)

Here, r is the distance to the centre of the black hole, the event horizon is around 2GM (being distorted by the spinning) and the ISCO (Innermost Stable Circular Orbit) is around 3GM (in units where I've set the speed of light c = 1).

This ISCO is where the innermost edge of the disc must be (unless there's something making it be even further out). We could calculate κ, the precession frequency and see that it becomes imaginary beyond this point - particles attempting to follow a stable circular orbit find their orbit growing exponentially erratic, and quickly fall in. Note that this inner edge is far outside the event horizon.

Around Stars

When the central object is a star we see a different behaviour at the inner edge. Plasma is free to just fall into a black hole, and devil take the consequences. On the surface of a star, we must have continuity in the fluid velocity (the outer layers of the star aren't allowed to just slide over the inner layers - there has to be a thin boundary layer between them), and so we see a hot region where the disc matter is decelerated onto the surface of the star.

I say decelerated because if the star were spinning faster than the inner edge of the disc, the outer layers of the star would just break off and fly away. You can see this by thinking about little fluid parcels in the disc and then on the star. If a fluid parcel on the surface of the star were going around faster than it would simply by orbiting the central mass, it would just fly off - and end up in the disc.

We can ask how hot this boundary layer is in relation to the rest of the disc by writing down the following dimensionless quantity:

B = (1 - Ω_s / Ω_d)^α

Here Ω_s and Ω_d are the angular frequencies of the star's outer edge and the disc's inner edge respectively, and α is some dimensionless exponent (which turns out, for a Keplerian disc, to be 2). Note that, as expected, if the disc's inner edge is rotating perfectly with the star's outer edge then we won't see a boundary layer; there is no need to accelerate the fluid.

Binary Systems

If, as I stated above is usual, we have a binary system, then we see some interesting behaviour. I'll call the star which is more massive A and I'll call the other one B. In the derivations I've seen we assume A to be much more massive than B, and so we can assume that A is nearly fixed as it was above.

Just as, on Earth, the Moon rotating around the Earth causes tides, so does B orbiting around A cause tides in A. It also sets up resonance behaviour in the disc at specific radii, the values of which can be easily calculated. If B is supplying the matter to A (as in Cygnus X-1, with the blue giant supplying mass to the black hole) and the outer edge of the disc is at the radius of B, then these resonant regions start close to the center of the system and spread out in a fairly uninteresting way.

If we have the mass coming from outside of both A and B (from another source such as a third nearby star or from a nebula) then the resonances occur at radii which (as 'n', the number used to label each resonance, tends to infinity) tend to a finite radius inside the disc. At this finite radius we would have an infinite number of resonant regions within an infinitesimal distance, and plainly our model has broken down. Nevertheless, we might expect the disc to have an inner edge here due to the tidal effects of B disrupting the disc.

Physical Challenges

There is a rather extreme difficulty in the modeling of the evolution of an accretion disc. They are made up of ionised plasma - the compounds contained within have split into their constituent ions and electrons, and so the fluid can support electric currents and magnetic fields. These magnetic fields lead immediately to the magnetorotational instability in which even a tiny, tiny magnetic field is enough to make the flow become turbulent.

This instability is called a 'local' instability - it doesn't care about anything that's happening far away; only things happening nearby. We can define a 'lengthscale' - sort of the size of the region the instability cares about, and find that it is extremely small when compared with, say, the radius of the disc. Let's pluck a number out of the air and say it's a million times smaller; that's not unreasonable.

This means that if we want to model the disc as a whole on a computer, we need a very, very fine grid everywhere - the physical gaps between our data points must be small enough so that we pick up this very important magnetorotational instability. But to cover the whole disc, we'll need lots of them - a million in the radial direction! That is far, far too many; a good, publishable simulation might have 256 grid points in the radial direction, and still take an age to run. Our computers simply aren't big enough in 2009 to handle both the tiny scale of the instability and the huge scale of the disc radius.

So what can we do?

Well, we can forget entirely about the magnetic field and simulate somewhat unrealistic discs in their entirety. This is unsatisfactory but doable; if we're more interested in how planets form in a young solar system, and we hope that the magnetic field doesn't play too big a role, then we'd be justified in taking this route.

Most of the current focus lies on examining the scale of this instability and really trying to get a grip on it. When magnetic fields are swept around by a fluid they collide and re-connect and intermingle and absorb energy and release energy and slow the fluid down in a complicated way; there's a lot of work to be done on this.

Once we've understood the small scale (the instability), the next step will be zooming out to the large scale (the entire disc). We can't just plonk our small scale into our large scale simulations - that won't solve the problem of how large our simulation would have to be. We need to find approximations which can capture on the large scale the interesting effects of the complicated motion on the small scale without having to worry about what those motions are in each and every case.

This has been attempted, to some extent. We can say that the disc loses kinetic energy roughly proportional to the fourth power of the magnetic field, looking at the output of our simulations (known as the ad hoc 'alpha parameter'). But how to evolve this magnetic field in time without knowledge of the small scales? In what direction should it point? There remain many unanswered questions.

Technical Discussion

So, we have a system which will always be turbulent; observations show us that there is some kind of fluid dynamo occurring; the magnetic field grows and shrinks with time, and lasts far longer than it could if there weren't some kind of fluid phenomenon replenishing it - Ohmic dissipation would kill it off eventually otherwise. My comment above about 'in what direction should the magnetic field point' was not a throwaway remark; if we expect this turbulence to create an effective magnetic and viscous diffusivity which enables the disc to accrete, then we expect that these diffusivities might be anisotropic. After all, we have two distinguished directions: the local direction of the magnetic field, and the large-scale gravity gradient which will point mostly towards the central object.

So we have a time-dependent magnetic field which is creating (slow-timescale) time dependent turbulence, which is creating some kind of time dependent effective anisotropic viscosity and diffusivity. Currently, we don't understand the magnetic field; we don't understand turbulence; we don't understand what forms these effective diffusivities should take.

There are a lot of unanswered questions.

Transitivity is a property of relations. Unlike a lot of other mathematical language, it's very intuitive and easy to grasp, so I don't feel bad for writing it in hard math form.

A relation between two sets, A and B, is a set of ordered pairs whose first elements are in A and whose second elements are in B. So the equality relation on the natural numbers looks like {(0, 0), (1, 1), (2, 2), ...}. Or I could form a relation of the Presidents and their years of inauguration like so: {(George Washington, 1789), (John Adams, 1797), ...}.

Okay, that's pretty basic. Well, a relation from a set to itself has the transitive property if, whenever aRb and bRc (that means (a,b) and (b,c) are in the relation), we also know aRc. We want this property in our very nicest classes of relations: equivalence relations, which must also be reflexive and symmetric, and orders, which must also be antisymmetric. (Weak orders are reflexive, strict orders are irreflexive.)
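Because the definition is so mechanical, it's easy to check by brute force; a small sketch (a made-up helper, representing the relation as a set of pairs):

  def is_transitive(relation):
      """True if, whenever (a, b) and (b, c) are in the relation, so is (a, c)."""
      pairs = set(relation)
      return all((a, c) in pairs
                 for (a, b) in pairs
                 for (b2, c) in pairs
                 if b == b2)

  print(is_transitive({(1, 2), (2, 3), (1, 3)}))   # True
  print(is_transitive({(1, 2), (2, 3)}))           # False: (1, 3) is missing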

A lot of the real life relations we use are transitive. "Weighs more than" or "is taller than" are good examples, and note that we can use these to order people. Douglas Hofstadter had an example in which the relation was "wrote a book, one of whose characters was." On the equivalence relation side, how about "is in the same family as"? On the other hand, a lot of the problems with running tournaments come from attempting to rank people (or teams) with the relation "could beat," which decidedly isn't transitive.