This is the E2 usergroup e2, which was originally a proper subset of the e2science group (e^2 ⊂ e2science). At first, the group name was e π i + 1 = 0, but some whiners thought that name would make it too hard to send a message:
/msg <var>e</var><sup>_π_<var>i</var></sup>_+_1_=_0 After typing that, I forgot what I was going to say.
So here we are instead with a simpler (but more boring) name
e2theipiplus1equalszero. Update: more complainers. Now we're just
e^2. (Now does that mean e² or
e XOR 2? That is my secret.) Tough luck for those without a caret key.
e^2 often erupts into long mathematical discussions, giving members more /msgs than they care to digest. So, you have a few other options if the math is going to get seriously hairy:
- Send to only those members of the group currently online:
/msg? e^2 Brouwer was a Communist!
- Speak in the chatterbox. But be prepared to give non-math noders headaches.
- Add the room Mathspeak to the list of rooms you monitor in squawkbox or Gab Central. Update: Mathspeak died of loneliness.
You may want to read some of these while you are calculating ln π.
Before you get yourself too exercised here, be aware that there are two vastly different mathematical concepts that could take this moniker when given the status of a basic assumption of a logical system.
In the late 1800s, Cantor's theory of transfinite sets ignited fierce controversy. The mathematicians supporting him tended to be younger and more flexible in their thinking, and those opposed tended towards being of the old guard. As a result, Cantor found it impossible to get a job more prestigious than a low-paying lectureship at the University of Halle. This frustrated Cantor to no end, and he became obsessed with proving one concept that he thought would vindicate his ideas: that the (infinite) cardinality of the real numbers was the next higher (infinite) cardinality from the integers.
This became known as the Continuum Hypothesis. As mathematicians are wont to do, the concept was eventually abandoned in favor of a broader, more general concept, that the cardinality of any transfinite set's power set was the next higher cardinality from the set itself.
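In symbols (the standard notation, not part of the original writeup), the two hypotheses read:

```latex
% Continuum Hypothesis: the reals have the first cardinality above the integers
\mathrm{CH}:\qquad 2^{\aleph_0} = \aleph_1

% Generalized Continuum Hypothesis: every power set has the next cardinality up
\mathrm{GCH}:\qquad 2^{\aleph_\alpha} = \aleph_{\alpha+1}
\quad\text{for every ordinal } \alpha
```

Here 2^ℵ_α denotes the cardinality of the power set of a set of cardinality ℵ_α.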
It's easy to show that the power set is bigger, not easy to show that there's nothing in between or off to the side.
Well, actually, it's impossible. Mercifully, perhaps, Cantor did not live to see 1963, when Paul Cohen showed that the Generalized Continuum Hypothesis was independent of the "lesser" axioms of set theory. This means that the GCH can be "true" if you want it to be true. It can also be "false" if you want it to be.
When such a situation arises, it's often useful to assume one or the other and see what the consequences are. Unless you're an intuitionist and consider the whole idea as so much wasted breath. For the rest of us, the GCH is one of the concepts that we might apotheosize into an axiom.
But there's a much more restricted idea, which is to simply assert that
the real numbers exist
as a set, or equivalently, that all of the sets of natural numbers can be collected into a set (the "power set" of the natural numbers). But the axiom only lets you do this with the natural numbers; not any other sets.
By necessity, this set has a larger (infinite) cardinality than the natural numbers, but this has nothing to do with the Continuum Hypothesis; we don't care whether there are larger cardinals, ones in between, or even any off to the side.
This is an optional axiom introduced by Paul Bernays in his expansion of Gödel-von Neumann set theory. He labels it "AC", which is unfortunately easy to confuse with the axiom of choice. So we'll call it "ACo".
GCH and ACo have different ideas at their base, and interoperate with the other "exotic" axioms in different ways:
- GCH requires the axiom of infinity to be true and implies ACo, the potency axiom, and the axiom of choice.
- ACo implies the axiom of infinity. Although it's independent from the axiom of choice and the potency axiom, the Axiom of Choice is required for the full spectrum of analysis, and with the Axiom of Choice it implies the potency axiom and hence GCH.
The biggest consequence of ACo is the necessary existence of non-computable real numbers. With the axiom of infinity but not ACo, you are left with the rational numbers and computable irrational numbers. But you have lots of problems working with the convergence of arbitrary sets of real numbers. Which suits the intuitionists just fine.
I've always hated statistics. Nothing smells more like accountancy, rimmed glasses, bookkeeping, and horrible little news reports than statistics. A career in statistics was the image of compiling endless financial reports to a stony board of directors in an attempt to squeeze out those few more dollars from the public. It was the lowest. It was the selling of a mathematical mind to the machine and the end of all beauty and expanse. There was no doubt in my mind that statistics was simply evil.
So it was a mysterious change when it happened, and it all began with a search engines module at University. This was easily one of the best courses I took in my time at University, and from the beginning of the course what became most clear was that making an effective search engine had nothing to do with understanding the English language, with extracting semantic meaning from queries or documents, or with logic, reason, or human experience. It was all to do with raw, unadulterated statistics.
And suddenly I saw the glint of gold. I saw a promise in statistics. Hiding beneath dusty logarithm lookup tables and hypothesis testing was the promise of an Oracle Machine. Something that could be queried and provide answers in milliseconds. This was knowledge like it had never been seen before, and yet it had nothing to do with knowledge, logic, semantics, or meaning. It was just numbers, just data and statistics and a query box. Ultimately the question in my mind was "how can this be?", and secondly, against my better judgement, "how can I get it?".
An internet search engine relies on the systematic de-construction and processing of text. The text is crippled; stripped of meaning until it is completely void and will fit into nice neat data structures for processing. Only then would the data shine through. And once the numbers were ready, the statistical algorithms could roll along and process the data. Finally the questions we all had could be answered in the blink of an eye. Building a search engine is not re-inventing the wheel, it is rediscovering the holy grail.
The first thing to go is syntax. The hierarchy of language, which structures and subjugates words into a towering tree, is unimportant under statistics. All web pages, documents, and queries are reformed and stored as jumbled lists of words. Context is not truly lost. Those words which often are together are still in association via their combined presence in a list. Everything is just a little more anarchic. The words have been freed of their sentences. There is no longer a primary verb, or a root pronoun. Under the statistical system all words are equal, and as you would expect, some are more equal than others.
The important words are those which do not occur often. "The" is largely a useless citizen; syntactic glue. No room is left in our system for such common words and where possible they are removed. The "aardvarks" and "armamentaria" are king, because you can be sure if they exist in a query then they must be key. So how are these statuses assigned? Not by some governing hand. We look toward the Laws of Text, Zipf's law and Heaps' law. These laws tell you, in beautiful fairness and balance, the relative importance of words in a language. Even the numbers and numerals can be governed using Benford's law. Nothing is left to chance, all is mathematical.
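As a sketch of how this looks in practice, here is a toy bag-of-words count in Python (the three "documents" are invented for illustration). The glue words dominate the totals just as Zipf's law predicts, which is why they are the first candidates for removal:

```python
from collections import Counter

# A toy corpus: three tiny "documents" (made up for illustration).
docs = [
    "the aardvark ate the ants in the garden",
    "the gardener saw the aardvark in the morning",
    "the ants built a nest in the garden soil",
]

# Bag-of-words: each document becomes an unordered multiset of words.
bags = [Counter(doc.split()) for doc in docs]

# Collection frequency: how often each word appears overall.
totals = Counter()
for bag in bags:
    totals.update(bag)

# A handful of words ("the", "in") dominate the counts, while the
# informative words ("aardvark", "nest") sit in the long tail.
for word, count in totals.most_common(5):
    print(word, count)
```

With real corpora the imbalance is far more extreme, which is exactly the case for throwing the common words away.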
But all this raises the question. Do we really need words in the first place? Is this bureaucracy? Can something smaller suffice - say, a symbol, a letter? In languages such as Japanese, with no spaces to separate words, we can simply assume that each overlapping pair of symbols, as well as each individual symbol, is a word in its own right. As we accumulate more and more web pages and documents, the pairs which are actually words will continue to appear, while those which are not words will not. It soon becomes clear what is, and what isn't, a word. This system of taking N-grams, overlapping sets of symbols, is effective. Even more effective than just splitting on words. Even in European languages. The reason is that we can match policeman with policemen, even with no idea of the semantic relationship. Words are not required. A good search engine can use just symbols.
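A minimal sketch of the idea in Python (the word pair is my example, not taken from any particular engine): overlapping character N-grams let "policeman" and "policemen" match on their shared fragments, with no morphology involved:

```python
def char_ngrams(text, n=4):
    """Overlapping character n-grams, ignoring word boundaries."""
    text = text.replace(" ", "")
    return [text[i:i + n] for i in range(len(text) - n + 1)]

# "policeman" and "policemen" share most of their 4-grams, so they
# match each other without any knowledge of English inflection.
a = set(char_ngrams("policeman"))
b = set(char_ngrams("policemen"))
shared = a & b
print(sorted(shared))  # ['icem', 'lice', 'olic', 'poli']
```

A ranking function then scores on shared grams instead of exact word matches, so near-misses still surface.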
We note that in using N-grams, the more documents you have the better. This is another devilish aspect of statistics. In statistics, more is more. You can never have too much data. The reason is simple. Signal adds up and noise cancels out. More precisely, when you have more data, the probability of something becoming statistically significant by chance is lessened, while the probability of something becoming statistically significant by actuality is increased. In a logical formula, the smaller the formula the better. But in our search engine, the more websites scanned the better - even if what they contain is largely junk.
So now our documents are simply jumbled lists of words and their relative importance. We have spiders crawling the web and accumulating more data for us, and we have an index slowly ticking over and processing the document data. All that remains now is to design the statistical models via which we rate our documents for a given query. Because of our destruction of the text we can build effective data structures and feed them into a huge database. The final step is just to turn it on.
A human-free system. A system of knowledge automated by the cold clicking hands of a computer.
The secret in statistics is rather simple. The power it provides is a concise mantra. Through logic, deduction, and mathematical proof one can derive true answers to precise questions. Statistics, on the other hand, can provide compelling answers to all questions. Statistics focuses on the question rather than the answer. As Douglas Adams revealed, if you wish to know the answer to the meaning of life, the universe and everything, you must first know the question.
The difference can be shown with a trick. When queried with the question...
"How many legs does the average person have?"
- A logical system will answer ~1.99
- A statistical system will answer 2
A good logical system will know the answer to a question.
A good statistical system will know what question you are asking.
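The trick runs as stated with Python's standard statistics module, given a toy population (invented here) in which a few people have fewer than two legs:

```python
import statistics

# Toy population: most people have two legs, a few have fewer.
legs = [2] * 997 + [1] * 2 + [0] * 1

# The "logical" answer: the arithmetic mean, slightly below 2.
print(statistics.mean(legs))   # 1.996

# The "statistical" answer the questioner wanted: the mode.
print(statistics.mode(legs))   # 2
```

Both numbers are correct; they are answers to two different formalisations of the same casual question.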
This is both the beauty and the danger of statistics. Much like a search engine, the goal of a statistical system is to tell you exactly what you want to hear. A statistical system does not answer a question with the precision and truth of a logical system, but it should capture the absolute intuition of what you are asking from it. It will know when "average" is translated to "mean" and when one really intends for the "mode". When a CEO asks his statistician "is the company doing good?", a good statistician will formalise and calculate the exact notion the CEO holds of "company doing good", and present it to the CEO.
The danger comes when the intuitive notion of "company doing good" differs from person to person. Perhaps the CEO is unconcerned with the variable counting toxic waste dumped, while a citizen rates this variable highly on their intuition of that evaluation. The power of statistics comes from its subjectiveness and lack of true meaning, but it is also its heel.
What really is the "mean" or the "standard deviation" other than the formalisation of some human intuition? In exact terms the mean is not "the average" because, as we discussed above, that is a subjective and relative notion. The mean is only itself - that is, the sum of all data points divided by the count of data points. The same is true for search engines. If I search for "The Best Page In The Universe" Google does not return the best page in the universe. It returns the tf.idf weighted sum of my query terms against its index, including user ranked weights, individual behaviour weights and pagerank.
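To make "only itself" concrete, here is a minimal tf.idf scorer in Python. The documents and query are invented, and this is only the textbook weighting - not Google's actual ranking with its user and behaviour weights:

```python
import math
from collections import Counter

# Toy index (invented documents).
docs = {
    "d1": "the best page in the universe",
    "d2": "the universe of discourse",
    "d3": "all the best practices for pages",
}
bags = {name: Counter(text.split()) for name, text in docs.items()}
N = len(docs)

def idf(term):
    # Inverse document frequency: rarer terms get larger weights.
    # "the" appears everywhere here, so it scores exactly zero.
    df = sum(1 for bag in bags.values() if term in bag)
    return math.log(N / df) if df else 0.0

def score(query, name):
    # tf.idf: term frequency in the document times the term's idf.
    bag = bags[name]
    return sum(bag[t] * idf(t) for t in query.split())

ranked = sorted(bags, key=lambda name: score("best universe", name), reverse=True)
print(ranked)
```

The engine returns whichever document maximises this sum; whether that is "the best page" is a question the formula never asks.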
Statistics is not boring. Far from it. At its heart it is the beautiful and twisted cousin of logic and reason. It is deceptively powerful. It gives you the chance to throw your pennies in the well and get an answer back. Most of all, statistics is agnostic, subjective and human. Unlike the godlike sentience of logic and reason, statistics is the devil inside. For that reason I love it.
The famous hobbyist mathematician Pierre de Fermat had an amusing habit back in his time. He'd make these private breakthroughs about number theory or physics or geometry. Then he'd invent a problem that could be solved with his discovery and tease intelligent friends of his with the challenges. Through this challenge, he engaged in enjoyable communication with researchers while providing edification to his friends/victims. Project Euler (http://projecteuler.net/) is a modern-day Fermat, and to a lover of puzzles it is a highly addictive substance.
Project Euler is composed of a small collection of problems, all of which require some math to solve, and most of which are best solved with the aid of a programming language. Some patient and skilled solvers use pencil and paper exclusively. Others even use Excel, which some computer scientists might sneer at. In the tradition of the hobbyist mathematician, little previous knowledge is needed to work through the problems. They start out simple and build on each other. Further, each correct answer unlocks a forum thread which provides greater insight from other solvers.
The site's name comes from Colin Hughes, who used the handle euler and started the project as a sub-section of a forum back in 2001. Now, many years later, the project has its own site, and several people work together to make new and interesting problems, with one posted about every two weeks.
If you have no experience programming before Project Euler, you will quickly become familiar with the concept of exponential growth. Almost all problems give a test case in their description which can lead one into a false sense of early success. For instance, if you are asked to find all numbers under a million with a certain property, the test case might give the answer for all of the numbers under 100.
Many times a naive solution which will provide accurate results for the test case will run quickly, say in 10 seconds. But then, when tested for numbers through 1,000, it takes 100 seconds. And for numbers under 10,000 it takes 1,000 seconds. And then you may start wondering if it's worth over a day for you to get your (hopefully correct) answer. Often this means that you have to think about multiple ways of solving the same problem before you find a solution which is both correct and fast enough.
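As an illustration of the trade-off, take a hypothetical problem of the usual shape - sum the primes below some limit. Trial division per candidate is fine for the test case; a Sieve of Eratosthenes handles the full range in one pass:

```python
def primes_naive(limit):
    # Trial division for each candidate: fine for the test case,
    # painfully slow as the limit grows.
    out = []
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            out.append(n)
    return out

def primes_sieve(limit):
    # Sieve of Eratosthenes: cross off multiples in one pass.
    is_prime = [True] * limit
    is_prime[:2] = [False, False]
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            is_prime[n * n::n] = [False] * len(is_prime[n * n::n])
    return [n for n, p in enumerate(is_prime) if p]

# Same answer either way; only the running time differs.
assert primes_naive(100) == primes_sieve(100)
print(sum(primes_sieve(100)))  # 1060
```

The naive version redoes the same divisions for every candidate; the sieve shares the work across the whole range, which is the kind of restructuring these problems force on you.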
Personally, I got drawn into playing with Project Euler by a friend of mine who was bemoaning a difficult problem with a recursive solution. It was running insanely slowly, and he was sure something was wrong with his algorithm, since he was using memoization to reduce the recursive calls. I thought I'd be clever and write the solution using C++ template meta-programming, since memoization is so trivial in that paradigm. Not only that, my program would be a single line of code, and the executable would run in the blink of an eye because it was printing out a constant. All of the calculation time would have already happened during compilation.
For the test case, my solution worked perfectly. But for the actual problem? gcc crashed after two minutes of compiling and told me to submit a bug report. Whoops.
I solved it in Python instead.
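My friend's actual problem isn't reproduced here, so as a stand-in, here is what memoization looks like in Python on the classic Fibonacci recurrence - one decorator does what my template meta-programming stunt was trying to do at compile time:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Without the cache this recursion is exponential;
    # with it, each value is computed exactly once.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(100))  # 354224848179261915075
```

The cache turns an exponential tree of calls into a linear chain, which is the whole trick.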
As with any specialized field, languages that are particularly well suited to the problem space are often over-represented, however rare they may be elsewhere. For code golf, GolfScript is the king of town. For Project Euler, there's more of a senate of odd-bodies, but the best performing solvers use Frink, PARI/GP, Magma, MUMPS, Mathematica, and J. If you have never heard of most of those, don't worry, neither had I. Frink, PARI/GP, Magma, and Mathematica are all software specifically intended to aid mathematical calculations, so they aren't so surprising.
J, however, is a different beast entirely. It is an array programming language in the same tradition as APL. And it is worth special mention for the solutions written in it.
The solution forums provide a few cool possibilities. For one, they allow people to expand upon the mathematical theory that underlies the problem, why it was chosen, and so on. Sometimes one might stumble upon these things researching a problem in order to solve it, but there's nothing like the directed insight of someone with a passion for the subject. For another, it is a place for people to post the code they used to solve the problem and to explain what it does. However, in some cases, the explanation is left off.
J programmers, in particular, seem to be fans of making punctuation-dense code which is hard to decipher. Here is an example solution to one problem by solver VrAbi:
I would have warned you about spoilers if I had felt there was any risk a casual observer could decipher what was going on in that code.
Although I am vexed to a small degree by those who feel proud enough to display their code, but not generous enough to explain it, I do delight just a little. They are providing the same sort of teasing puzzle that I just solved, wrapped up in that very same problem. Devious.
As the Project notes itself, there are now solutions out on the web for most Project Euler problems, both code and answers. Part of the intent of having private forums is to prevent someone accidentally spoiling a problem with a Google search, and that's now lost. On the other hand, it's nice that if a problem really has you stumped, you can still learn something by reading through somebody else's thought process to solve it. It does take away part of the fun of the ladder climb knowing that the rankings are rife with people who didn't do the hard work to get there, but there are always going to be people who game an automated system.
Despite the drawbacks of the mystery-destroying aspects of the Internet, searching for theory and background on a problem can result in some fairly neat discoveries. Being able to do research as easily and quickly as writing a program or a proof has fundamentally changed the nature of learning about math.
In one of the many cases of exponential difficulties, I had programmed a correct solution. It worked beautifully for the test case of a few items, but slowed down to a crawl as it progressed. It was, as I calculated, likely to finish only a little while after I perished of old age.
Since this problem was based on finding elements in a series, I took the first 20 items of the series and searched for them in a mathematical database, the On-Line Encyclopedia of Integer Sequences. This gave a result which was exactly the series I needed, with references to multiple papers on related subjects, but only the first 100 elements. I was already generating that many terms on my own, so it didn't get me any closer to a solution. However, I then read about these subjects on MathWorld, and I used that knowledge to vastly improve my algorithm and solve the problem. A combination of independent analysis and Internet-aided study won the day in a very pleasing evolution.
Although lulled in by the idea of doing a few neat math tricks to show off to my friend, I've continued to solve problems on Project Euler for a number of reasons, and I think they apply to anyone with an interest in solving math puzzles.
- Firstly, it is a great way to learn more about a programming language. Each problem is small enough to not be daunting, but introduces some novelty which often requires using a new facility of the language. Thus far I've solved 85 problems all using Python, a language I did not know at the start of finding Project Euler, and with which I am now surprisingly comfortable.
- Secondly, the problems themselves are intriguing, teaching things about not just mathematics but also programming tricks and optimizations in general. It is simply not possible to solve some of the problems without learning important lessons in algorithms. The problems did start out very easy for me due to some prior experience with similar challenges — I breezed through the first 49 in one day, at which point the site told me to chill out for a while. But almost all interested parties will eventually meet problems with difficulty matching or exceeding their current capabilities.
- Thirdly, it's self-paced. You can race to solve the latest problems for prestige, casually pick out interesting looking items from the middle, or step your way through all 300+, learning along the way. You can pick up any problem at any time with only the time constraints you place on yourself.
- Finally, there is this little bar on the site which slowly fills up as you get correct answers. Mmmmm, experience points.
If any of this sounded intriguing, I'd suggest you give Project Euler a shot. I find it's the most professionally-justifiable entertainment I engage in.