OR: Guaranteed way to win \$1 by betting on a sequence of coin tosses

A somewhat strange setup, based on betting "double or nothing" on a fair coin with a martingale. While it is not a paradox in the strict sense of the word, the name was given in the 19th century and has since stuck.

This is the precise setup: you may wager any sum of money against the bank. The bank then flips a fair coin. If it comes up heads, you lose your stake; otherwise, you win the amount of your stake (i.e. if you bet \$3 and the coin came up tails, you'll have \$3 more than when you started). Under these conditions, a win of \$1 (or any other sum) is guaranteed by the following:

1. Stake \$1. If you win, you've won \$1 -- leave the game.
2. So now you're \$1 down. Stake \$2! If you win, your net gain is \$2-\$1=\$1, so you've won \$1 -- leave the game.
3. So now you're \$3 down. Double your stake again, to \$4! If you win, your net gain is \$4-\$3=\$1, so you've won \$1 -- leave the game.
...

If you reach step k>1, you've lost \$2^(k-1) - 1, and you wager \$2^(k-1). So the process may continue. Note, moreover, that the only way for the process not to terminate is if all the coin tosses come up heads -- something which occurs with probability 0.
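The doubling scheme above can be sketched in a few lines of Python (the function name and cap on rounds are my own; with the cap at 64 rounds, the chance of ever hitting it is about 2^-64):

```python
import random

def martingale(max_rounds=64, seed=None):
    """Play the doubling strategy until a win (tails) or max_rounds losses.

    Heads = you lose your stake, tails = you win it, as in the setup above.
    Returns the net gain: +1 on any win, or the (huge) total loss if every
    toss comes up heads.
    """
    rng = random.Random(seed)
    lost = 0
    for k in range(max_rounds):
        stake = 2 ** k                 # $1, $2, $4, ... doubling each round
        if rng.random() < 0.5:         # tails: this round's stake is won
            return stake - lost        # stake minus all previous losses = 1
        lost += stake                  # heads: add the stake to the deficit
    return -lost                       # all heads: everything is gone

# Every run that ends in a win nets exactly $1, whichever round it ends on.
results = [martingale(seed=s) for s in range(1000)]
```

Run it a thousand times and every result is \$1 -- which is exactly what makes the scheme look so tempting.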

So you win (almost surely -- this means roughly "always") \$1 every time you do this!

Clearly, something here should be wrong. But a great deal of probability theory had to be worked out to straighten things out!

OR: Guaranteed way to fool gullible people into losing lots of money

This "paradox" is quite simple to straighten out. Yes, with this method, your probability of not winning \$1 halves every step (i.e. approaches zero exponentially). However, at the same time, the amount of money necessary to be able to continue after a loss doubles after every step.

Let's put it like this: the "profitability" of a game is the average amount you win each round. Let's call this G (for goodness). It is equal to the probability of winning P, times the amount to be won W, minus the probability of losing Q, times the amount to be lost L.

P*W - Q*L = G
In the case of a single coin toss, it's simple
½*1 - ½*1 = 0
i.e. you will, on average, break even. What about 2 coin tosses in series, with the second being conducted only if the first is lost, with twice the amount being bet?
¾*1 - ¼*3 = 0
So it's not really any better. What about n coin tosses in series?
(2^n - 1)/2^n * 1 - 1/2^n * (2^n - 1) = 0
Again no difference. The increasingly tiny chance of losing is exactly cancelled out by the increasingly huge amount to be lost, while the amount to be won doesn't change. And that's the theoretical, asymptotic result. In reality, you will eventually (and sooner than you think) end up in a situation where you cannot double the bet because all your money is gone and nobody is stupid enough to loan you any more. And then you're fucked quite thoroughly.
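You can check the cancellation exactly for any number of rounds -- a quick sketch using exact rational arithmetic (the function name is mine):

```python
from fractions import Fraction

def martingale_expectation(n):
    """Expected net gain of the doubling strategy capped at n rounds.

    You win $1 with probability (2^n - 1)/2^n, and lose $(2^n - 1) with
    probability 1/2^n -- the P*W and Q*L terms cancel exactly.
    """
    p_win = Fraction(2**n - 1, 2**n)
    p_lose = Fraction(1, 2**n)
    return p_win * 1 - p_lose * (2**n - 1)
```

Whatever n you plug in, the answer is exactly zero.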

Basically, the belief that one "cannot lose" with this doubling strategy is based on a lack of understanding of how mathematical limits work and/or the skewed perception all too typical for gambling addicts where one only looks at the potential for winning and ignores the consequences of losing.

Ariels' writeup above gives a description of a Martingale System to guarantee winning a fixed amount; but the original St. Petersburg paradox is in fact concerned with a slightly different setup- a game of chance seemingly offering infinite potential winnings- yet one for which you would be hard pressed to find someone willing to pay £100 to play.

The St Petersburg Paradox was first described by Daniel Bernoulli (one of many mathematically notable Bernoullis) in a paper published by the St. Petersburg Academy in 1738 (hence the name; Bernoulli had spent time in St. Petersburg as a mathematics professor, and during that time worked profitably with Euler), although the focus of the paper, entitled "Exposition of a new theory on the measurement of risk", was on probability and expectation in general and not just the so-called paradox.

The scenario is as follows- a game is to be played which involves tossing a coin until a head is obtained. If on the first attempt a head is obtained, you win 1 ducat. If not, then the coin is tossed again- with a stake of 2 ducats. Each time a tail is obtained, the stake is doubled and the process repeated, until, eventually, the coin lands heads up.

Assuming a fair coin, how much would you be prepared to pay to play this game?

The standard mathematical approach to such a question (at least as far as undergraduate stats level) is to calculate the expected return- an average of all possible outcomes, weighted according to the odds of them occurring. For a much simpler example, if I offered you £1 for an outcome of heads and £2 for an outcome of tails, then you'd expect for a fair coin to win on average £1.50; so you'd gladly play if I was charging less than this as (at least in the long run) you should come out ahead. This value hasn't simply been intuitively grabbed out of thin air; it's ½*£1 + ½*£2. To see if you understand what's going on, see if you can verify that the expected (in a mathematical sense) score for one roll of a die is 3.5.
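Both little exercises can be verified in a couple of lines with exact fractions (a sketch of my own, not part of the original):

```python
from fractions import Fraction

# The coin example: £1 for heads, £2 for tails, each with probability 1/2.
coin_expectation = Fraction(1, 2) * 1 + Fraction(1, 2) * 2   # = 3/2, i.e. £1.50

# The die exercise: each face 1..6 comes up with probability 1/6.
die_expectation = sum(Fraction(1, 6) * face for face in range(1, 7))  # = 7/2
```

The die sum works out to 21/6 = 7/2, confirming the 3.5 claimed above.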

What happens if we apply an expectation calculation to the St. Petersburg paradox?

The probability of winning 1 ducat is ½ (the coin lands on heads for the first throw). Failing that, we win 2 ducats if the second toss yields a head but the first was a tail, an event of probability 1/4. Continuing in this vein, we see that the probability of winning 2^n ducats is 1/2^(n+1) (as we are specifying the outcome of n+1 events, each of probability ½). So our sum looks like this-

Expectation = (1/2)*1 + (1/4)*2 + (1/8)*4 +…..
= ½ + ½ + ½ +…

Or more precisely, in plain-text notation: Σ_{n=1..∞} (1/2^n)*2^(n-1) = ½ * Σ_{n=1..∞} (1/2^(n-1))*2^(n-1) = ½ * Σ_{n=1..∞} 1 = ∞
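Since every term is exactly ½, the partial sums grow without bound -- a quick check (function name mine):

```python
from fractions import Fraction

def partial_expectation(N):
    """Sum of the first N terms of the expectation series: each term is
    (1/2^n) * 2^(n-1) = 1/2 exactly, so the partial sum is N/2."""
    return sum(Fraction(1, 2**n) * 2**(n - 1) for n in range(1, N + 1))
```

Ten terms give 5, a hundred give 50, and so on forever.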

In other words, if we really intend the game to last until a head is obtained, our expected payout is infinite! But before you try such a gamble down the pub, consider this- three quarters of the time, your winnings will be just 2 ducats or less (well, more probably 2 quid/dollars, depending on location); and your chance of winning over 100 ducats is only about four in a thousand (specifically, 255/256ths of the time you get 128 ducats or less).
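Those probabilities follow directly from the 1/2^(n+1) weights -- a small sketch (the helper is my own invention):

```python
from fractions import Fraction

def prob_payout_at_most(limit):
    """Probability the St. Petersburg payout is at most `limit` ducats.

    The payout is 2^n ducats with probability 1/2^(n+1), for n = 0, 1, 2, ...
    """
    p, n = Fraction(0), 0
    while 2**n <= limit:
        p += Fraction(1, 2**(n + 1))
        n += 1
    return p
```

`prob_payout_at_most(2)` comes to 3/4, and `prob_payout_at_most(128)` to 255/256 -- matching the figures above.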

Bernoulli's solution revived ideas that would prove revolutionary for economics (I say revived since the Swiss mathematician Cramer independently came up with the same solution some ten years before Bernoulli). The idea was to introduce the concepts of expected utility and diminishing marginal utility. Within this framework, utility from wealth is not linearly related to wealth, but suffers from a decreasing rate of increase relative to wealth. Thus although the prizes increase in size at an impressive enough rate to ensure an infinite expectation, what we should really be considering is the utility of each payout: some lesser amount that takes into account the diminishing marginal utility.

For instance, with a utility model that claims that utility is simply the (base 10) logarithm of the raw cash amount, we get the following sum-

½*log(1) + ¼*log(2) + (1/8)*log(4) + …
= ½*0 + ¼*0.301 + (1/8)*0.602 +…

Here the numerators (the utilities) grow only linearly -- log(2^(n-1)) is proportional to n -- while the denominators grow exponentially in base 2, and thus our series converges to a finite value. Various means of damping the value to provide a utility measure have been suggested which achieve this goal, from the simple (Cramer suggests simply cutting off at a certain value, that is, having a maximum effective payout) to the more complex- square roots or logarithms as used above. In Bernoulli's formulation, the value of the game depended on the wealth you had prior to playing, with only an infinitely-wealthy man expecting infinite utility (and even in that case, if you can consider ratios of infinities, the payout was a tiny proportion of the original wealth).
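Summing the log-utility series numerically shows the convergence; the exact limit works out to log10(2) ≈ 0.301 (truncating at 60 terms is plenty, since the tail vanishes geometrically):

```python
import math

# Expected base-10 log-utility of the St. Petersburg game:
#   sum over n >= 1 of (1/2^n) * log10(2^(n-1))
# = log10(2) * sum of (n-1)/2^n = log10(2) * 1 = log10(2).
expected_utility = sum(math.log10(2 ** (n - 1)) / 2 ** n for n in range(1, 60))
```

So under this utility model the game is worth only about 0.3 units of utility -- a far cry from infinity.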

Somewhat akin to constructing diagonal arguments, it turns out that these solutions are only of merit for the stated problem (or equivalently scaled weightings). For if we set the payouts based not on exponential growth but rather on the growth necessary to offset the decline in utility, whatever the model, we can effectively engineer a new game that now has both infinite expectation and infinite expected utility. We could repeat the process with a more powerful model of utility decline, but that would always be open to an even more outrageously priced version of the game.
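For the log10 model, one such engineered "super-Petersburg" game sets the round-n payout to 10^(2^(n-1)), so that its utility is 2^(n-1) and each expectation term is (1/2^n)*2^(n-1) = ½ again -- the divergence is right back. A sketch (the payouts are astronomical, so we only ever handle their logarithms):

```python
# Utility of the round-n payout 10**(2**(n-1)) is log10 of it, i.e. 2**(n-1);
# weighting by the probability 1/2**n gives exactly 1/2 per term, so the
# expected utility diverges just as the original expected payout did.
terms = [(2 ** (n - 1)) / 2 ** n for n in range(1, 20)]
```

Every term is 0.5, so the partial sums again grow without bound.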

The standard criticism of this game is that no-one can truthfully offer it to you, due to the chance of an unbounded payout. Since no bank could cover that, they'd have to demand an infinite down-payment from you first to cover your later winnings: that is, the only fair price for an infinite expectation game is itself an infinite amount. If you have that to hand, why bother playing? Furthermore, the amount of money in the world (and, more trivially, the length of life that can be dedicated to flipping coins) is bounded above, so the scenario is perhaps just too unrealistic (especially in modified forms to account for utility).

So the St. Petersburg paradox is an interesting insight into the conflicts between mathematical theory and the practical realities of finance; and can also lead into ideas such as risk aversion (have you decided how much you'd be prepared to pay yet? Would it make a difference if you had more money?). It's even been linked to the growth of stocks- by changing the probability structure, the expectation series takes on the form of a discounted series of dividend payments.

References
For the original (save for translation) introduction to the paradox by Bernoulli, see http://www.math.fau.edu/Richman/Ideas/petersburg.htm, which itself quotes from the full translation published in Econometrica 22 (1954), pp. 123-136.

"St Petersburg Paradox and growth stocks" - Journal of Finance; Sep57, Vol. 12 Issue 3, p348-364
