Freddy’s story: part 2

In a previous post I discussed a problem that Freddy.Teuma@smartodds.co.uk had written to me about. The problem was a simplified version of an issue sent to him by a friend, connected with a genetic algorithm for optimisation. Simply stated: you start with £100. You toss a coin and if it comes up heads you lose 25% of your current money, otherwise you gain 25%. You play this game over and over, always increasing or decreasing your current money by 25% on the basis of a coin toss. The issue is how much money you expect to have, on average, after 1000 rounds of this game.

As I explained in the original post, Freddy’s intuition was that the average should stay the same at each round. So even after 1000 (or more) rounds, you’d have an average of £100. But when Freddy simulated the process, he always got an amount close to £0, and so concluded his intuition must be wrong.

A couple of you wrote to give your own interpretations of this apparent conflict, and I’m really grateful for your participation. As it turns out, Freddy’s intuition was spot on, and his argument was pretty much a perfect mathematical proof. Let me make the argument just a little bit more precise.

Suppose after n rounds the amount of money you have is M. Then after n+1 rounds you will have (3/4)M if you get a Head and (5/4)M if you get a Tail. Since each of these outcomes is equally probable, the average amount of money after n+1 rounds is

\frac{ (3/4)M + (5/4)M}{2}= M

In other words, exactly as Freddy had suggested, the average amount of money doesn’t change from one round to the next. And since I started with £100, this will be the average amount of money after 1 round, 2 rounds and all the way through to 1000 rounds.

But if Freddy’s intuition was correct, the simulations must have been wrong.

Well, no. I checked Freddy’s code – a world first! – and it was perfect. Moreover, my own implementation displayed the same features as Freddy’s, as shown in the previous post: every simulation has the amount of money decreasing to zero long before 1000 rounds have been completed.

So what explains this contradiction between what we can prove theoretically and what we see in practice?

The following picture shows histograms of the money remaining after a certain number of rounds for each of 100,000 simulations. In the previous post I showed the individual graphs of just 16 simulations of the game; here we’re looking at a summary of 100,000 simulated games.

For example, after 2 rounds, there are only 3 possible outcomes: £56.25, £93.75 and £156.25. You might like to check why that should be so. Of these, £93.75 occurred most often in the simulations, while the other two occurred more or less equally often. You might also like to think why that should be so. Anyway, looking at the values, it seems plausible that the average is around £100, and indeed the actual average from the simulations is very close to that value. Not exact, because of random variation, but very close indeed.

After 5 rounds there are more possible outcomes, but you can still easily convince yourself that the average is £100, which it is. But once we get to 10 rounds, it starts to get more difficult. There’s a tendency for most of the simulated runs to give a value that’s less than £100, but then there are relatively few observations that are quite a bit bigger than £100. Indeed, you can just about see that there is one or more value close to £1000 or so. What’s happening is that the simulated values are becoming much more asymmetric as the number of rounds increases. Most of the results will end up below £100 – though still positive, of course – but a few will end up being much bigger than £100. And the average remains at £100, exactly as the theory says it must.

After 100 rounds, things are becoming much more extreme. Most of the simulated results end up close to zero, but one simulation (in this case) gave a value of around £300,000. And again, once the values are averaged, the answer is very close to £100.

But how does this explain what we saw in the previous post? All of the simulations I showed, and all of those that Freddy looked at, and those his friend obtained, showed the amount of money left being essentially zero after 1000 rounds. Well, the histogram of results after 1000 rounds is a much, much more extreme case of the one shown above for 100 rounds. Almost all of the probability is very, very close to zero. But there’s a very small amount of probability spread out up to an extremely large value indeed, such that the overall average remains £100. So almost every time I do a simulation of the game, the amount of money I have is very, very close to zero. But very, very, very occasionally, I would simulate a game whose result was a huge amount of money, such that it would balance out all of those almost-zero results and give me an answer close to £100. But, such an event is so rare, it might take billions of billions of simulations to get it. And we certainly didn’t get it in the 16 simulated games that I showed in the previous post.

So, there is no contradiction at all between the theory and the simulations. It’s simply that when the number of rounds is very large, the very large results which could occur after 1000 rounds, and which ensure that the average balances out to £100, occur with such low probability that we are unlikely to simulate enough games to see them. We therefore see only the much more frequent games with low winnings, and calculate an average which underestimates the true value of £100.
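
If you'd like to check this for yourself, here's a minimal sketch in R (not the code used to produce the histograms, but it reproduces the same behaviour). It simulates 100,000 games for various numbers of rounds and prints the sample average and median: the average sits near £100 for small numbers of rounds, while the median collapses towards zero; and with 1000 rounds even 100,000 games are too few to catch the rare huge results, so the sample average collapses as well.

set.seed(1)
n_sims <- 100000
for (n_rounds in c(2, 5, 10, 100, 1000)) {
  # each game multiplies £100 by a random sequence of 0.75s and 1.25s
  final <- replicate(n_sims, 100 * prod(sample(c(0.75, 1.25), n_rounds, replace = TRUE)))
  cat(n_rounds, "rounds: average =", round(mean(final), 2),
      " median =", round(median(final), 2), "\n")
}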

There are a number of messages to be drawn from this story:

  1. Statistical problems often arise in the most surprising places.
  2. The strategy of problem simplification, solution through intuition, and verification through experimental results is a very useful one.
  3. Simulation is a great way to test models and hypotheses, but it has to be done with extreme care.
  4. And if there’s disagreement between your intuition and experimental results, it doesn’t necessarily imply either is wrong. It may be that the experimental process has complicated features that make results unreliable, even with a large number of simulations.

Thanks again to Freddy for the original problem and the discussions it led to.


To be really precise, there’s a bit of sleight-of-hand in the mathematical argument above. After the first round my expected – rather than actual – amount of money is £100. What I showed above is that the average money I have after any round is equal to the actual amount of money I have at the start of that round. But that’s not quite the same thing as showing it’s equal to the average amount of money I have at the start of the round.

But there’s a famous result in probability – sometimes called the law of iterated expectations – which lets me replace this actual amount at the start of the second round with the average amount, and the result stays the same. You can skip this if you’re not interested, but let me show you how it works.

At the start of the first round I have £100.

Because of the rules of the game, at the end of this round I’ll have either £75 or £125, each with probability 1/2.

In the first case, after the second round, I’ll end up with either £56.25 or £93.75, each with probability 1/2. And the average of these is £75.

In the second case, after the second round, I’ll end up with either £93.75 or £156.25, each with probability 1/2. And the average of these is £125.

And if I average these averages I get £100. This is the law of iterated expectations at work. I’d get exactly the same answer if I averaged the four possible 2-round outcomes: £56.25, £93.75 (twice) and £156.25.

Check:

\frac{56.25 + 93.75 + 93.75 + 156.25}{4} = 100

So, my average after the second round is equal to the average after the first which was equal to the initial £100.

The same argument also applies at any round: the average is equal to the average of the previous round. Which in turn was equal to the average of the previous round. And so on, telescoping all the way back to the initial value of £100.
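
In symbols: writing M_n for the amount of money after n rounds, so that M_0 = 100, the law of iterated expectations says that

E(M_{n+1}) = E\left\{ E(M_{n+1} \mid M_n) \right\} = E(M_n)

and applying this repeatedly gives E(M_{1000}) = E(M_{999}) = \ldots = E(M_0) = 100.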

So, despite the sleight-of-hand, the result is actually true, and this is precisely what Freddy had hypothesised. As explained above, his only ‘mistake’ was to observe that a small number of simulations suggested a quite different behaviour, and to assume that this meant his mathematical reasoning was wrong.

 

Freddy’s story: part 1

This is a great story with a puzzle and an apparent contradiction at the heart of it, that you might like to think about yourself.

A couple of weeks ago Freddy.Teuma@smartodds.co.uk wrote to me to say that he’d been looking at the recent post which discussed a probability puzzle based on coin tossing, and had come across something similar that he thought might be useful for the blog. Actually, the problem Freddy described was based on an algorithm for optimisation using genetic mutation techniques, that a friend had contacted him about.

To solve the problem, Freddy did four smart things:

  1. He first simplified the problem to make it easier to tackle, while still maintaining its core elements;
  2. He used intuition to predict what the solution would be;
  3. He supported his intuition with mathematical formalism;
  4. He did some simulations to verify that his intuition and mathematical reasoning were correct.

This is exactly how a statistician would approach both this problem and problems of greater complexity.

However… the pattern of results Freddy observed in the simulations contradicted what his intuition and mathematics had suggested would happen, and so he adjusted his beliefs accordingly. And then he wrote to me.

This is the version of the problem that Freddy had simplified from the original…

Suppose you start with a certain amount of money. For argument’s sake, let’s say it’s £100. You then play several rounds of a game. At each round the rules are as follows:

  1. You toss a fair coin (Heads and Tails each have probability 1/2).
  2. If the coin shows Heads, you lose a quarter of your current amount of money and end up with 3/4 of what you had at the start of the round.
  3. If the coin shows Tails, you win a quarter of your current amount of money and end up with 5/4 of what you had at the start of the round.

For example, suppose your first 3 tosses of the coin are Heads, Tails, Heads. The money you hold goes from £100, to £75 to £93.75 to £70.3125.

Now, suppose you play this game for a large number of rounds. Again, for argument’s sake, let’s say it’s 1000 rounds. How much money do you expect to have, on average, at the end of these 1000 rounds?

Have a think about this game yourself, and see what your own intuition suggests before scrolling down.

|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|

Freddy’s reasoning was as follows. In each round of the game I will lose or gain 25% of my current amount of money with equal probability. So, if I currently have £100, then at the end of the next round I will have either £75 or £125 with equal probability. And the average is still £100. This reasoning is true at each round of the game. And so, after any number of rounds, including 1000, I’d expect to have exactly the same amount of money as when I started: £100.

But when Freddy simulated the process, he found a different sort of behaviour. In each of his simulations, the money held after 1000 rounds was very close to zero, suggesting that the average is much smaller than £100.

I’ve taken the liberty of doing some simulations myself: the pattern of results in 16 repeats of the game, each time up to 1000 rounds,  is shown in the following figure.

Each panel of the figure corresponds to a repeat of the game, and in each repeat I’ve plotted a red trace showing how much money I hold after each round of the game.  In each case you can see that I start with £100, there’s then a bit of oscillation – more in some of the realisations than in others, due to random variation – but in all cases the amount of money I have hits something very close to zero somewhere before 250 rounds and then stays there right up to 1000 rounds.

So, there is indeed a conflict between Freddy’s intuition and the picture that these simulations provide.

What’s going on?

I’ll leave you to think about it for a while, and write with my own explanation and discussion of the problem in a future post. If you’d like to write to me to explain what you think is happening, I’d be very happy to hear from you.

Obviously, I’m especially grateful to Freddy for having sent me the problem in the first place, and for agreeing to let me write a post about it.


Update: if you’d like to run the simulation exercise yourself, just click the ‘run’ button in the following window. This will simulate the game for 1000 rounds, starting with £100. The graph will show you how much money you hold after each round of the game, while if you toggle to the console window it will tell you how much money you have after the 1000th round (to the nearest £0.01). This may not work in all browsers, but seems to work ok in Chrome. You can repeat the experiment simply by clicking ‘Run’ again. You’re likely to get a different graph each time because of the randomness in the simulations. But what about the final amount? Does that also change? And what does it suggest about Freddy’s reasoning that the average amount should stay equal to £100?

game_sim <- function(n_rounds = 1000, money_start = 100){
  require(ggplot2)
  money <- c()
  money[1] <- money_start
  for(i in 2:n_rounds){
    money[i] <- money[i - 1] * sample(c(0.75, 1.25), 1)
  }
  m <- data.frame(round = 1:n_rounds, money = money)
  cat('Money in pounds after ', n_rounds, ' rounds is ', round(money[n_rounds], 2))
  ggplot(aes(x = round, y = money), data = m) +
    geom_line(color = 'red') +
    ggtitle('Money')
}
game_sim()

Taking things to extremes

One of the themes I’ve tried to develop in this blog is the connectedness of Statistics. Many things which seem unrelated, turn out to be strongly related at some fundamental level.

Last week I posted the solution to a probability puzzle that I’d posted previously. Several respondents to the puzzle, including Olga.Turetskaya@smartodds.co.uk, included the logic they’d used to get to their answer when writing to me. Like the others, Olga explained that she’d basically halved the number of coins in each round, till getting down to (roughly) a single coin. As I explained in last week’s post, this strategy leads to an answer that is very close to the true answer.

Anyway, Olga followed up her reply with a question: if we repeated the coin tossing puzzle many, many times, and plotted a histogram of the results – a graph which shows the frequencies of the numbers of rounds needed in each repetition – would the result be the typical ‘bell-shaped’ graph that we often find in Statistics, with the true average sitting somewhere in the middle?

Now, just to be precise, the bell-shaped curve that Olga was referring to is the so-called Normal distribution curve, that is indeed often found to be appropriate in statistical analyses, and which I discussed in another previous post. To answer Olga, I did a quick simulation of the problem, starting with both 10 and 100 coins. These are the histograms of the results.

So, as you’d expect, the average values (4.726 and 7.983 respectively) do indeed sit nicely inside the respective distributions. But, the distributions don’t look at all bell-shaped – they are heavily skewed to the right. And this means that the averages are closer to the lower end than the top end. But what is it about this example that leads to the distributions not having the usual bell-shape?

Well, the normal distribution often arises when you take averages of something. For example, if we took samples of people and measured their average height, a histogram of the results is likely to have the bell-shaped form. But in my solution to the coin tossing problem, I explained that one way to think about this puzzle is that the number of rounds needed till all coins are removed is the maximum of the number of rounds required by each of the individual coins. For example, if we started with 3 coins, and the number of rounds for each coin to show heads for the first time was 1, 4 and 3 respectively, then I’d have had to play the game for 4 rounds before all of the coins had shown a Head. And it turns out that the shape of distributions you get by taking maxima is different from what you get by taking averages. In particular, it’s not bell-shaped.
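
As a quick check on this maximum-of-waiting-times description, here's a minimal sketch in R (my own simulation, not the one used for the histograms above). The number of rounds needed by a single coin is a geometric random variable, and the number of rounds for the whole game is the maximum of these across coins; the simulated averages land close to the exact values quoted above, and the histograms come out right-skewed rather than bell-shaped.

set.seed(1)
# rgeom() counts the Tails before the first Head, so add 1 to get the round of the first Head
rounds_needed <- function(n_coins) max(rgeom(n_coins, prob = 0.5) + 1)
sims10  <- replicate(100000, rounds_needed(10))
sims100 <- replicate(100000, rounds_needed(100))
mean(sims10)    # close to 4.726
mean(sims100)   # close to 7.983
hist(sims10, breaks = seq(0.5, max(sims10) + 0.5), main = "Starting with 10 coins",
     xlab = "rounds needed")   # right-skewed, not bell-shaped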

But is this ever useful in practice? Well, the Normal bell-shaped curve is somehow the centrepiece of Statistics, because averaging, in one way or another, is fundamental in many physical processes and also in many statistical operations. And in general circumstances, averaging will lead to the Normal bell-shaped curve.

Consider this though. Suppose you have to design a coastal wall to offer protection against sea levels. Do you care what the average sea level will be? Or you have to design a building to withstand the effects of wind. Again, do you care about average winds? Almost certainly not. What you really care about in each case will be extremely large values of the process: high sea-levels in one case; strong winds in the other. So you’ll be looking through your data to find the maximum values – perhaps the maximum per year – and designing your structures to withstand what you think the most likely extreme values of that process will be.

This takes us into an area of statistics called extreme value theory. And just as the Normal distribution is used as a template because it’s mathematically proven to approximate the behaviour of averages, so there are equivalent distributions that apply as templates for maxima. And what we’re seeing in the above graphs – precisely because the data are derived as maxima – are examples of this type. So, we don’t see the Normal bell-shaped curve, but we do see shapes that resemble the templates that are used for modelling things like extreme sea levels or wind speeds.

So, our discussion of techniques for solving a simple probability puzzle with coins, leads us into the field of extreme value statistics and its application to problems of environmental engineering.

But has this got anything to do with sports modelling? Well, the argument about taking the maximum of some process applies equally well if you take the minimum. And, for example, the winner of an athletics race will be the competitor with the fastest – or minimum – race time. Therefore the models that derive from extreme value theory are suitable templates for modelling athletic race times.

So, we moved from coin tossing to statistics for extreme weather conditions to the modelling of race times in athletics, all in a blog post of less than 1000 words.

Everything’s connected and Statistics is a very small world really.

Heads up


I recently posted a problem that had been shown to me by Benoit.Jottreau@smartodds.co.uk. Basically, you have a bunch of coins. You toss them and remove the ones that come up heads. You then repeat this process over and over till all the coins have been removed. The question was, if you start with respectively 10 or 100 coins, how many rounds of this game does it take on average till all the coins have been removed?

I’m really grateful to all of you who considered the problem and sent me a reply. The answers you sent me are summarised in the following graphs.

 

 

The graph on the left shows the counts of the guesses for the number of rounds needed when starting with 10 coins; the one on the right shows the counts when starting with 100 coins. The main features are as follows:

  • Starting with 10 coins, the most popular answer was 4 rounds; with 100 coins the most popular answer was either 7 or 8 rounds.
  • Almost everyone gave whole numbers as their answers. This wasn’t necessary. Even though the result of every experiment has to be a whole number, the average doesn’t. In a similar way, the average number of goals in a football match is around 2.5.
  • The shape of the distribution of answers for the two experiments is much the same: heavily skewed to the right. This makes sense given the nature of the experiment: we can be pretty sure a minimum number of rounds will be needed, but less sure about the maximum. This is reflected in your collective answers.
  • Obviously, with more coins, there’s more uncertainty about the answer, so the spread of values is much greater when starting with 100 coins.

Anyway, I thought the replies were great, and much better than I would have come up with myself if I’d just gone with intuition instead of solving the problem mathematically.

A few people also kindly sent me the logic they used to get to these answers. And it goes like this…

Each coin will come up heads or tails with equal probability. So, the average number of coins that survive each round is half the number of coins that enter that round. This is perfectly correct. So, for example, when starting with 10 coins, the average number of coins remaining after the first round is 5. By the same logic, the average remaining after the second round is 2.5, after the third round it’s 1.25, and after the fourth round it’s 0.625. So, the first time the average number of coins drops below 1 is after the fourth round, and it’s therefore reasonable to assume 4 is the average number of rounds for all the coins to be removed.

Applying the same logic but starting with 100 coins, it takes 7 rounds for the average number of coins to fall below 1.
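
That halving argument is easy to turn into a few lines of R, if you want to check the 4 and the 7 (just a sketch of the reasoning above, nothing more):

halving_rounds <- function(n_coins) {
  rounds <- 0
  while (n_coins >= 1) {      # keep halving until the average falls below one coin
    n_coins <- n_coins / 2
    rounds  <- rounds + 1
  }
  rounds
}
halving_rounds(10)    # 4
halving_rounds(100)   # 7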

With a slight modification to the logic, to always round to whole numbers, you might get slightly different answers: say 5 and 8 instead of 4 and 7. And looking at the answers I received, I guess most respondents applied an argument of this type.

This approach is really great, since it shows a good understanding of the main features of the process: 50% of coins dropping out, on average, at each round of the game. And it leads to answers that are actually very informative: knowing that I need 4 rounds before the average number of coins drops below 1 is both useful and very precise in terms of explaining the typical behaviour of this process.

However… you don’t quite get the exact answers for the average number of rounds, which are 4.726 when you start with 10 coins, and 7.983 when you start with 100 coins. But where do these numbers come from, and why doesn’t the simple approach of dividing by 2 until you get below 1 work exactly?

Well, as I wrote above, starting with 10 coins you need 4 rounds before the average number of coins falls below 1. But this is a statement about the average number of coins. The question I actually asked was about the average number of rounds. Now, I don’t want to detract from the quality of the answers you gave me. The logic of successively dividing by 2 till you get below one coin is great, and as I wrote above, it will give an answer that is meaningful in its own right, and likely to be close to the true answer. But, strictly speaking, it’s focussing on the wrong aspect of the problem: the number of coins instead of the number of rounds.

The solution is quite technical. Not exactly rocket science, but still more intricate than is appropriate for this blog. But you might still find it interesting to see the strategy which leads to a solution.

So, start by considering just one of the coins. Its pattern of results (writing H for Heads and T for Tails) will be something like

  • T, T, H; or
  • T, T, T, H; or
  • H

That’s to say, a sequence of T’s followed by H (or just H if we get Heads on the first throw).

But we’ve seen something like this before. Remember the post Get out of jail? We kept rolling a dice until we got the first 6, and then stopped. Well, this is the same sort of experiment, but with a coin. We keep tossing the coin till we get the first Head. Because of the similarity between these experiments, we can apply the same technique to calculate the probabilities for the number of rounds needed to get the first Head for this coin. One round will have probability 1/2, two rounds 1/4, three rounds 1/8 and so on.

Now, looking at the experiment as a whole, we have 10 (or 100) coins, each behaving the same way. And we repeat the experiment until all of the coins have shown heads for the first time. What this means is that the total number of rounds needed is the maximum of the number of rounds for each of the individual coins. It turns out that this gives a simple method for deriving a formula that gives the probabilities of the number of rounds needed for all of the coins to be removed, based on the probabilities for a single coin already calculated above.

So, we now have a formula for the probabilities of the numbers of rounds needed. And there’s a standard formula for converting these probabilities into the average. It’s not immediately obvious when you see it, but with a little algebraic simplification it turns out that you can get the answer in fairly simple mathematical form. Starting with n coins – we had n=10 and n=100 – the average number of rounds needed turns out to be

1+\sum_{k=1}^n(-1)^{k-1}{n \choose k} (2^k-1)^{-1}

The \sum bit means do a sum, and the {n \choose k} term is the number of unique combinations of k objects chosen from n. But don’t worry at all about this detail; I’ve simply included the answer to show that there is a formula which gives the answer.

With 10 coins you can plug n=10 into this expression to get the answer 4.726. With 100 coins there are some difficulties, since the terms involving {n \choose k} with n=100 become enormous and the alternating sum is numerically unstable. But accurate approximations to the solution are available, which don’t suffer the same numerical stability problems, and we get the answer 7.983.
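
For what it's worth, here's a small R sketch of both calculations. The first function is the formula above; the second is one way of avoiding the numerical problems (not necessarily the approximation used for the 7.983 figure): it sums the probabilities that more than j rounds are needed, over j = 0, 1, 2, …, which gives the same average without any large alternating terms.

avg_rounds_formula <- function(n) {
  k <- 1:n
  1 + sum((-1)^(k - 1) * choose(n, k) / (2^k - 1))
}
avg_rounds_stable <- function(n, max_j = 200) {
  j <- 0:max_j
  sum(1 - (1 - 2^(-j))^n)   # sum of P(more than j rounds needed)
}
avg_rounds_formula(10)   # 4.726
avg_rounds_stable(10)    # the same answer
avg_rounds_stable(100)   # 7.983 (the alternating sum breaks down here)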

So, in summary, with a fair bit of mathematics you can get exact answers to the problem I set. But much more importantly, with either good intuition or sensible reasoning you can get answers that are very similar. This latter skill is much more useful in Statistics generally, and it’s fantastic that the set of replies I received showed collective strength in this respect.

Heads dropping


Here’s a fun probability problem that Benoit.Jottreau@smartodds.co.uk showed me. If you’re clever at probability, you might be able to solve it exactly; otherwise it’s easy to simulate. But as with previous problems of this type, I think it’s more interesting to find out what you would guess the answer to be, without thinking about it too deeply.

So, suppose you’ve got 10 coins. They’re fair coins, in the sense that if you toss any of them, they’re equally likely to come up heads or tails. You toss all 10 coins. You then remove the ones that come up heads. The remaining ones – the ones that come up tails – you toss again in a second round. Again, you remove any that come up heads, and toss again the ones that come up tails in a third round. And so on. In each round, you remove the coins that come up heads, and toss again the coins that come up tails. You stop once all of the coins have been removed.

The question: on average, how many rounds of this game do you need before all of the coins have been removed?

There are different mathematical ways of approaching this problem, but I’m not really interested in those. I’m interested in how good we are, collectively, at using our instincts to guess the solution to a problem of this type. So, I’d really appreciate it if you’d send me your best guess.

Actually, let’s make it a little more interesting. Can you send me an answer to a second question as well?

Second question: same game as above, but starting with 100 coins. This time, on average, how many rounds do you need before all of the coins have been removed?

Please send your answers to me directly or via this survey form.

I’ll discuss the answers you (hopefully) send me, and the problems themselves in more detail, in a subsequent post.

Please don’t fill out the survey if you solved the problem either mathematically or by simulation, though if you’d like to send me your solutions in either of those cases, I’d be very happy to look at them and discuss them with you.

 

Needles, noodles and 𝜋

A while back, on Pi Day, I sent a post celebrating the number 𝜋 and mentioned that though 𝜋 is best known for its properties in connection with the geometry of a circle, it actually crops up all over the place in mathematics, including Statistics.

Here’s one famous example…

Consider a table covered with parallel lines like in the following figure.

For argument’s sake, let’s suppose the lines are 10 cm apart. Then take a bunch of needles – or matches, or something similar – that are 5 cm in length, drop them randomly onto the table, and count how many intersect one of the lines on the table. Let’s suppose there are N needles and m of them intersect one of the lines. It turns out that N/m will be approximately 𝜋, and that the approximation is likely to improve if we repeat the experiment with a bigger value of N.

What this means in practice is that we have a statistical way of calculating 𝜋. Just do the experiment described above, and as we get through more and more needles, so the calculation of N/m is likely to lead to a better and better approximation of 𝜋.

There are various apps and so on that replicate this experiment via computer simulation, including this one, which is pretty nice. The needles which intersect any of the lines are shown in red; the others remain blue. The ratio N/m is shown in real-time, and if you’re patient enough it should get closer to the true value of 𝜋, the longer you wait. The approximation is also shown geometrically – the ratio N/m is very close to the ratio of a circle’s circumference to its diameter.

One important point though: the longer you wait, the greater will be the tendency for the approximation N/m to improve. However,  because of random variation in individual samples, it’s not guaranteed to always improve. For a while, the approximation might get a little worse, before inevitably (but perhaps slowly) starting to improve again.

In actual fact, there’s no need for the needles in this experiment to be half the distance between the lines. Suppose the ratio of the needle length to the line separation is r; then 𝜋 is approximated by

\hat{\pi} = \frac{2rN}{m}

In the simpler version above, r=1/2, which leads to the above result

\hat{\pi} = \frac{N}{m}

Now, although Buffon’s needle provides a completely foolproof statistical method of calculating 𝜋, it’s a very slow procedure. You’re likely to need very many needles to calculate 𝜋 to any reasonable level of accuracy. (You’re likely to have noticed this if you looked at the app mentioned above). And this is true of many statistical simulation procedures: the natural randomness in experimental data means that very large samples may be needed to get accurate results. Moreover, every time you repeat the experiment, you’re likely to get a different answer, at least to some level of accuracy.
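
If you don't fancy dropping a million real needles, here's a minimal sketch in R of the same experiment, with 5 cm needles and lines 10 cm apart (so r = 1/2). One caveat: the simulation uses a uniformly distributed angle, which quietly involves 𝜋 itself, so it's an illustration of the slow convergence rather than a genuinely 𝜋-free calculation.

set.seed(1)
N <- 1e6
l <- 5; t <- 10                       # needle length and line separation, so r = l/t = 1/2
d     <- runif(N, 0, t / 2)           # distance from each needle's centre to the nearest line
theta <- runif(N, 0, pi / 2)          # acute angle between each needle and the lines
m <- sum(d <= (l / 2) * sin(theta))   # needles that cross a line
N / m                                 # approximately pi, but converging slowly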

Anyway… Buffon’s needle takes its name from Georges-Louis Leclerc, Comte de Buffon, a French mathematician in the 18th century who first posed the question of what the probability would be for a needle thrown at random to intersect a line. And Buffon’s needle is a pretty well-known problem in probability and Statistics.

Less well-known, and even more remarkable, is Buffon’s noodle problem. Suppose the needles in Buffon’s needle problem are allowed to be curved. So rather than needles, they are noodles(!) We drop N noodles – of possibly different shapes, but still 5 cm in length – onto the table, and count the total number of times the noodles cross a line on the table. Because of the curvature of the noodles, it’s now possible that a single noodle crosses a line more than once, so m is now the total number of line crossings, where the contribution from any one noodle might be 2 or more. Remarkably, it turns out that despite the curvature of the noodles and despite the fact that individual noodles might have multiple line crossings, the ratio N/m still provides an approximation to 𝜋 in exactly the same way it did for the needles.

This result for Buffon’s noodle follows directly from that of Buffon’s needle. You might like to try to think about why that is so. If not, you can find an explanation here.


Finally, a while back, I sent a post about Mendelian genetics. In it I discussed how Mendel used a statistical analysis of pea experiments to develop his theory of genetic inheritance. I pointed out, though, that while the theory is undoubtedly correct, Mendel’s statistical results were almost certainly too good to be true. In other words, he’d fixed his results to get the experimental results which supported his theory. Well, there’s a similar story connected to Buffon’s needle.

In 1901, an Italian mathematician, Mario Lazzarini, carried out Buffon’s needle experiment with a ratio of r=5/6. This seems like a strangely arbitrary choice. But as explained in Wikipedia, it’s a choice which enables the approximation of 355/113, which is well-known to be an extremely accurate fractional approximation for 𝜋. What’s required to get this result is that in a multiple of 213 needle throws, the same multiple of 113 needles intersect a line. In other words, 113 intersections when throwing 213 needles. Or 226 when throwing 426. And so on.

So, one explanation for Lazzarini’s remarkably accurate result is that he simply kept repeating the experiment in multiples of 213 throws until he got the answer he wanted, and then stopped. Indeed, he reported a value of N=3408, which happens to be 16 times 213. And in those 3408 throws, he reportedly got 1808 line intersections, which happens to be 16 times 113.
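
Plugging Lazzarini's reported numbers into the formula above, with r = 5/6, shows how neatly they deliver the famous fraction:

2 * (5 / 6) * 3408 / 1808   # 3.14159292...
355 / 113                   # exactly the same value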

An alternative explanation is that Lazzarini didn’t do the experiment at all, but pretended he did with the numbers chosen as above so as to force the result to be the value that he actually wanted it to be. I know that doesn’t seem like a very Italian kind of thing to do, but there is some circumstantial evidence that supports this possibility. First, as also explained in Wikipedia:

A statistical analysis of intermediate results he reported for fewer tosses leads to a very low probability of achieving such close agreement to the expected value all through the experiment.

Second, Lazzarini reportedly described a physical machine that he used to carry out the experimental needle throwing. However, a basic study of the design of this machine shows it to be impossible from an engineering point of view.

So, like Mendel, it’s rather likely that Lazzarini invented some data from a statistical experiment just to get the answer that he was hoping to achieve. And the moral of the story? If you’re going to make evidence up to ‘prove’ your answer, build a little bit of statistical error into the answer itself, otherwise you might find statisticians in 100 years’ time proving (beyond reasonable doubt) you cheated.

Love Island

A while back Harry.Hill@smartodds.co.uk gave a talk to the (then) quant team about trading strategies. The general issue is well-known: traders have to decide when to place a bet. Generally speaking they can place a bet early, when the price – the amount you get if you win the bet – is likely to be reasonably attractive. But in that case the liquidity of the market – the amount of money you can bet against – is likely to be low. Or they can wait until there is greater liquidity, but then the price is likely to be less attractive. So, given the option of a certain bet size at a stated price, should they bet now or wait in the hope of being able to make a bigger bet, albeit at a probably poorer price?

In general this is a difficult problem to tackle, and to make any sort of progress some assumptions have to be made about the way both prices and liquidity are likely to change as kick-off approaches. And Harry was presenting some tentative ideas, and pointing out some relevant research, that might enable us to get a handle on some of these issues.

Anyway, one of the pieces of work Harry referred to is a paper by F. Thomas Bruss, which includes the following type of example. You play a game in which you can throw a dice (say) 10 times. Each time you throw a 6 you can choose to stop there, or carry on. But here’s the catch: you only win if you stop on a 6 and it turns out to be the final 6 in the sequence of 10 throws.

So, suppose you throw a 6 on the 3rd roll; should you stop? How about the 7th roll? Or the 9th? You can maybe see the connection with the trading issue: both problems require us to choose whether to stop or continue, based on an evaluation of the risk of what will subsequently occur.

Fast-forward a few days after Harry’s talk and I was reading Alex Bellos’s column in the Guardian. Alex is a journalist who writes about both football and mathematics (and sometimes both at the same time). His bi-weekly contributions to the Guardian take the form of mathematically-based puzzles. These puzzles are quite varied, covering everything from logic to geometry to arithmetic and so on. And sometimes even Statistics. Anyway, the puzzle I was reading after Harry’s talk is here. If you have time, take a read. Otherwise, here’s a brief summary.

It’s a basic version of Love Island. You have to choose from 3 potential love partners, but you only see them individually and sequentially. You are shown the first potential partner, and can decide to keep them or not. If you keep them, everything stops there. Otherwise you are shown the second potential partner. Again, you have to stick or twist: you can keep them, or you reject and are shown the third possibility. And in that case you are obliged to stick with that option.

In summary: once you stick with someone, that’s the end of the game. But if you reject someone, you can’t go back to them later. The question is: what strategy should you adopt in order to maximise the chances of choosing the person that you would have picked if you had seen all 3 at the same time?

Maybe have a think about this before reading on.

|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|

As well as giving a clearer description of the problem, Alex’s article also contains a link to his discussion of the solution. But what’s interesting is that it’s another example of an optimal stopping problem: once we’ve seen a new potential partner, and also previous potential partners, we have to make a decision on whether to stop with what we currently have, or risk trying to get an improvement in the future, knowing that we could also end up with something/someone worse. And if we can solve the problem for love partners, we are one step towards solving the problem for traders as well.


The Love Island problem discussed by Alex is actually a special case of The Secretary Problem.  A company needs to hire a secretary and does so by individual interviews. Once they’ve conducted an interview they have to hire or reject that candidate, without the possibility of returning to him/her once rejected. What strategy should they adopt in order to try to get the best candidate? In the Love Island version, there are just 3 candidates; in the more general problem, there can be any number. With 3 choices, and a little bit of patience, you can probably find the solution yourself (or follow the links towards Alex’s discussion of the solution). But how about if you had 1000 possible love partners? (Disclaimer: you don’t).

Actually, there is a remarkably simple solution to this problem whatever the number of options to choose from: whether it’s 3, 1000, 10,000,000 or whatever. Let this number of candidates be N. Then reject all candidates up to the M’th, for some value of M, but keep note of the best candidate, C say, among those M options. Then accept the first subsequent candidate who is better than C (or the last candidate if none happens to be better).

But how to choose M? Well, even more remarkably, it turns out that if N is reasonably large, the best choice for M is around N/e, where e \approx 2.718 is a number that crops up a lot in mathematics. For N=1000 candidates, this means rejecting the first 368 and then choosing the first that is better than the best of those. And one more remarkable thing about this result: the probability that the candidate selected this way is actually the best out of all the available candidates is 1/e, or approximately 37%, regardless of the value of N. 

With N=3, the value of N is too small for this approximate calculation of M to be accurate, but if you calculated the solution to the problem – or looked at Alex’s – you’ll see that the solution is precisely of this form, with M=1 (reject just the first candidate, then take the first who is better) and a probability of 50% of picking the best candidate overall.
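
If you'd rather not do the probability calculations, the rule is easy to simulate. Here's a minimal sketch in R (candidates are represented by a random permutation, with the largest value being the best): it recovers the 50% for N=3 with M=1, and roughly 37% for N=1000 with M=368.

set.seed(1)
pick_best <- function(N, M) {
  quality <- sample(N)                  # candidates in interview order; value N is the best
  best_rejected <- max(quality[1:M])    # best of the first M, all rejected
  later <- quality[(M + 1):N]
  pick <- later[which(later > best_rejected)[1]]
  if (is.na(pick)) pick <- later[length(later)]   # nobody better turned up: take the last
  pick == N                             # did we end up with the overall best?
}
mean(replicate(100000, pick_best(3, 1)))      # about 0.50
mean(replicate(20000, pick_best(1000, 368)))  # about 0.37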

Anyway, what I really like about all this is the way things that are apparently unconnected – Love Island, choosing secretaries, trading strategies – are fundamentally linked once you formulate things in statistical terms. And even if the solution in one of the areas is too simple to be immediately transferable to another, it might at least provide useful direction. 

The bean machine

Take a look at the following video…

It shows the operation of a mechanical device that is variously known as a bean machine, a quincunx or a Galton board. When the machine is flipped, a large number of small balls or beans fall through a funnel at the top of the device. Below the funnel is a layered grid of pegs. As each bean hits a peg it can fall left or right – with equal probability if the board is carefully made – down to the next layer, where it hits another peg and can again go left or right. This repeats for a number of layers, and the beans are then collected in groups, according to the position they fall in the final layer. At the end you get a kind of physical histogram, where the height of the column of beans corresponds to the frequency with which the beans have fallen in that slot.

Remarkably, every time this experiment is repeated, the pattern of beans at the bottom is pretty much the same: it’s symmetric, high in the middle, low at the edges and has a kind of general bell-shape. In fact, the shape of this histogram will be a good approximation to the well-known normal distribution curve:

As you probably know, it turns out that the relative frequencies of many naturally occurring phenomena look exactly like this normal curve: heights of plants, people’s IQ, brightness of stars…. and indeed (with some slight imperfections) the differences in team points in sports like basketball.

Anyway, if you look at the bottom of the bean machine at the end of the video, you’ll see that the heights of the columns of beans – which in itself represents the frequency of beans falling in each position – resembles this same bell-shaped curve. And this will happen – with different small irregularities – every time the bean machine is re-started.

Obviously, just replaying the video will always lead to identical results, so you’ll have to take my word for it that the results are similar every time the machine is operated. There are some simulators available, but my feeling is you lose something by not seeing the actual physics of real-world beans falling into place. Take a look here if you’re interested, though I suggest you crank the size and speed buttons up to their maximum values first.

But why should it be that the bean machine, like many naturally occurring phenomena, leads to frequencies that closely match the normal curve?

Well, the final position of each bean is the result of several random steps in which the bean could go left or right. If we count +1 every time the bean goes right and -1 every time the bean goes left, then the final position is the sum of these random +/-1 outcomes. And it turns out that, under fairly general conditions, whenever you have a process that is the sum of several random experiments, the final distribution is bound to look like this bell-shaped normal curve.

This is a remarkable phenomenon. The trajectory of any individual bean is unpredictable. It could go way to the left, or way to the right, though it’s more likely that it will stay fairly central. Anything is possible, though some outcomes are more likely than others. However, while the trajectory of individual beans is unpredictable, the collective behaviour of several thousand beans is entirely predictable to a very high degree of accuracy: the frequencies within any individual range will match very closely the values predicted by the normal distribution curve. This is really what makes statistics tick. We can predict very well how a population will behave, even if we can’t predict how individuals will behave.
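
Here's a minimal sketch in R of that sum-of-steps description (a simulation of an idealised board, not of the physical machine in the video): each bean takes 20 left/right steps of -1 or +1, and the histogram of final positions comes out bell-shaped.

set.seed(1)
n_beans <- 10000
layers  <- 20
final_position <- replicate(n_beans, sum(sample(c(-1, 1), layers, replace = TRUE)))
hist(final_position, breaks = seq(-layers - 1, layers + 1, by = 2),
     main = "Simulated bean machine", xlab = "final position")   # close to the normal curve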

Even more remarkably, if the bean machine has enough layers of pegs, the eventual physical histogram of beans will still look like the normal distribution curve, even if the machine has some sort of bias. For example, suppose the beans were released, but that the machine wasn’t quite vertical, so that the beans had a higher tendency to go left, rather than right, when they hit a peg. In this case, as long as there were sufficiently many layers of pegs, the final spread of beans would still resemble the normal curve, albeit no longer centred at the middle of the board. You can try this in the simulator by moving the left/right button away from 50%.

Technically, the bean machine is a physical illustration of a mathematical result generally termed the Central Limit Theorem. This states that in situations like those illustrated by the bean machine, where a phenomenon can be regarded as a sum of random experiments, then under general conditions the distribution of final results will look very much like the well-known bell-shaped normal curve.

It’s difficult to overstate the importance of this result – which is fundamental to almost all areas of statistical theory and practice – since it lets us handle probabilities in populations, even when we don’t know how individuals behave. And the beauty of the bean machine is that it demonstrates that the Central Limit Theorem is meaningful in the real physical world, and not just a mathematical artefact.


Can’t live without your own desktop bean machine? I have good news for you…

 

Pulp Fiction (Our Esteemed Leader’s cut)

The previous post had a cinematic theme. That got me remembering an offsite a while back where Matthew.Benham@smartbapps.co.uk gave a talk that I think he called ‘Do the Right Thing’, which is the title of a 1989 Spike Lee film. Midway through his talk Matthew gave a premiere screening of his own version of a scene from Pulp Fiction. Unfortunately, I’ve been unable to get hold of a copy of Matthew’s cut, so we’ll just have to make do with the inferior original….

The theme of Matthew’s talk was the importance of always acting in relation to best knowledge, even if it contradicts previous actions taken when different information was available. So, given the knowledge and information you had at the start of a game, you might have bet on team A. But if the game evolves in such a way that a bet on team B becomes positive value, you should do that. Always do the right thing. And the point of the scene from Pulp Fiction? Don’t let pride get in the way of that principle.  

These issues will make a great topic for this blog sometime. But this post is about something else…

Dependence is a big issue in Statistics, and we’re likely to return to it in different ways in future posts. Loosely speaking, two events are said to be independent if knowing the outcome of one, doesn’t affect the probabilities of the outcomes of the other. For example, it’s usually reasonable to treat the outcomes of two different football matches taking place on the same day as independent. If we know one match finished 3-0, that information is unlikely to affect any judgements we might have about the possible outcomes of a later match. Events that are not independent are said to be dependent: in this case, knowing the outcome of one will affect the outcome of the other.  In tennis matches, for example, the outcome of one set tends to affect the chances of who will win a subsequent set, so set winners are dependent events. 

With this in mind, let’s follow-up the discussion in the previous 2 posts (here and here) about accumulator bets. By multiplying prices from separate bets together, bookmakers are assuming that the events are independent. But if there were dependence between the events, it’s possible that an accumulator offers a value bet, even if the individual bets are of negative value. This might be part of the reason why Mark Kermode has been successful in several accumulator bets over the years (or would have been if he’d taken his predictions to the bookmaker and actually placed an accumulator bet).

Let me illustrate this with some entirely made-up numbers. Let’s suppose ‘Pulp Fiction (Our Esteemed Leader’s cut)’, is up for a best movie award, and its upstart director, Matthew Benham, has also been nominated for best director. The numbers for single bets on PF and MB are given in the following table. We’ll suppose the bookmakers are accurate in their evaluation of the probabilities, and that they guarantee themselves an expected profit by offering prices that are below the fair prices (see the earlier post). 

                     True Probability   Fair Price   Bookmaker Price
Best Movie: PF              0.4             2.5             2
Best Director: MB           0.25            4               3.5

 

Because the available prices are lower than the fair prices and the probabilities are correct, both individual bets have negative value (-0.2 and -0.125 respectively for a unit stake). The overall price for a PF/MB accumulator bet is 7, which assuming independence is an even poorer value bet, since the expected winnings from a unit stake are

0.4 \times 0.25 \times 7 -1 = -0.3

However, suppose voters for the awards tend to have similar preferences across categories, so that if they like a particular movie, there’s an increased chance they’ll also like the director of that movie. In that case, although the table above might be correct, the probability of MB winning the director award if PF (MB cut) is the movie winner is likely to be greater than 0.25. For argument’s sake, let’s suppose it’s 0.5. Then, the expected winnings from a unit stake accumulator bet become

0.4 \times 0.5 \times 7 -1 = 0.4

That’s to say, although the individual bets are still both negative value, the accumulator bet is extremely good value. This situation arises because of the implicit assumption of independence in the calculation of accumulator prices. When the assumption is wrong, the true expected winnings will be different from those implied by the bookmaker prices, potentially generating a positive value bet.
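
Here's the same arithmetic as a few lines of R, using the made-up numbers from the table (the 0.5 is the assumed probability that MB wins the director award given that PF wins best movie):

p_movie    <- 0.4          # P(PF wins best movie)
p_director <- 0.25         # P(MB wins best director)
acc_price  <- 2 * 3.5      # product of the bookmaker prices

p_movie * p_director * acc_price - 1   # -0.3: poor value if the results were independent
p_movie * 0.5 * acc_price - 1          #  0.4: good value if P(MB wins | PF wins) = 0.5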

Obviously with most accumulator bets – like multiple football results – independence is more realistic, and this discussion is unhelpful. But for speciality bets like the Oscars, or perhaps some political bets where late swings in votes are likely to affect more than one region, there may be considerable value in accumulator bets if available.


If anyone has a copy of Our Esteemed Leader’s cut of the Pulp Fiction scene on a pen-drive somewhere, and would kindly pass it to me, I will happily update this post to include it. 

How to not win £194,375

 
In the previous post we looked at why bookmakers like punters to make accumulator bets: so long as a gambler is not smart enough to be able to make positive value bets, the bookmaker will make bigger expected profits from accumulator bets than from single bets. Moreover, even for smart bettors, if any of their individual bets are not smart, accumulator bets may also favour the bookmaker. 
 
With all this in mind, here’s a true story…
 
Mark Kermode is a well-known film critic, who often appears on BBC TV and radio. In the early 90s he had a regular slot on Danny Baker’s Radio 5 show, discussing recent movie releases etc. On one particular show early in 1992, chatting to Danny, he said he had a pretty good idea of how most of the important Oscars would be awarded that year. This was actually before the nominations had been made, so bookmaker prices on award winners would have been pretty good. And since Radio 5 was a predominantly sports radio station, Danny suggested Mark make a bet on the basis of his predictions.
 
Fast-forward a few months to the day after the Oscar awards and Danny asked Mark how his predictions had worked out. Mark explained that he’d bet on five of the major Oscar awards and they’d all won. Danny asked Mark how much he’d won and he replied that he’d won around £120 for a £25 stake.  Considering the difficulty in predicting five correct winners, especially before nominations had been made, this didn’t seem like much of a return, and Danny Baker was incredulous. He’d naturally assumed that Mark would have placed an accumulator bet with the total stake of £25, whereas what Mark had actually done was place individual bets of £5 on each of the awards. 
 
Now, I’ve no idea what the actual prices were, but since the bets were placed before the nominations were announced, it’s reasonable to assume that the prices were quite generous. For argument’s sake, let’s suppose the bets on each of the individual awards  had a price of 6. Mark then placed a £5 bet on each, so he’d have made a profit of £25 per bet, for an overall profit of £125. Now suppose, instead, he’d made a single accumulator bet on all 5 awards. In this case he’d have made a profit of
 
\pounds 25 \times 6 \times 6 \times 6 \times 6 \times 6 -\pounds 25 = \pounds 194,375
 
Again, I’ve no idea if these numbers are accurate or not, but you get the picture. Had Mark made the accumulator bet that Danny intended, he’d have made a pretty big profit. As it was, he won enough for a night out with a couple of mates at the cinema, albeit with popcorn included. 
 
Of course, the risk you take with an accumulator is that it just takes one bet to fail and you lose everything. By placing 5 single bets Mark would still have won £95 if one of his predictions had been wrong, and would even make a fiver if he got just one prediction correct. But by not accumulating his bets, he also avoided the possibility of winning £194,375 if all 5 bets came in. Which they did! 
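
For completeness, here's the arithmetic for the two staking plans in R, using the made-up price of 6 for each award:

stake <- 25; price <- 6
5 * (stake / 5) * (price - 1)   # five £5 singles, all winning: £125 profit
stake * price^5 - stake         # one £25 five-fold accumulator: £194,375 profit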
 
So, what’s the story here? Though an accumulator is a poor value bet for mug gamblers, it may be an extremely valuable bet for sharp gamblers, and the evidence suggests (see below) that Mark Kermode is sharper than the bookmakers for Oscar predictions.
 

 
Is Mark Kermode really sharper than the bookmakers for Oscar predictions? Well, here’s a list  of his predictions for the main 6 (not 5) categories for the years 2006-2017. Mark predicted all 6 categories with 100% accuracy twice in twelve years. I guess that these predictions weren’t always made before the nominations, so the prices are unlikely to be as good as in the example described above. But still, the price on a 6-fold accumulator will have been pretty good regardless. And he’d have won twice, in addition to the 1992 episode (and possibly more often in the intervening years for which I don’t have data). Remarkably, he would have won again in 2017 if the award for best movie had gone to La La Land, as was originally declared winner, rather than Moonlight, which was the eventual winner. 
 
Moral: try to find out Mark’s predictions for the 2019 Oscars and don’t make the mistake of betting singles!
 
And finally, here’s Mark telling the story of not winning something like £194,375 in his own words: