Problem solved

A while back I set a puzzle asking you to try to remove three coins from a red square region as shown in the following diagram.

The only rule of the game is that when a coin is removed it is replaced with two coins, one immediately to the right of and one immediately below the coin that is removed. If there is no space for adding these replacement coins, the coin cannot be removed.

The puzzle actually appeared in a recent edition of Alex Bellos’ Guardian mathematics puzzles, though it was created by the Argentinian mathematician Carlos Sarraute. This is his solution which is astonishing for its breathtaking ingenuity.

The solution starts by giving a value to every square in the grid as follows:

Remember, the grid goes on forever both to the right and downwards. The top left hand box has value 1. Going right from there, every subsequent square has value equal to 1/2 of the previous one. So: 1, 1/2, 1/4, 1/8 and so on. The first column is identical to the first row. To complete the second row, we start with the first value, 1/2, and again just keep multiplying by 1/2. The second column is the same as the second row. And we fill the entire grid this same way. Technically, every row and column is a geometric sequence: each term is the previous one multiplied by a common ratio, which in this case is 1/2.
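This value pattern is easy to check with a few lines of code. Here's a small sketch (my own, not part of the original solution), using Python's exact fractions and numbering rows and columns from 0 at the top left:

```python
from fractions import Fraction

def square_value(row, col):
    # Top-left square (0, 0) has value 1; each step right or down halves it.
    return Fraction(1, 2 ** (row + col))

# First row: 1, 1/2, 1/4, 1/8, ...; second row starts at 1/2 and halves again.
assert [square_value(0, c) for c in range(4)] == \
    [1, Fraction(1, 2), Fraction(1, 4), Fraction(1, 8)]
assert square_value(1, 0) == Fraction(1, 2)
```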

Let’s define the value of a coin to be the value of the square it’s on. Then the total value of the coins at the start of the game is

1 + \frac{1}{2} + \frac{1}{2}= 2

Now…

  • When we remove a coin we replace it with two coins, one immediately below and one immediately to the right. But the value of any square on the grid is equal to the sum of the values of the squares immediately below it and to its right. So when we remove a coin we replace it with two coins whose total value is the same. It follows that the total value of the coins stays unchanged, at 2, however many moves we make.
  • This is the only tricky mathematical part. Look at the first row of numbers. It consists of 1, 1/2, 1/4, 1/8… and goes on forever. But even though this is an infinite sequence it has a finite sum of 2. Obviously, we can never really add infinitely many numbers in practice, but by adding more and more terms in the series we will get closer and closer to the value of 2. Try it on a calculator. In summary:

1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} +\ldots  = 2.

  • Working down the rows, the second row is the same as the first with the first term removed. So its sum must be 1. The third is the same as the second with the first term of 1/2 removed, so its sum is 1/2. By the same reasoning, the sum of the fourth row will be 1/4, the fifth row 1/8 and so on.
  • So, the row sums are respectively 2, 1, 1/2, 1/4, …. This is the same as the first row of values with an additional first term of 2. It follows that the sum of the row sums, and therefore the sum of all numbers in the grid, is 2+2=4. Again, we can’t add all the numbers in practice, but we will get closer and closer to the value of 4 by adding more and more squares.
  • The total value of the squares inside the red square is 1 + 1/2 + 1/2 + 1/4 = 9/4. The total value outside this region must therefore be 4 - 9/4 = 7/4.
  • Putting all this together, the initial value of the coins was 2. After any number of moves, the total value of all coins will always remain 2. But the total value of all squares outside the red square is only 7/4. It must therefore be impossible to remove the three coins from the red square, because to do so would require the coins outside this area to have a total value of 2, which is greater than the 7/4 available in that entire region.
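The whole argument can be checked mechanically. Below is a short sketch (my own code, not from the original solution) using exact fractions, with rows and columns numbered from 0 at the top left so a coin on square (r, c) has value (1/2)^(r+c):

```python
from fractions import Fraction

def value(r, c):
    # Value of the square in row r, column c.
    return Fraction(1, 2 ** (r + c))

def remove_coin(coins, r, c):
    # A legal move replaces coin (r, c) by coins below it and to its right.
    below, right = (r + 1, c), (r, c + 1)
    if (r, c) not in coins or below in coins or right in coins:
        return None  # no such coin, or no space: move not allowed
    return (coins - {(r, c)}) | {below, right}

total = lambda coins: sum(value(r, c) for r, c in coins)

coins = {(0, 0), (0, 1), (1, 0)}   # the starting position
assert total(coins) == 2           # initial value is 2
coins = remove_coin(coins, 1, 0)   # make one legal move...
assert total(coins) == 2           # ...and the total value is unchanged

# The four squares of the red region are worth 9/4; the rest of the
# grid (the whole grid is worth 4) is therefore worth only 7/4 < 2.
red = sum(value(r, c) for r in range(2) for c in range(2))
assert red == Fraction(9, 4)
```

However many legal moves you chain together, the first assertion pattern holds, which is exactly the invariance the proof relies on.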

I find this argument quite brilliant. My instincts were to try to solve the puzzle using arguments from geometry. I failed. It would never have occurred to me to try to develop a solution based on the properties of numbers.


As I wrote in the original post, this puzzle doesn’t really have any direct relevance to Statistics except insofar as it shows the power and beauty of mathematical proof, which is an essential part of statistical theory. Having said that, the idea of infinite limits is important in Statistics, and I’ll discuss this in a further post. Let me just mention though that summing infinite series as in the solution above is a delicate issue for at least two reasons:

  1. Although the series 1 + 1/2 + 1/4 + 1/8 + …. has a finite sum of 2, the series 1 + 1/2 + 1/3 + 1/4 + 1/5 + …. (known as the harmonic series) has no finite sum. The sum grows very slowly, but as I take more and more numbers in the series, the sum grows without any limit. That’s to say, if you give me any number – say 1 million – I can always find enough terms in the series for the sum to be greater than that number.
  2. To get the total value of the grid, we first added the rows and then added these row sums across the columns. We could alternatively have first added the columns, and then added these columns sums across the rows and we’d have got the same answer. For this example both these alternatives are valid. But in general this interchange of row and column sums to get the total sum is not valid. Consider, for example, this infinite grid:

The first row sums to 2, after which all other rows sum to zero. So, the sum of the row sums is 2. But looking at the columns, every column sums to zero. So, if we sum the columns and then sum these sums we get 0. This couldn’t possibly happen with finite grids, but infinite grids require a lot more care.
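The original diagram isn't reproduced here, but here is one concrete grid with exactly these properties (my own construction, not necessarily the one in the diagram): row 0 is the geometric row 1, 1/2, 1/4, …, and each later row contains just two cancelling entries, placed so that every column also cancels exactly. A quick check in Python:

```python
from fractions import Fraction

def entry(i, j):
    # Row 0 is the geometric row 1, 1/2, 1/4, ... (it sums to 2).
    if i == 0:
        return Fraction(1, 2 ** j)
    # Row i >= 1 holds just two cancelling entries, -c_i and +c_i,
    # with c_i = 2 - (1/2)^(i-1) chosen so every column telescopes to 0.
    c = 2 - Fraction(1, 2 ** (i - 1))
    if j == i - 1:
        return -c
    if j == i:
        return c
    return Fraction(0)

N = 30
# Each column has at most three nonzero entries, and they cancel exactly:
col_sums = [sum(entry(i, j) for i in range(N + 2)) for j in range(N)]
assert all(s == 0 for s in col_sums)
# Each row below the first sums exactly to zero, while row 0 tends to 2:
row_sums = [sum(entry(i, j) for j in range(N + 2)) for i in range(N)]
assert all(s == 0 for s in row_sums[1:])
assert abs(row_sums[0] - 2) < Fraction(1, 2 ** N)
```

Summing rows first gives 2; summing columns first gives 0, which is the order-of-summation trap described above.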

In a follow-up post we’ll consider limits of sums in the context of Statistics.


Finally, I’m grateful to Fabian.Thut@smartodds.co.uk for some follow-up discussion on the original post. In particular, we ended up discussing the following variation on the original puzzle. The rules are exactly the same as before, but the starting configuration of the coins is now as per the following diagram:

In this case, can the puzzle be solved? Does the argument presented for the original problem help in any way?

If you have any thoughts about this, please do write to me. In any case, I’ll write another post with the solution to this version shortly.

I’m a 20 cent coin, get me out of here

 


Usually when I’ve posted puzzles to the blog they’ve had a basis in Probability or Statistics. One exception to this was the mutilated chessboard puzzle, whose inclusion I justified by pointing out that mathematical logic is an important strand in the theory that underpins Statistics. Definitely not the only strand, but important nonetheless.

In this same spirit, here’s another puzzle you might like to look at. I’ll give references to the author and so on when I write a follow-up post with the solution. But, if you think I should only be doing posts that are strictly Probability or Statistics related, please just ignore this post. It’s only related to Statistics in the same way that the mutilated chessboard puzzle was. Having said that, I will use follow-up discussion to this puzzle as a lead-in to some important statistical ideas.

Anyway, here’s the puzzle. Look at this grid of squares…

Actually, you have to imagine the grid extending indefinitely downwards and to the right. In the top left-hand corner of the grid you can see a 2-by-2 section of the grid that’s been marked with red lines, and 3 coins have been placed in that section. Your task is to remove the coins from that section by following these rules:

  1. Coins are removed one at a time.
  2. When you remove a coin you must replace it with two coins, one immediately below and one immediately to the right of the one that’s been removed.
  3. If a coin does not have a free space both immediately below and to the right, it cannot be removed until such space becomes available.

You have to find the sequence of moves that results in the section inside the red square being emptied of coins, or explain why it’s impossible to find such a sequence.

To make things easier, you can try the puzzle using this gif. Again, I’ll give credits and references for this work when I write the follow-up post.

To play just click on the green flag to start and then click successively on the coins you’d like to remove. You will only be allowed to remove coins according to the rules above, and when you do legally remove a coin, two new coins are automatically added, again according to the stated rules. If you just want to start over, press the green flag again.

(I don’t know what the red button is for, but DON’T PRESS IT).

So, can you find the sequence of moves that releases all of the coins from the red square? Or if it can’t be done can you explain why?

Please write to me if you want to discuss the puzzle or share your ideas. I’ll write a post with a solution shortly.

 

 

The wonderful and frightening world of Probability

A while back I posted a puzzle based on a suggestion by Fabian.Thut@smartodds.co.uk in which Smartodds employees are – hypothetically – given the opportunity to increase their bonus by a factor of 10. See the original post for the rules of the game.

As I wrote at the time, the solution is not at all obvious – I don’t think I could have found it myself – but it includes some important ideas from Statistics. It goes as follows…

Each individual employee has a probability equal to 1/2 of finding their number. This is because they can open 50 of the 100 boxes, leaving another 50 unopened. It’s then obvious by symmetry that they must have a 50-50 chance of finding their number, since all numbers are randomly distributed among the boxes.

But recall, the players’ bonus is multiplied by 10 only if all players find their own number.

To begin with, let’s assume that the employees play the game without any strategy at all. In that case they are playing the game independently, and the standard rules of probability mean that we must multiply the individual probabilities to get the overall win probability. So, the probability that the first 2 players both win is 1/2 * 1/2. The probability that the first 3 players all win is 1/2 * 1/2 * 1/2. And the probability that all 100 players win is 1/2 multiplied by itself 100 times, which is roughly

0.000000000000000000000000000000789.

In other words, practically zero. So, the chances of the bonuses being multiplied by 10 are vanishingly small; it’s therefore almost certain that everyone will lose their bonus if they choose to play the game. As Ian.Rutherford@Smartbapps.co.uk wrote to me, it would be ‘one of the worst bets of all time’. No arguments there.
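For the record, the figure above is just (1/2)^100, which you can confirm in one line:

```python
p_all = 0.5 ** 100     # 100 independent players, each winning with probability 1/2
print(f"{p_all:.3e}")  # 7.889e-31
```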

But the amazing thing is that with a planned strategy the probability that all players find their number, and therefore win the bet, can be increased to around 31%. The strategy which yields this probability goes like this…

Recall that the boxes themselves are numbered. Each player starts by opening the box corresponding to their own number. So, Player 1 opens box 1. If it contains their number they’ve won and they stop. Otherwise, whichever number they find in that box is chosen as the number of the box they will next look in. And they carry on this way, till either they find their number and stop; or they open 50 boxes without having found their number. In the first case, that individual player has won, and it is the next player’s turn to enter the room (and play according to the same strategy); in the second case, they – and by the rules of the game the whole set of players – have lost.

So, Player 1 first opens box 1. If that box contains, say, the number 22, they next open box 22. If that contains the number 87, they next open box number 87. And so on, until they find their number or they reach the limit of 50 boxes. Similarly, Player 2 starts with box 2, which might contain the number 90; they next open box number 90; if that contains number 49 they then open box number 49, etc etc. Remarkably with this strategy the players will all find their own numbers – within the limit of 50 boxes each – with a probability of around 31%.
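You can convince yourself of the 31% figure empirically. The following Monte Carlo sketch is my own code, not from the original post; it numbers boxes and cards from 0 for convenience. A player succeeds exactly when the chain starting from their own box returns to their number within 50 steps, i.e. when their cycle is short enough:

```python
import random

def estimate_win_probability(n=100, trials=20000, seed=1):
    # All players succeed with the box-chain strategy exactly when every
    # cycle of the random box -> card permutation has length at most n/2.
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        cards = list(range(n))
        rng.shuffle(cards)       # cards[b] = card hidden in box b
        seen = [False] * n
        all_short = True
        for start in range(n):
            length, box = 0, start
            while not seen[box]:  # walk this cycle once, marking boxes
                seen[box] = True
                box = cards[box]
                length += 1
            if length > n // 2:
                all_short = False
                break
        wins += all_short
    return wins / trials
```

With the seed fixed, the estimate should land within about a percentage point of the true value of roughly 0.312.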

I find this amazing for 2 reasons:

  1. That fairly basic probability techniques can be used to show that the strategy gives a win probability of around 31%;
  2. That the strategy results in such a massive increase in the win probability from virtually 0 to almost 1/3.

Unfortunately, though the calculations are all elementary, the argument is a touch too elaborate for me to reasonably include here. The Wikipedia entry for the puzzle – albeit with the cosmetic change of Smartodds employees being replaced by prisoners – does give the solution though, and it’s extremely elegant. If you feel like stretching your maths and probability skills just a little, it’s well worth a read.

In any case it’s instructive to look at a simpler case included in the Wikipedia article. Things are simplified there to just 8 employees/prisoners who have to find their number by opening at most 4 boxes (i.e. 50% of 8 boxes as opposed to 50% of 100 boxes in the original problem).

In this case suppose the numbers have been randomised into the boxes according to the following table…

Box number:  1 2 3 4 5 6 7 8
Card number: 7 4 6 8 1 3 5 2

Now suppose the players play according to the strategy described above except that they keep playing until they find their number, without stopping at the 4th box, even though they will have lost if they open more than 4 boxes. With the numbers randomised as above you can easily check that the sequence of boxes each player opens is as follows:

Player 1: (1, 7, 5)

Player 2: (2, 4, 8)

Player 3: (3, 6)

Player 4: (4, 8, 2)

Player 5: (5, 1, 7)

Player 6: (6, 3)

Player 7: (7, 5, 1)

Player 8: (8, 2, 4)

With these numbers, since each player opens at most 3 boxes, everyone wins and the employees/prisoners get their increased bonus.

However, had the cards been randomised slightly differently among the boxes, as per the following table…

Box number:  1 2 3 4 5 6 7 8
Card number: 7 4 6 8 2 3 5 1

… then Player 1 (for example) would have followed the sequence (1, 7, 5, 2, 4, 8) and would therefore have lost, having opened more than 4 boxes.
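The chains in both examples are mechanical to reproduce. Here's a small helper (my own sketch) that follows the rule "open your own box, then the box named by the card you just found":

```python
def open_sequence(player, cards):
    # Boxes opened by `player`: start at their own box and follow the
    # chain of cards until the player's own number turns up.
    seq, box = [], player
    while True:
        seq.append(box)
        if cards[box] == player:
            return seq
        box = cards[box]

first = {1: 7, 2: 4, 3: 6, 4: 8, 5: 1, 6: 3, 7: 5, 8: 2}
assert open_sequence(1, first) == [1, 7, 5]
assert open_sequence(3, first) == [3, 6]

# The second randomisation traps Player 1 in a 6-box cycle:
second = {1: 7, 2: 4, 3: 6, 4: 8, 5: 2, 6: 3, 7: 5, 8: 1}
assert open_sequence(1, second) == [1, 7, 5, 2, 4, 8]
```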

Now observe:

  1. Several players follow the same sequence, albeit in a different order. For example, with the first randomisation, Players 1, 5 and 7 are each following the sequence (1, 7, 5) in some order;
  2. The complete set of different sequences – ignoring changes of order – is, in the first case, (1, 7, 5), (2, 4, 8) and (3, 6). These are bound not to overlap and are also bound to contain the complete set of numbers 1-8 between them.
  3. The fact that none of the sequences is longer than 4 with this randomisation means that the game has been won by all players. With the second randomisation, the fact that at least one of the sequences – (1, 7, 5, 2, 4, 8) – is longer than 4 means that the game has been lost.
  4. It follows that the probability the game is won is equal to the probability that the longest sequence has length at most 4.
  5. This argument applies more generally, so that with 100 players the game is won if the longest sequence of boxes opened has length at most 50.

Remarkably, it turns out to be not too difficult to calculate the probability that the longest sequence has length at most half the number of players. And with 100 players it turns out to be approximately 31%. And as if that’s not remarkable enough, the same proof shows that with an unlimited number of players the above strategy leads to a win probability of around 30.7%. In other words, in replacing 100 players with 1 million players, the win probability only drops from around 31% to 30.7%.
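Those figures are easy to reproduce. The key fact, as in the standard cycle argument given in the Wikipedia article, is that a random arrangement of n cards can contain at most one cycle longer than n/2, and a cycle of length exactly k > n/2 occurs with probability 1/k. A quick sketch:

```python
from math import log

def win_probability(n):
    # 1 minus the probability that some cycle is longer than n/2:
    # a cycle of length exactly k > n/2 occurs with probability 1/k,
    # and there can be at most one such cycle.
    return 1 - sum(1 / k for k in range(n // 2 + 1, n + 1))

print(round(win_probability(100), 4))      # 0.3118
print(round(win_probability(10 ** 6), 4))  # 0.3069
print(round(1 - log(2), 4))                # 0.3069 -- the unlimited-player limit
```

The sum approaches ln 2 as n grows, which is why the limiting win probability is 1 - ln 2 ≈ 30.7%.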

All quite incredible. But even without studying the detailed proof you can maybe get an idea from the 8-player example of why the strategy works. By playing this way, even though each individual player wins with probability 1/2, they no longer win or lose independently of one another. If Player 1 wins, every other player in their sequence of searches – Players 5 and 7 in the first example above – also wins. So, the suggested strategy induces dependence in the win/lose events of the individual players, and this leads to a change in win probability from something close to 0 to something close to 1/3.

Something similar actually came up earlier in the blog in the context of accumulator bets. I mentioned that betting on Mark Kermode’s Oscar predictions might be a good accumulator bet since the success of his predictions might not be independent events, and this had the potential to generate value against bookmakers who assume independence when setting prices for accumulators.

Finally, to answer the question: should the employees accept the challenge? If their original bonus is, say, £1000, then that becomes £10000 if they win, but £0 if they lose. So, with probability 31% they gain £9000, but with probability 69% they lose £1000. It follows that their expected gain if they play is

31\% \times \pounds 9000 - 69\% \times \pounds 1000 = \pounds 2100,

which is a comfortably positive expected profit for an outlay of £1000. So, they should definitely play, as long as they follow the strategy described above.
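As a sanity check on the arithmetic (using the rounded 31% from above):

```python
p_win = 0.31
bonus = 1000
# Win: the bonus goes from £1000 to £10000, a gain of £9000; lose: £1000 gone.
expected_gain = p_win * 9 * bonus - (1 - p_win) * bonus
print(round(expected_gain))  # 2100
```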


Two quick footnotes:

  1. It’s more difficult to prove, but it turns out that the strategy described above is optimal – there’s no other strategy that would lead to a bigger win probability than 31%;
  2. All of the above assumes that everyone follows the described strategy correctly. It would take just a couple of players failing to follow the rules for all of the value of the bet to be lost. So, if the employees thought there might be a couple of, let’s say, ‘slow-learners’ in the company, it might be safer for them not to play and just take the £1000 and run.

Massively increase your bonus

In one of the earliest posts to the blog last year I set a puzzle where I suggested Smartodds were offering employees the chance of increasing their bonus, and you had to decide whether it was in their interests to accept the offer or not.

(They weren’t, and they still aren’t, but let’s play along.)

Same thing this year, but the rules are different. Eligible employees are invited to gamble their bonus at odds of 10-1 based on the outcome of a game. It works like this…

For argument’s sake, let’s suppose there are 100 employees that are entitled to a bonus. They are told they each have the opportunity to increase their bonus by a factor of 10 by playing the following game:

  • Each of the employees is randomly assigned a number between 1 and 100.
  • Inside a room there are 100 boxes, also labelled 1 to 100.
  • 100 cards, numbered individually from 1 to 100, have been randomly placed inside the boxes, so each numbered box contains a card with a unique random number from 1 to 100. For example, box number 1 might contain the card with number 62; box number 2 might contain the card with number 25; and so on.
  • The employees enter the room one at a time, and each can choose any 50 of the boxes to open. If they find the card with their own number in one of those boxes, they win. Otherwise they lose.
  • Though the employees may discuss the game and decide how they will play before they enter the room, they must not convey any information to the other employees after taking their turn.
  • The employees cannot rearrange any of the boxes or the cards – so everyone finds the room in the same state when they enter.
  • The employees will have their bonus multiplied by 10 if all 100 of them are winners. If there is a single loser, they all end up with zero bonus.

Should the employees accept this game, or should they refuse it and keep their original bonuses? And if they accept to play, should they adopt any particular strategy for playing the game?

Give it some thought and then scroll down for some discussion.

|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|

A good place to start is to calculate the probability that any one employee is a winner. This happens if one of the 50 boxes they open, out of the 100 available, contains the card with their number. Each box is equally likely to contain their number, so you can easily write down the probability that they win. Scroll down again for the answer to this part:

|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|

There are 100 boxes, and the employee selects 50 of them. Each box is equally likely to contain their number, so the probability they find their number in one of the boxes is 50/100 or 1/2.

So that’s the probability that any one employee wins. We now need to calculate the probability that they all win – bearing in mind the rules of the game – and then decide whether the bet is worth taking.

In summary:

  • There are 100 employees;
  • The probability that any one employee wins their game is 1/2;
  • If they all win, their bonuses will all be multiplied by 10;
  • If any one of them loses, they all get zero bonus.

Should the employees choose to play or to keep their original bonus? And if they play, is there any particular strategy they should adopt?

If you’d like to send me your answers I’d be really happy to hear from you. If you prefer just to send me a yes/no answer, perhaps just based on your own intuition, I’d be equally happy to get your response, and you can use this form to send the answer in that case.


This is a variant on a puzzle pointed out to me by Fabian.Thut@smartodds.co.uk. I think it’s a little more tricky than previous puzzles I’ve posted, but it illustrates a specific important statistical issue that I’ll discuss when giving the solution.

Intuition

Intuition is a great asset, but:

  1. Can sometimes lead you astray;
  2. Always needs to be balanced by doubt, to avoid becoming complacency;
  3. Is never a substitute for genuine understanding and knowledge.

A while back I posed the question

You’re at a party and meet someone. After chatting for a bit, you work out that the girl you’re talking to has 2 kids and one of them is a boy. What are the chances she’s got 2 boys rather than a boy and a girl?

And I discussed the solution to this problem in a subsequent post. As I discussed, although it’s tempting to give an answer of 1/2 – arguing that the other child is equally likely to be a boy or a girl – this reasoning is wrong, and the correct answer is 1/3.

I then followed up this discussion by extending the original problem as follows. Everything is the same, except you are told that the woman has 2 children, one of whom is a boy that was born on a Tuesday. With this information, what is the probability that the other child is also a boy?

Again, it looks like there ought to be an easy answer to this problem. And again, it turns out that this easy answer is wrong. But this time the breakdown of the simple intuitive argument is often surprising even to people who work regularly with probabilities.

The intuitive – but wrong – answer is that the probability is still 1/3. The argument goes that there is no relationship between gender and day of birth. Technically,  gender and day of birth are statistically independent. So, the information provided by the day of birth is completely irrelevant and can be ignored, meaning the probability that the other child is a boy remains at 1/3.

Except it doesn’t. The true answer is 13/27, which is just slightly less than 1/2.

The calculation leading to this answer is not difficult, but slightly more complicated than is reasonable for me to include in this post. If you’re interested, you can find a full explanation here.
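If you'd rather check the 13/27 by brute force than follow the algebra, a direct enumeration works. This sketch is my own; it labels the days 0–6, and which index we call Tuesday is arbitrary:

```python
from fractions import Fraction
from itertools import product

# Each child is a (sex, day) pair; all 14 combinations are equally likely.
children = [(sex, day) for sex in "BG" for day in range(7)]
families = list(product(children, repeat=2))  # 196 equally likely families

TUESDAY = 1  # arbitrary label for Tuesday
has_tuesday_boy = [f for f in families if ("B", TUESDAY) in f]
both_boys = [f for f in has_tuesday_boy if all(sex == "B" for sex, _ in f)]

assert Fraction(len(both_boys), len(has_tuesday_boy)) == Fraction(13, 27)
```

The counts behind the fraction: 27 of the 196 families contain a Tuesday boy, and 13 of those 27 are two-boy families.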

But, as explained in that article, it’s interesting to state the result in slightly more generality.  We added the condition that the boy was born on a Tuesday, an event which has probability 1/7 for both boys or girls. Suppose we replace this event with something else like ‘The boy is left-handed’; or ‘The boy has green eyes’; or ‘The boy was born on Christmas Day’. And suppose the probability of a child – boy or girl – fulfilling this extra condition is p.

For the original Tuesday’s child problem, p=1/7. For the Christmas birthday p=1/365. For green eyes it’s whatever the proportion of people in the population with green eyes is. And so on.

Anyway, if we replace the condition of being born on a Tuesday with some other condition whose probability is p for either boys or girls, it turns out that the probability that both the children are boys, given the information provided, is

Q = \frac{2-p}{4-p}

If we substitute p=1/7 in this expression we get Q = 13/27, which explains the answer given above. But it’s not the value itself that’s important; it’s the fact that it’s different from 1/3. So, although gender and day of birth are independent, giving the extra information about day of birth shifts the original probability from 1/3 to 13/27.

And there’s something even more interesting. If the day of birth extra condition is replaced with a condition that’s less likely, so that p is smaller, then the value of Q gets closer and closer to 1/2. For example, with the Christmas birthday condition, p=1/365 leads to Q ≈ 0.4996573. In other words, if we include a very unusual condition for the child known to be a boy, the probability that the other child is also a boy gets very close to 1/2, which is the answer you get to the original problem by using the wrong intuition.
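The formula is simple enough to explore numerically. There's also a satisfying sanity check: setting p=1, a condition that is always true and so carries no extra information, recovers the original answer of 1/3.

```python
def q(p):
    # Probability both children are boys, given one is a boy who also
    # satisfies an extra condition of probability p.
    return (2 - p) / (4 - p)

print(round(q(1 / 7), 7))    # 0.4814815  (= 13/27, Tuesday's child)
print(round(q(1 / 365), 7))  # 0.4996573  (Christmas birthday)
print(round(q(1), 7))        # 0.3333333  (no extra information: back to 1/3)
```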

Not very intuitive, but it’s the truth.


Finally, here’s Dilbert’s take on intuition:

Proof reading

In an earlier post I described what’s generally known as the Mutilated Chessboard Puzzle. It goes like this: a chessboard has 2 diagonally opposite corners removed. The challenge is to cover the remaining 62 squares with 31 dominoes, each of which can cover 2 adjacent horizontal or vertical squares. Or, to show that such a coverage is impossible.

Several of you wrote to me about this, in many cases providing the correct solution. Thanks and congratulations to all of you.

The correct solution is that it is impossible to cover the remaining squares of the chessboard this way. But what’s interesting about this puzzle – to me at least – is what it illustrates about mathematical proof.

There would essentially be 2 ways to prove the impossibility of  a domino coverage. One way would be to enumerate every possible configuration of the 31 dominoes, and to show that none of these configurations covers the 62 remaining squares on the chessboard. But this takes a lot of time – there are many different ways of laying the dominoes on the chessboard.

The alternative approach is to ‘step back’ and try to reason logically why such a configuration is impossible. This approach won’t always work, but it’s often short and elegant when it does. And with the mutilated chessboard puzzle, it works beautifully…

When you place a domino on a chessboard it will cover 1 black and 1 red square (using the colours in the diagram above). So, 31 dominoes will cover 31 black and 31 red squares. But if you remove diagonally opposite corners from a chessboard, they will be of the same colour, so you’re left with either 32 black squares and 30 red, or vice versa. But you’re never left with 31 squares of each colour, which is the only pattern possible with 31 dominoes. So it’s impossible and the result is proved. Simply and beautifully.
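The counting step is easy to verify in code. This sketch (my own) numbers rows and columns 0–7 and reads a square's colour off the parity of row + column; it confirms that removing diagonally opposite corners leaves a 30/32 colour split:

```python
def colour(row, col):
    # Squares alternate colour, so the parity of row + col gives the colour.
    return (row + col) % 2

board = {(r, c) for r in range(8) for c in range(8)}
remaining = board - {(0, 0), (7, 7)}  # remove diagonally opposite corners

split = [sum(1 for s in remaining if colour(*s) == k) for k in (0, 1)]
print(split)  # [30, 32] -- but 31 dominoes can only ever cover 31 of each
```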

As I mentioned in the previous post the scientific writer Cathy O’Neil cites having been shown this puzzle by her father at a young age as the trigger for her lifelong passion for mathematics. And maybe, even if you don’t have a passion for mathematics yourself, you can at least see why the elegance of this proof might trigger someone’s love for mathematics in the way it did for Cathy.

Having said all that, computer technology now makes proof by enumeration possible in situations where the number of configurations to check might be very large. But structured mathematical thinking is still often necessary to determine the parameters of the search. A good example of this is the well-known four colour theorem. This states that if you take any region that’s been divided into sub-regions – like a map divided into countries – then you only need four colours to shade the map in such a way that no adjacent regions have the same colour.

Here’s an example from the Wikipedia article:

You can see that, despite the complexity of the sub-regions, only 4 colours were needed to achieve a colouring in which no two adjacent regions have the same colour.

But how would you prove that any map of this type would require at most 4 colours? Ideally, as with the mutilated chessboard puzzle, you’d like a ‘stand back’ proof, based on pure logic. But so far no one has been able to find one. Equally, enumeration of all possible maps is clearly impossible – any region can be divided into subregions in infinitely many ways.

Yet a proof has been found which is a kind of hybrid of the ‘stand back’ and ‘enumeration’ approaches. First, a deep understanding of mathematical graphs was used to reduce the infinitely many possible regions to a finite number – actually, around 2000 – of maps to consider. That’s to say, it was shown that it’s not necessary to consider all possible regional mappings – if a 4-colour shading of a certain set of  2000ish different maps could be found, this would be enough to prove that such a shading existed for all possible maps. Then a computer algorithm was developed to search for a 4-colour shading for each of the identified 2000 or so maps.  Putting all of this together completed the proof that a 4-colour shading existed for any map, not just the ones included in the search.

Now, none of this is strictly Statistics, though Cathy O’Neil’s book that I referred to in the previous post is in the field of data science, which is at least a close neighbour of Statistics. But in any case, Statistics is built on a solid mathematical framework, and things that we’ve seen in previous posts like the Central Limit Theorem – the phenomenon by which the frequency distributions of many naturally occurring phenomena end up looking bell-shaped – are often based on the proof of a formal mathematical expression, which in some cases is as simple and elegant as that of the mutilated chessboard puzzle.


I’ll stop this thread here so as to avoid a puzzle overload, but I did want to mention that there is an extension of the Mutilated Chessboard Puzzle. Rather than removing 2 diagonally opposite corners, suppose I remove any 2 arbitrary squares, possibly adjacent, possibly not. In that case, can the remaining squares be covered by 31 dominoes?

If the 2 squares removed are of the same colour, the solution given above works equally well, so we know the problem can’t be solved in that case. But what if I remove one black and one red square? In that case, can the remaining squares be covered by the 31 dominoes:

  1. Always;
  2. Sometimes; or
  3. Never?

I already sent this problem to some of you who’d sent me a solution to the original problem. And I should give a special mention to Fabian.Thut@smartodds.co.uk who provided a solution which is completely different to the standard textbook solution. Which illustrates another great thing about mathematics: there is often more than one solution to the same problem. If you’d like to try this extension to the original problem, or discuss it with me, please drop me a line.

 

 

Family problems

In an earlier post, I set the following problem:

You’re at a party and meet someone. After chatting for a bit, you work out that the girl you’re talking to has 2 kids and one of them is a boy. What are the chances she’s got 2 boys rather than a boy and a girl?

Following some feedback, I later updated that post to clarify that the question isn’t intended to be about possible slight differences in the birth rates of boys and girls. That’s an interesting biological and demographic issue, but wasn’t intended as the point of the question. For the purposes of the question I simply meant to assume that every child, regardless of a mother’s previous history of births, is equally likely to be a boy or a girl.

In that case, it’s very tempting to answer the question above as 1/2. Indeed, this was the point of the post. One child’s a boy. The other is just as likely to be a boy or a girl, so the answer must be 1/2.

Except it’s not.

The answer is 1/3, and here’s the reasoning…

Without any additional information, if a woman has a 2-child family the possibilities (with the oldest child listed first) are:

Boy-Boy, Boy-Girl, Girl-Boy, Girl-Girl

and because all children are equally likely to be male or female, these combinations are all equally likely. But we can rule out the Girl-Girl combination from the information in the problem, so the remaining possibilities are

Boy-Boy, Boy-Girl, Girl-Boy

with each being equally likely. Only in the first of these pairs is the other child also a boy. So in just 1 of the 3 equally likely outcomes is the other child a boy, and the probability is 1/3.
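If you’re not convinced by the enumeration, you can check it with a quick simulation: generate lots of 2-child families, keep only those with at least one boy, and see what fraction have two boys. This is just a sketch of that check; the function name is mine, not anything standard.

```python
import random

def two_boys_given_one_boy(trials=200_000, seed=1):
    # Simulate 2-child families; among those with at least one boy,
    # estimate the fraction where both children are boys.
    rng = random.Random(seed)
    at_least_one_boy = 0
    two_boys = 0
    for _ in range(trials):
        kids = [rng.choice("BG"), rng.choice("BG")]
        if "B" in kids:
            at_least_one_boy += 1
            if kids == ["B", "B"]:
                two_boys += 1
    return two_boys / at_least_one_boy

print(two_boys_given_one_boy())  # close to 1/3, not 1/2
```

With a couple of hundred thousand simulated families the answer settles very close to 1/3, in line with the argument above.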

This simple problem illustrates the difficulty in calculating what is called a conditional probability – the probability of something conditional on some given information, in this case that one of the children is a boy. Whereas in general the chance of a child being a boy is 1/2, once you include the extra information that we’re looking at a 2-child family in which at least one of the children is a boy, the probability changes. At first it seems counter-intuitive, but once you’ve seen a few problems of this type, your intuition becomes sharper.

With that in mind, let me pose the same problem as above, but suppose you find out that one of the woman’s kids is a boy that was born on a Tuesday. Now what’s the probability that the other child is also a boy?

As usual, I’ll write a future post with the solution and discussion. But if you’d like to send me your own ideas, either by mail or via this survey form, I’d be really happy to hear from you.


Thanks to those of you who replied to the original question. Apart from some initial confusion about whether I was suggesting boys might be more or less common than girls in general, there was roughly a 50-50 split in answers between 1/2 and 1/3. As explained above, it’s easy to be misled into thinking the answer is 1/2, so there is no embarrassment in arriving at that answer. And like I say, that was the whole point of the post. Nonetheless, well done to those of you who got the correct answer of 1/3.

It’ll be interesting to see what replies I get to the revised problem above, so please do send me your answer or thoughts on the problem if you have time.

By coincidence


In an earlier post I suggested we play a game. You’d pick a sequence of three outcomes of a coin toss, like THT. Then I’d pick a different triplet, say TTT. I’d then toss a coin repeatedly and whoever’s chosen triplet showed up first in the sequence would be the winner.

In the post I gave the following example…

H T H H H T T H T H …..

… and with that outcome and the choices above you’d have won since your selection of THT shows up starting on the 7th coin toss, without my selection of TTT showing up before.

The question I asked was who this game favoured. Assuming we both play as well as possible, does it favour

  1. neither of us, because we both get to choose and the game is symmetric? Or;
  2. you, because you get to choose first and have the opportunity to do so optimally? Or;
  3. me, because I get to see your choice before I have to make mine?

The answer turns out to be 3. I have a big advantage over you, if I play smart. We’ll discuss what that means in a moment.

But in terms of these possible answers, it couldn’t have been 2. Whatever you choose I could have chosen the exact opposite and by symmetry, since H and T are equally likely, our two triplets would have been equally likely to occur first in the sequence. So, if you choose TTT, I choose HHH. If you choose HHT, I choose TTH and so on. In this way I don’t have an advantage over you, but neither do you over me. So we can rule out 2 as the possible answer.

But I can actually play better than that and have an advantage over you, whatever choice you make. I play as follows:

  1. My first choice in the sequence is the opposite of your second choice.
  2. My second and third choices are equal to your first and second choices.

So, if you chose TTT, I would choose HTT. If you chose THT, I would choose TTH. And so on. It’s not immediately obvious why this should give me an advantage, but it does. And it does so for every choice you can make.
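The two rules above are simple enough to write as a one-line transformation. This is just an illustrative sketch – the function name is my own – but it reproduces the strategy exactly.

```python
def penney_response(choice):
    # Rule 1: my first symbol is the opposite of your second.
    # Rule 2: my second and third symbols are your first and second.
    flip = {"H": "T", "T": "H"}
    return flip[choice[1]] + choice[0] + choice[1]

print(penney_response("TTT"))  # HTT
print(penney_response("THT"))  # TTH
```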

The complete set of selections you can make, the selections I will make in response, and the corresponding odds in my favour are given in the following table.

Your choice    My choice    My win odds
HHH            THH          7:1
HHT            THH          3:1
HTH            HHT          2:1
HTT            HHT          2:1
THH            TTH          2:1
THT            TTH          2:1
TTH            HTT          3:1
TTT            HTT          7:1

As you can see, your best choice is to go for any of HTH, HTT, THH, THT, but even then the odds are 2:1 in my favour. That’s to say, I’m twice as likely to win as you in those circumstances. My odds increase to 3:1 – I’ll win three times as often as you – if you choose HHT or TTH; and my odds are a massive 7:1 – I’ll win seven times as often as you – if you choose HHH or TTT.
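You don’t have to take the table on trust: the odds can be estimated by simulating the game many times. Here’s a sketch of such a simulation (the function name is mine); it tosses a fair coin until one of the two triplets appears and records who won.

```python
import random

def first_to_appear(yours, mine, trials=100_000, seed=7):
    # Estimate the probability that `mine` appears before `yours`
    # in a sequence of fair coin tosses.
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        seq = ""
        while True:
            seq += rng.choice("HT")
            if seq.endswith(mine):
                wins += 1
                break
            if seq.endswith(yours):
                break
    return wins / trials

print(first_to_appear("HHH", "THH"))  # close to 7/8, i.e. odds of 7:1
print(first_to_appear("HTH", "HHT"))  # close to 2/3, i.e. odds of 2:1
```

Running this for each row of the table reproduces the stated odds to within simulation error.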

So, even if you play optimally, I’ll win twice as often as you. But why should this be so? The probabilities aren’t difficult to calculate, but most are a little more complicated than I can reasonably include here. Let’s take the easiest example though. Suppose you choose HHH, in which case I choose THH. I then start tossing the coins. It’s possible that HHH will be the first 3 coins in the sequence. That will happen with probability 1/2 x 1/2 x 1/2 = 1/8. But if that doesn’t happen, there’s no way you can win: the first time HHH appears in the sequence, it must have been preceded by a T (otherwise HHH would have occurred earlier), in which case my THH occurs before your HHH. So, you win with probability 1/8, I win with probability 7/8, and my odds of winning are 7:1.

Like I say, the other cases – except when you choose TTT, which is identical to HHH, modulo switching H’s and T’s – are a little more complicated, but the principle is essentially the same every time.

By coincidence, this game was invented by Walter Penney (Penney -> penny, geddit?), who published it in the Journal of Recreational Mathematics in 1969. It’s interesting from a mathematical/statistical/game-theory point of view because it’s an example of a non-transitive game. For example, looking at the table above, HHT is inferior to THH; THH is inferior to TTH; TTH is inferior to HTT; and HTT is inferior to HHT. Which brings us full circle. So, there’s no overall optimal selection. Each can be beaten by another, which in turn can be beaten by a different choice again. This is why the second player has an advantage: they can always find a selection that will beat the first player’s. It doesn’t matter that their choice can also be beaten, because the first player has already made their selection and it wasn’t that one.

The best known example of a non-transitive game is Rock-Paper-Scissors. Rock beats scissors; scissors beats paper; paper beats rock. But in that case it’s completely deterministic – rock will always beat scissors, for example. In the Penney coin tossing game, HTT will usually beat TTT, but occasionally it won’t. So, it’s perhaps better described as a random non-transitive game.

The game also has connections with genetics. Strands of DNA are long chains composed of sequences of individual molecules known as nucleotides. Only four different types of nucleotide are found in DNA strands, and these are usually labelled A, T, G and C. The precise ordering of these nucleotides in the DNA strand effectively define a code that will determine the characteristics of the individual having that DNA.

It’s not too much of a stretch of the imagination to see that a long sequence of nucleotides A, T, G and C is not so very different – albeit with 4 variants, rather than 2 – from a sequence of outcomes just like those from the tosses of a coin. Knowing which combinations of the nucleotides are more likely to occur than others, and other combinatorial aspects of their arrangements, proves to be an important contribution of Statistics to the understanding of genetics and the development of genetic intervention therapies. Admittedly, it’s not a direct application of Penney’s game, but the statistical objectives and techniques required are not so very different.


Thanks to those of you who wrote to me with solutions to this problem, all of which were at least partially correct.

Tennis puzzles

They’re not especially statistical, and not especially difficult, but I thought you might like to try some tennis-based puzzle questions. I’ve mentioned before that Alex Bellos has a fortnightly column in the Guardian where he presents mathematical puzzles of one sort or another. Well, to coincide with the opening of Wimbledon, today’s puzzles have a tennis-based theme. You can find them here.

I think they’re fairly straightforward, but in any case, Alex will be posting the solutions later today if you want to check your own answers.

I say they’re not especially statistical, but there is quite a lot of slightly intricate probability associated with tennis, since live tennis betting is a lucrative market these days. Deciding whether a bet is good value or not means taking the current score and an estimate of the players’ relative abilities, and converting that into a match win probability for either player, which can then be compared against the bookmakers’ odds. But how is that done? The calculations are reasonably elementary, but complicated by both the scoring system and the fact that players tend to be more likely to win a point on serve than return.

If you’re interested, the relevant calculations for all score situations are available in this academic paper, though this assumes players are equally strong on serve and return. It also assumes the outcome of each point is statistically independent from all other points – that’s to say, knowing the outcome of one point doesn’t affect the probability of who wins another point. So, to add to Alex’s 3 questions above, I might add:

Why might tennis points not be statistically independent in practice, and what is the likely effect on match probability calculations of assuming they are when they’re not?