I’m a 20 cent coin, get me out of here

 

European 20 Cent Coins (front and back) isolated on white background

Usually when I’ve posted puzzles to the blog they’ve had a basis in Probability or Statistics. One exception to this was the mutilated chessboard puzzle, whose inclusion I justified by pointing out that mathematical logic is an important strand in the theory that underpins Statistics. Definitely not the only strand, but important nonetheless.

In this same spirit, here’s another puzzle you might like to look at. I’ll give references to the author and so on when I write a follow-up post with the solution. But, if you think I should only be doing posts that are strictly Probability or Statistics related, please just ignore this post. It’s only related to Statistics in the same way that the mutilated chessboard puzzle was. Having said that, I will use follow-up discussion to this puzzle as a lead-in to some important statistical ideas.

Anyway, here’s the puzzle. Look at this grid of squares…

Actually, you have to imagine the grid extending indefinitely downwards and to the right. In the top left-hand corner of the grid you can see a 2-by-2 section of the grid that’s been marked with red lines, and 3 coins have been placed in that section. Your task is to remove the coins from that section by following these rules:

  1. Coins are removed one at a time.
  2. When you remove a coin you must replace it with two coins, one immediately below and one immediately to the right of the one that’s been removed.
  3. If a coin does not have a free space both immediately below and to the right, it cannot be removed until such space becomes available.

You have to find the sequence of moves that results in the section inside the red square being emptied of coins, or explain why it’s impossible to find such a sequence.
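If you want to experiment away from the grid itself, the rules are easy to encode. Here’s a minimal sketch in Python – the class name and coordinate convention are my own, with positions as (row, column) pairs and (0, 0) at the top left:

```python
# Minimal sketch of the puzzle state. Positions are (row, col) pairs,
# (0, 0) top-left; the grid extends indefinitely down and to the right.
class CoinPuzzle:
    def __init__(self, coins):
        self.coins = set(coins)   # positions currently holding a coin

    def can_remove(self, pos):
        r, c = pos
        # Rule 3: the spaces immediately below and to the right must
        # both be free before a coin can be removed.
        return (pos in self.coins
                and (r + 1, c) not in self.coins
                and (r, c + 1) not in self.coins)

    def remove(self, pos):
        # Rules 1 and 2: remove one coin, add two new ones below
        # and to the right of it.
        if not self.can_remove(pos):
            raise ValueError(f"illegal move at {pos}")
        r, c = pos
        self.coins.remove(pos)
        self.coins.update({(r + 1, c), (r, c + 1)})

# The starting position: 3 coins inside the marked 2-by-2 section.
puzzle = CoinPuzzle([(0, 0), (0, 1), (1, 0)])
puzzle.remove((1, 0))   # legal: (2, 0) and (1, 1) are both free
```

Whether a sequence of such moves can ever empty the 2-by-2 section is, of course, the puzzle.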

To make things easier, you can try the puzzle using this gif. Again, I’ll give credits and references for this work when I write the follow-up post.

To play just click on the green flag to start and then click successively on the coins you’d like to remove. You will only be allowed to remove coins according to the rules above, and when you do legally remove a coin, two new coins are automatically added, again according to the stated rules. If you just want to start over, press the green flag again.

(I don’t know what the red button is for, but DON’T PRESS IT).

So, can you find the sequence of moves that releases all of the coins from the red square? Or if it can’t be done can you explain why?

Please write to me if you want to discuss the puzzle or share your ideas. I’ll write a post with a solution shortly.

 

 

Sleight of hand


A while ago I sent a post with a card trick which I explained had a statistical element to it, and asked you to try to work out how it was done. Thanks to those of you who wrote to me with variants on the correct answer.

The rules of the game were that my assistant, Matteo, chose a card at random hidden from me. It happened to be a 5 in the video. I then turned the cards over one at a time and Matteo had to play a counting game. Once he reached the 5th card, he noted its value, which was a 10. So he then counted another 10 cards in the sequence, noted the value of that card, and so on until we ran out of cards. Matteo had to remember the final card in his sequence before the cards ran out, which turned out to be the eight of diamonds. My task as the magician was to predict what Matteo’s final card was, which I did successfully.

Now, there are 2 reasons why this is a statistical card trick.

  1. It doesn’t always work. It does so with a reasonably high probability but, depending on the configuration of the cards once they are shuffled, it won’t always. I’ll be honest: we had to remake the video several times, but that was always due to my incompetence in explaining the trick and not because it ever failed. Still, it won’t always work.
  2. The second reason it’s a statistical trick is in its execution. The way it works is that I also play the same counting game as Matteo, but starting with the value of the first card I turn over, which happened to be a 10. So, we’re both playing the same counting game but from different starting points. Matteo’s starting point is 5, mine is 10. Although we start from different places, it turns out to be quite likely – though not certain – that the counting sequences we follow will overlap at some point. And once they do overlap, we are then following exactly the same sequence and so will arrive at the same final card.

Technically, the sequences of cards Matteo and I are both following are called Markov chains. These are sequences of random numbers such that in order to understand what the next card might be I only need to know the value of the current card, without knowing the past sequence that took me to the current state. In other words, when Matteo has to start counting 10 cards, it doesn’t matter how he got to that position, just that that’s where he currently is. And I also generate my own Markov chain. With an unlimited number of cards in a pack, the mathematical properties of Markov chains would guarantee that our sequences meet at some point, after which we would be following exactly the same sequence, leading me to have the same final card as Matteo. With just 52 cards in a pack, there’s no guarantee, which is why the trick won’t always work.

The fact that the trick might not work is a little undesirable, but you can increase the chances of success by counting picture cards as 1 rather than 10. This makes the counting sequences change card more often, which increases the chance that the two sequences overlap.
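You can check both versions of the trick with a small simulation. The setup details here are my own reading of the rules – the magician’s chain starts at the first card, and the spectator’s secret card has a value from 1 to 10 – so treat it as a sketch rather than a definitive implementation:

```python
# Monte-Carlo sketch of the trick. `picture` is the value assigned to
# J, Q, K: 10 in the original version, 1 in the variant.
import random

def last_index(deck, start):
    """Follow the counting game from `start` until the deck runs out."""
    i = start
    while i + deck[i] < len(deck):
        i += deck[i]
    return i   # index of the final card in the sequence

def success_rate(picture, trials=5000, seed=7):
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        deck = [v if v <= 10 else picture for v in list(range(1, 14)) * 4]
        rng.shuffle(deck)
        secret = rng.randrange(1, 11)   # value of the spectator's card
        # Counting `secret` cards lands on index secret - 1 (0-based);
        # the magician's chain simply starts at the first card.
        wins += last_index(deck, 0) == last_index(deck, secret - 1)
    return wins / trials
```

Running `success_rate(10)` and `success_rate(1)` shows the variant with picture cards counted as 1 giving the higher success rate, as suggested above.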


Markov chains are actually really important building blocks for modelling in many areas of Statistics, which is one reason why I posted the card trick to the blog. I’ll explain this in a future post though.

 

Getting high

Speed climbing is what it says on the tin: climbing at speed. The objective is to climb a standard wall with a height of 15 metres as quickly as possible. Speed climbing is actually one of three disciplines – the others being ‘bouldering’ and ‘lead’ – that together comprise Sport Climbing. This combined category will be included as an Olympic sport for the first time in Tokyo, 2020.

The history of Sport Climbing is relatively brief. It seems to have developed from Sportroccia, which was the first international competition for climbers held in different locations in Italy from 1985 to 1989. This led to the first World Championships in Frankfurt in 1991, since which there has been a Sport Climbing World Championship event held every two years.

The inclusion of speed climbing as one of the disciplines in Sport Climbing has always been controversial. Many climbers regard the techniques required to climb at speed as being at odds with the skills that are needed for genuine outdoor climbs, like the one in the picture at the header of this post.

The controversy is such that even though Sport Climbing will be in the Olympics for the first time in 2020, a new format is being proposed for the 2024 Olympics in which Speed Climbing is separated as a discipline from the other two categories.

Anyway, leaving the controversy aside, climbing 15 metres doesn’t sound too daunting until you look at a picture of what it entails…

For experienced climbers a wall like this isn’t particularly challenging, but speed climbers have the additional task of competing against both an opponent – who is simultaneously completing an identical course – and the clock. The current world records are 5.48 seconds for men and 6.995 seconds for women. Just to put that in perspective: the men’s record corresponds to a speed of almost 10 km per hour. Vertically. With not much to hold onto.

The women’s world record was actually set very recently by the Indonesian climber Aries Susanti Rahayu – nicknamed Spiderwoman. You can see her record-breaking climb here.

The men’s world record was set by the Iranian climber Reza Alipourshenazandifar in 2017. (Performance here.)

Like my recent discussion about marathon times, what’s interesting about speed climbing from a statistical point of view is trying to assess what the fastest possible climb time might be.

The following graph shows how the records have fallen over time for both men and women.

Though irregular, you could convince yourself that the pattern for women’s records is approximately following a straight line. On the other hand, notwithstanding the lack of data, the pattern for men seems more like a curve that could be levelling off. These two observations aren’t mutually consistent though, as they would suggest that not too far into the future the women’s record will be faster than the men’s, which is implausible – though not impossible – for biological reasons.

This illustrates a number of difficulties with statistical modelling in this type of context:

  1. We have very few data to work with;
  2. To predict forwards we need to assume some basic pattern for the data, but the choice of pattern – say linear or curved – is likely itself to affect how results extrapolate into the future;
  3. Separate extrapolations for women and men might lead to incompatible results;
  4. As also discussed in the context of predicting ultimate marathon times, an extrapolation based just on numbers ignores the underlying physics and biology which ultimately determines what the limits of human capacity are.
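Point 2 is easy to demonstrate. Fit a straight line and a quadratic curve to the same handful of record times – the numbers below are invented for illustration, not the real climbing records – and the two fits, near-identical over the data, extrapolate to wildly different predictions:

```python
# Fit a line and a quadratic to four invented record times and
# extrapolate both to 2030. x counts years since 2011.
import numpy as np

x = np.array([0.0, 1.0, 3.0, 6.0])          # 2011, 2012, 2014, 2017
times = np.array([6.26, 5.88, 5.60, 5.48])  # hypothetical records (s)

line = np.polyfit(x, times, 1)    # record keeps falling at a fixed rate
curve = np.polyfit(x, times, 2)   # record decelerates over the data

pred_line = np.polyval(line, 19.0)    # extrapolated to 2030
pred_curve = np.polyval(curve, 19.0)
# The two fits barely differ over 2011-2017, but by 2030 the line
# predicts an ever-faster record while the curve bends back upwards:
# the choice of pattern, not the data, drives the extrapolation.
```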

Maybe have a look at the data yourselves and write to me if you have ideas about what the ultimate times for both men and women might be. I’ll post any suggestions and perhaps even add ideas of my own in a future post.

The wonderful and frightening world of Probability

A while back I posted a puzzle based on a suggestion by Fabian.Thut@smartodds.co.uk in which Smartodds employees are – hypothetically – given the opportunity to increase their bonus by a factor of 10. See the original post for the rules of the game.

As I wrote at the time, the solution is not at all obvious – I don’t think I could have found it myself – but it includes some important ideas from Statistics. It goes as follows…

Each individual employee has a probability equal to 1/2 of finding their number. This is because they can open 50 of the 100 boxes, leaving another 50 unopened. It’s then obvious by symmetry that they must have a 50-50 chance of finding their number, since all numbers are randomly distributed among the boxes.

But recall, the players’ bonus is multiplied by 10 only if all players find their own number.

To begin with, let’s assume that the employees play the game without any strategy at all. In that case they are playing the game independently, and the standard rules of probability mean that we must multiply the individual probabilities to get the overall win probability. So, the probability that the first 2 players both win is 1/2 * 1/2. The probability that the first 3 players all win is 1/2 * 1/2 * 1/2. And the probability that all 100 players win is 1/2 multiplied by itself 100 times, which is roughly

0.000000000000000000000000000000789.

In other words, practically zero. So, the chance of the bonuses being multiplied by 10 is vanishingly small; it’s therefore almost certain that everyone will lose their bonus if they choose to play the game. As Ian.Rutherford@Smartbapps.co.uk wrote to me, it would be ‘one of the worst bets of all time’. No arguments there.
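A quick check of that arithmetic:

```python
# The chance that all 100 players independently find their number.
p_all = 0.5 ** 100   # about 7.9e-31
```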

But the amazing thing is that with a planned strategy the probability that all players find their number, and therefore win the bet, can be increased to around 31%. The strategy which yields this probability goes like this…

Recall that the boxes themselves are numbered. Each player starts by opening the box corresponding to their own number. So, Player 1 opens box 1. If it contains their number they’ve won and they stop. Otherwise, whichever number they find in that box is chosen as the number of the box they will next look in. And they carry on this way, till either they find their number and stop; or they open 50 boxes without having found their number. In the first case, that individual player has won, and it is the next player’s turn to enter the room (and play according to the same strategy); in the second case, they – and by the rules of the game the whole set of players – have lost.

So, Player 1 first opens box 1. If that box contains, say, the number 22, they next open box 22. If that contains the number 87, they next open box number 87. And so on, until they find their number or they reach the limit of 50 boxes. Similarly, Player 2 starts with box 2, which might contain the number 90; they next open box number 90; if that contains number 49 they then open box number 49, etc etc. Remarkably with this strategy the players will all find their own numbers – within the limit of 50 boxes each – with a probability of around 31%.
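Before looking at the proof, you can convince yourself numerically with a small simulation of the strategy (the function name and trial count are my own choices):

```python
# Monte-Carlo sketch of the strategy: 100 players, 100 boxes,
# a 50-box limit per player.
import random

def all_players_win(n=100, limit=50):
    boxes = list(range(n))
    random.shuffle(boxes)        # boxes[i] = number hidden in box i
    for player in range(n):
        box = player             # start at the box with your own number
        for _ in range(limit):
            if boxes[box] == player:
                break            # found it within the limit
            box = boxes[box]     # next, open the box just revealed
        else:
            return False         # one player failing loses the game
    return True

random.seed(1)
trials = 2000
win_rate = sum(all_players_win() for _ in range(trials)) / trials
# win_rate should land close to 0.31
```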

I find this amazing for 2 reasons:

  1. That fairly basic probability techniques can be used to show that the strategy leads to a win probability of around 31%;
  2. That the strategy results in such a massive increase in the win probability from virtually 0 to almost 1/3.

Unfortunately, though the calculations are all elementary, the argument is a touch too elaborate for me to reasonably include here. The Wikipedia entry for the puzzle – albeit with the cosmetic change of Smartodds employees being replaced by prisoners – does give the solution though, and it’s extremely elegant. If you feel like stretching your maths and probability skills just a little, it’s well worth a read.

In any case it’s instructive to look at a simpler case included in the Wikipedia article. Things are simplified there to just 8 employees/prisoners who have to find their number by opening at most 4 boxes (i.e. 50% of 8 boxes, as opposed to 50% of 100 boxes in the original problem).

In this case suppose the numbers have been randomised into the boxes according to the following table…

Box number:   1  2  3  4  5  6  7  8
Card number:  7  4  6  8  1  3  5  2

Now suppose the players play according to the strategy described above except that they keep playing until they find their number, without stopping at the 4th box, even though they will have lost if they open more than 4 boxes. With the numbers randomised as above you can easily check that the sequence of boxes each player opens is as follows:

Player 1: (1, 7, 5)

Player 2: (2, 4, 8)

Player 3: (3, 6)

Player 4: (4, 8, 2)

Player 5: (5, 1, 7)

Player 6: (6, 3)

Player 7: (7, 5, 1)

Player 8: (8, 2, 4)

With these numbers, since each player opens at most 3 boxes, everyone wins and the employees/prisoners get their increased bonus.

However, had the cards been randomised slightly differently among the boxes, as per the following table…

Box number:   1  2  3  4  5  6  7  8
Card number:  7  4  6  8  2  3  5  1

… then Player 1 (for example) would have followed the sequence (1, 7, 5, 2, 4, 8) and would therefore have lost, having opened more than 4 boxes.

Now observe:

  1. Several players follow the same sequence, albeit in a different order. For example, with the first randomisation, Players 1, 5 and 7 are each following the sequence (1, 7, 5) in some order;
  2. The complete set of different sequences – ignoring changes of order – are in the first case (1, 7, 5), (2, 4, 8) and (3, 6). They are bound not to overlap and are also bound to contain the complete set of numbers 1-8 between them.
  3. The fact that none of the sequences is longer than 4 with this randomisation means that the game has been won by all players. With the second randomisation, the fact that at least one of the sequences – (1, 7, 5, 2, 4, 8) – is longer than 4 means that the game has been lost.
  4. It follows that the probability the game is won is equal to the probability that the longest sequence has length at most 4.
  5. This argument applies more generally, so that with 100 players the game is won if the longest sequence of boxes opened has length at most 50.
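Observations 1 and 2 are really statements about the cycle structure of a permutation, which is easy to check in code. Here’s a sketch using the two randomisations from the tables above (the helper function is my own):

```python
# The box-following strategy traces the cycles of a permutation.
# perm[i] is the card number hidden in box i.
def cycles(perm):
    seen, out = set(), []
    for start in perm:
        if start in seen:
            continue
        cyc, x = [], start
        while x not in seen:
            seen.add(x)
            cyc.append(x)
            x = perm[x]           # follow the number to the next box
        out.append(cyc)
    return out

first = {1: 7, 2: 4, 3: 6, 4: 8, 5: 1, 6: 3, 7: 5, 8: 2}
second = {1: 7, 2: 4, 3: 6, 4: 8, 5: 2, 6: 3, 7: 5, 8: 1}

# First randomisation: longest cycle has length 3, so everyone wins.
# Second: the cycle (1, 7, 5, 2, 4, 8) has length 6 > 4, so they lose.
```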

Remarkably, it turns out to be not too difficult to calculate this probability that the longest sequence has length at most half the number of players. And with 100 players it turns out to be approximately 31%. And as if that’s not remarkable enough, the same proof shows that with an unlimited number of players the above strategy leads to a win probability of around 30.7%. In other words, in replacing 100 players with 1 million players, the win probability only drops from around 31% to 30.7%.
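For the record, the calculation runs as follows: the game is lost exactly when the random assignment of cards to boxes – viewed as a permutation – contains a cycle longer than half the boxes, and a cycle of length exactly k > n/2 occurs with probability 1/k (there can be at most one such cycle). So the win probability is one minus a harmonic sum:

```python
# Exact win probability for the 100-player game, using exact fractions.
from fractions import Fraction

n = 100
# P(some cycle longer than n/2) = sum of 1/k for k = n/2 + 1, ..., n.
p_lose = sum(Fraction(1, k) for k in range(n // 2 + 1, n + 1))
p_win = 1 - p_lose   # approximately 0.3118
```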

All quite incredible. But even without studying the detailed proof you can maybe get an idea from the 8-player example of why the strategy works. By playing this way, even though each individual player wins with probability 1/2, they no longer win or lose independently of one another. If Player 1 wins, every other player in their sequence of searches – Players 5 and 7 in the first example above – also wins. So, the suggested strategy induces dependence in the win/lose events of the individual players, and this leads to a change in win probability from something close to 0 to something close to 1/3.

Something similar actually came up earlier in the blog in the context of accumulator bets. I mentioned that betting on Mark Kermode’s Oscar predictions might be a good accumulator bet since the success of his predictions might not be independent events, and this had the potential to generate value against bookmakers who assume independence when setting prices for accumulators.

Finally, to answer the question: should the employees accept the challenge? If their original bonus is, say, £1000, then that becomes £10000 if they win, but £0 if they lose. So, with probability 31% they gain £9000, but with probability 69% they lose £1000. It follows that their expected gain if they play is

31% × £9000 − 69% × £1000 = £2100,

which is a comfortably positive expected profit for an outlay of £1000. So, they should definitely play, as long as they follow the strategy described above.
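A one-line sanity check of that expected-value calculation:

```python
# Expected gain from playing, with the win probability at 31%.
win_prob = 0.31
expected_gain = win_prob * 9000 - (1 - win_prob) * 1000   # = £2100
```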


Two quick footnotes:

  1. It’s more difficult to prove, but it turns out that the strategy described above is optimal – there’s no other strategy that would lead to a bigger win probability than 31%;
  2. All of the above assumes that everyone follows the described strategy correctly. It would take just a couple of players failing to follow the rules for all of the value of the bet to be lost. So, if the employees thought there might be a couple of, let’s say, ‘slow learners’ in the company, it might be safer not to play and just take the £1000 and run.

Relatively speaking

Last week, when discussing Kipchoge’s recent sub 2-hour marathon run, I showed the following figure which compares histograms of marathon race times in a large database of male and female runners.

I mentioned then that I’d update the post to discuss the unusual shape of the histograms. The point I intended to make concerns the irregularity of the graphs. In particular, there are many spikes, especially before the 3, 3.5 and 4 hour marks. Moreover, there is a very large drop in the histograms – most noticeably for men – after the 4 hour mark.

This type of behaviour is unusual in random processes: frequency diagrams of this type, especially those based on human characteristics, are generally much smoother. Naturally, with any sample data, some degree of irregularity in frequency data is inevitable, but:

  1. These graphs are based on a very large sample of more than 3 million runners, so random variations are likely to be very small;
  2. Though irregular in shape, the timings of the irregularities are themselves regular.

So, what’s going on?

The irregularities are actually a consequence of the psychology of marathon runners attempting to achieve personal targets. For example, many ‘average’ runners will set a race time target of 4 hours, and then – either through a programmed training regime or sheer force of will on the day of the race – will push themselves to achieve this race time. Most likely not by much, but enough to be on the left side of the 4-hour mark.

The net effect of many runners behaving similarly is to cause a surge of race times just before the 4-hour mark and a dip thereafter. There’s a similar effect at 3 and 3.5 hours – albeit of a slightly smaller magnitude – and smaller effects still at what seem to be around 10 minute intervals. So, the spikes in the histograms are due to runners consciously adapting their running pace to meet self-set objectives which are typically at regular times like 3, 3.5, 4 hours and so on.

Thanks to those of you that wrote to me to explain this effect.

Actually though, since writing the original post, something else occurred to me about this figure, which is why I decided to write this separate post instead of just updating the original one. Take a look at the right hand side of the plot – perhaps from a finish time of around 5 hours onwards. The values of the histograms are pretty much the same for men and women in this region. This contrasts sharply with the left side of the diagram where there are many more men than women finishing the race in, say, less than 3 hours. So, does this mean that although at faster race times there are many more men than women, at slow race times there are just as many women as men?

Well, yes and no. In absolute terms, yes: there are pretty much the same number of men as women completing the race with a time of around 6 hours. But… this ignores the fact that there are actually many more men than women overall – one of the other graphics on the page from which I copied the histograms states that the male:female split in the database is 61.8% to 31.2%. So, although the absolute number of men’s race times is similar to that of women’s, the proportion of male runners it represents is considerably lower than the corresponding proportion of female runners.

Arguably, comparing histograms of counts gives a misleading representation of the data. It makes it look as though men and women are equally likely to have a race time of around 6 hours. The counts really are similar, but only because many more men than women run the marathon. The proportion of men completing the race with a time of around 6 hours is considerably smaller than the corresponding proportion of women.

The same principle holds at all race times but is less of an issue when interpreting the graph. For example, the difference in proportions of men and women having a race time of around 4 hours is smaller than that of the actual frequencies in the histograms above, but it is still a big difference. It’s really where the absolute frequencies are similar that the picture above can be misleading.

In summary: there is a choice when drawing histograms of using absolute or relative frequencies (or counts and percentages). When looking at a single histogram it makes little difference – the shape of the histogram will be identical in both cases. When comparing two or more sets of results, histograms based on relative frequencies are generally easier to interpret. But in any case, when interpreting any statistical diagram, always look at the fine detail provided in the descriptions on the axes so as to be sure what you’re looking at.
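To see the difference in code, here’s a sketch with simulated race times – the numbers are made up purely to illustrate the normalisation, not to match the real database:

```python
# Absolute vs relative frequencies for two groups of very different
# sizes. Race times here are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(0)
men = rng.normal(4.3, 0.9, 6000)     # finish times in hours, large group
women = rng.normal(4.7, 0.9, 3000)   # a much smaller group

bins = np.linspace(2.0, 8.0, 25)
men_abs, _ = np.histogram(men, bins)
women_abs, _ = np.histogram(women, bins)

# Relative frequencies put both groups on the same footing: each
# histogram now sums to 1 regardless of how big the group is.
men_rel = men_abs / men_abs.sum()
women_rel = women_abs / women_abs.sum()
```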


Footnote:

Some general discussion and advice on drawing histograms can be found here.

It’s official: Brits get drunk more often than anywhere else in the WORLD

A while back the Global Drug Survey (GDS) produced its annual report. Here are some of the newspaper headlines following its publication:

It’s official: Brits get drunk more often than anywhere else in the WORLD. (The Mirror)

Britons get drunk more often than 35 other nations, survey finds. (The Guardian)

Brits are world’s biggest boozers and we get hammered once a week, study says. (The Sun)

And reading some of these articles in detail we find:

  • Of the 31 countries included in the study, Britons get drunk most regularly (51.1 times per year, on average).
  • Britain has the highest rate of cocaine usage (74% of participants in the survey say they have used it at some point).
  • 64% of English participants in the survey claim to have used cocaine in the last year.

Really? On average Brits are getting drunk once a week? And 64% of the population have used cocaine in the last year? 64%!

Prof Adam Winstock, founder of the survey, summarises things thus:

In the UK we don’t tend to do moderation, we end up getting drunk as the point of the evening.

At which point it’s important to take a step back and understand how the GDS works. If you want a snapshot of a population as a whole, you have to sample in such a way that every person in the population is equally likely to be sampled. Or at least ensure by some other mechanism that the sample is truly representative of the population. But the Global Drug Survey is different: it’s an online survey targeted at people whose demographics coincide with people who are more likely to be regular drinkers and/or drug users.

Consequently, it’s safe to conclude that the Brits who chose to take this survey are likely to get drunk more often than people from other countries who also completed the survey. And that 64% of British participants in the survey have used cocaine in the last year. But since this sample is neither random nor designed to be representative, it really tells us nothing about the population as a whole. And even comparisons of the respondents across countries should be treated cautiously: perhaps the differences are not due to variations in drink/drug usage but instead due to variations in the composition of the survey respondents across countries.

Here’s what the GDS say themselves about this…

Don’t look to GDS for national estimates. GDS is designed to answer comparison questions that are not dependent on probability samples. The GDS database is huge, but its non-probability sample means analyses are best suited to highlight differences among user populations. GDS recruits younger, more experienced drug using populations. We spot emerging drugs trends before they enter into the general population.

In other words, by design the survey samples people who are more likely to drink regularly or to have used drugs, and the GDS itself therefore warns against the headline use of the numbers. It’s not that 64% of the UK population has used cocaine in the last year; it’s 64% of a self-selected group who are in a demographic that is more likely to have used cocaine and who responded to an online survey.
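The mechanism is easy to simulate. Every number below is invented purely for illustration – a population where 10% are heavy drinkers, surveyed in a way that heavy drinkers are nine times more likely to respond to:

```python
# Simulating a self-selecting survey with invented numbers.
import random

rng = random.Random(42)
N = 100_000

# In the full population, suppose 10% are "regular heavy drinkers".
population = [rng.random() < 0.10 for _ in range(N)]

# Heavy drinkers respond to the survey with probability 0.27,
# everyone else with probability 0.03 (a nine-fold bias).
sample = [heavy for heavy in population
          if rng.random() < (0.27 if heavy else 0.03)]

survey_rate = sum(sample) / len(sample)   # heavy-drinker share in sample
true_rate = sum(population) / N           # heavy-drinker share overall
# survey_rate comes out several times higher than true_rate
```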

To emphasise this point the GDS information page identifies the following summary characteristics of respondents to the survey:

  • a 2:1 ratio of male:female;
  • 60% of participants with at least a university degree;
  • an average age of 25 years;
  • more than 50% of participants reporting to have regular involvement in nightlife and clubbing.

Clearly these characteristics are quite different from those of the population as a whole and, as intended by the study, orientated towards people that are more likely to have a drinking or drug habit. At which point the newspaper headlines become much less surprising.

Now, there’s nothing wrong with carrying out surveys in this way. If you’re interested in attitudes and behaviours among drinkers and drug users, there’s not much point in wasting time on people who indulge in neither. But… what you get out of this is a snapshot of people whose characteristics match those of the survey respondents, not of the population as a whole. And sure, this is all spelt out very clearly in the GDS report itself, but that doesn’t stop the tabloids (and even the Guardian) from headlines that make it seem like Britain is drink/drug capital of the world.

In summary:

  • You can extrapolate the results of a sample to a wider population only if the sample is genuinely representative of the whole population;
  • The best way of ensuring this is random sampling, in which every member of the population is equally likely to be included in the sample;
  • The media aren’t going to let niceties of this type get in the way of a good headline, so you need to be extremely wary when reading media reports based on statistical surveys.

What seems to be a more scientific approach to studies in the variation of alcohol consumption across countries is available here. On this basis, at least in 2014, average alcohol consumption in the UK was considerably lower than that in, say, France or Germany. That’s not to say Brits got drunk less: it might still be that a proportion of people drink excessively – to the point of getting drunk – while the overall average remains relatively low.

However, if you look down the page there’s this graph…

…which can be interpreted as giving the proportion of each country’s population – admittedly in 2010 – who had at least one heavy night out in a period of 30 days. France and the UK are pretty much level on this basis, and not particularly extreme. Lithuania seems to be the most excessive European country in these terms, while king of the world is apparently Madagascar, where 64.8% of the population reported a heavy drinking session over the 30 day period. So…

It’s official: Madagascans get drunk more often than anywhere else in the WORLD

No human is limited

Do you run a bit? If so, chances are you can run 100 metres in 17 seconds. Which puts you in the same class as the Kenyan marathon runner Eliud Kipchoge.

Just one small catch: you have to keep that pace going for 2 hours.
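The arithmetic behind the comparison:

```python
# A 2-hour marathon (42,195 m) as a 100 m pace and an average speed.
marathon_m = 42195
race_s = 2 * 60 * 60                    # two hours in seconds

pace_100m = 100 * race_s / marathon_m   # about 17.1 seconds per 100 m
speed_kmh = marathon_m / 1000 / 2       # about 21.1 km/h
```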

In an earlier post I discussed how Kipchoge had made an attempt at a sub-2-hour marathon in Monza, Italy, but failed. Just. Well, as you probably know, this weekend he successfully repeated the attempt in Vienna, beating the 2-hour milestone by almost 20 seconds.

The theme of that earlier post was whether Statistics could be used to predict ultimate performance times: what is the fastest time possible for any human to run 26.2 miles? There must be some limit, but can we use data to predict what it will be? I included this graph in the previous post to make the point:

This graphic is actually unchanged despite Kipchoge’s Vienna run because, as in Italy, the standard IAAF conditions were not met. In particular:

  1. Kipchoge was supported by a rotating team of 41 pace runners who, as well as setting the pace, formed an effective windshield;
  2. A pace car equipped with a laser beam was used to point to the ideal running point for Kipchoge on the road.

So, we can’t add Kipchoge’s 1:59:40 to the graphic. But, his race time demonstrates that 2 hours is not a physical barrier, and one might guess that it’s just a matter of time before a 2-hour marathon is achieved under official IAAF conditions. Probably by Kipchoge.

Other things were also designed to maximise Kipchoge’s performance:

  1. The race circuit was completely flat;
  2. Kipchoge was wearing specially designed shoes (provided by Nike) that are estimated to improve his running economy by 7-8%.
  3. His drinks were provided by a support team on bicycles to avoid him having to slow down to collect refreshments.
  4. The event was sponsored by Ineos, a multibillion dollar chemical company (with a dodgy environmental record.)

Nonetheless: what an astonishing achievement!

Undoubtedly there is a limit to what’s humanly possible for a marathon race time, but records will almost certainly continue to be broken as the limit is approached in smaller and smaller increments. However, as discussed in the original post, Statistics is unlikely to provide accurate answers to what that limit will be. An analysis of the available data in 1980 would most likely have suggested an ultimate limit somewhere above 2 hours. But seeing the more recent data, and knowing what happened at the weekend, it seems likely that this threshold will eventually be broken in an official race.

This is a bit misleading though. What we’ve discussed so far is extrapolating the data in the graph above without taking their context into account. Yet the data do have a context, and this suggests that, above and beyond improvements in training regimes and running equipment, the ultimate limit will be determined by the boundaries of human physiology. And this implies that biological and physical rules will apply. Indeed, research published in 1985 suggested an absolute limit for the marathon of 1:57:58. This research comprised a statistical analysis combined with models of human oxygen consumption for energy conversion. Who knows whether this prediction will stand the test of time, but the fact that it is based on an analysis which combines Statistics with the relevant science suggests that it is more reliable than an extrapolation based solely on abstract numbers.


Footnote 1:

An article in the Observer on Sunday described Kipchoge’s Vienna run in a similar context, discussing the limits there might be on human sporting achievements. It also listed a number of long-standing sporting records, including Paula Radcliffe’s women’s marathon record of 2:15:25, set in 2003. By Sunday afternoon that record had been smashed, by a margin of 81 seconds, by the Kenyan runner Brigid Kosgei.


Footnote 2:

For most people running marathons, the 2-hour threshold is, let’s say, not especially relevant. Some general statistics on marathon performance from a database of more than 3 million runners are available here.

It includes the following histogram of race times, which I found interesting. Actually it’s 2 histograms: one in blue (for women) superimposed on one in red (for men).

Both histograms have unusual shapes which seem to tell us something about marathon runners. Can you explain what?

I’ll update this post with my own thoughts in a week or so.

Magic

Here’s a statistical card trick. As I try to explain in the video, admittedly not very clearly, the rules of the trick are as follows:

  1. Matteo picks a card at random from the pack. This card is unknown to me.
  2. I shuffle the cards and turn them over one at a time.
  3. As I turn the cards over, Matteo counts them in his head until he reaches that number in the sequence. As you’ll see, his card was a 5, so he counts the cards until he reaches the 5th one.
  4. He then repeats that process, starting with the value of the 5th card, which happened to be a 10. So, he counts – again silently – a further 10 cards. He remembers the value of that card, and counts again that many cards.
  5. And so on until we run out of cards.
  6. (Picture cards count as 10.)
  7. Matteo has to remember the last card in his sequence before all of the cards run out.
  8. And I – the magician – have to predict what that card was.
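The counting procedure in the rules above can be sketched as a short simulation. This is just an illustration of the rules as stated, with a made-up shuffled deck, not the actual deck from the video:

```python
import random

def card_value(rank):
    # Picture cards (Jack=11, Queen=12, King=13) count as 10, per rule 6.
    return min(rank, 10)

def last_card_in_chain(deck, start):
    """Follow the counting rules: begin at position `start` (1-based, given
    by the value of the chosen card), then repeatedly jump forward by the
    value of the current card; return the last card reached before the
    deck runs out."""
    pos = start - 1
    last = deck[pos]
    while pos + card_value(last) < len(deck):
        pos += card_value(last)
        last = deck[pos]
    return last

# A standard 52-card deck of ranks 1 (Ace) to 13 (King), shuffled.
deck = [rank for rank in range(1, 14) for _ in range(4)]
random.shuffle(deck)

# Matteo's card was a 5, so he starts counting at the 5th card turned over.
print(last_card_in_chain(deck, start=5))
```

Running this a few times with different starting cards, and comparing the results, might give you a clue about how the trick works.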

Now take a look at the video….

How did I do it? And what’s it got to do with Statistics? I’ll explain in a future post, but as usual if you’d like to write to me with your ideas I’ll be very happy to hear from you.

Not so clever

You remember that thing about well-produced statistical diagrams telling their own story without the need for additional words?

Well, the same thing goes for badly produced statistical diagrams:


Thanks to Luigi.Colombo@Smartodds.co.uk for giving me this idea for a post.

No smoke without fire

No one now seriously doubts that cigarette smoking increases your risk of lung cancer and many other diseases, but when the evidence for a relationship between smoking and cancer was first presented in the 1950s, it was strongly challenged by the tobacco industry.

The history of the scientific fight to demonstrate the harmful effects of smoking is summarised in this article. One difficulty from a statistical point of view was that the primary evidence based on retrospective studies was shaky, because smokers tend to give unreliable reports on how much they smoke. Smokers with illnesses tend to overstate how much they smoke; those who are healthy tend to understate their cigarette consumption. And these two effects lead to misleading analyses of historically collected data.

An additional problem was the difficulty of establishing causal relationships from statistical associations. Similar to the examples in a previous post, just because there’s a correlation between smoking and cancer, it doesn’t necessarily mean that smoking is a risk factor for cancer. Indeed, one of the most prominent statisticians of the time – actually of any time – Sir Ronald Fisher, wrote various scientific articles explaining how the correlations observed between smoking and cancer rates could easily be explained by the presence of lurking variables that induce spurious correlations.

At which point it’s worth noting a couple more ‘coincidences’: Fisher was a heavy smoker himself and also an advisor to the Tobacco Manufacturers Standing Committee. In other words, he wasn’t exactly neutral on the matter. But, he was a highly respected scientist, and therefore his scepticism carried considerable weight.

Eventually though, the sheer weight of evidence – including that from long-term prospective studies – was simply too overwhelming to be ignored, and governments fell into line with the scientific community in accepting that smoking is a high risk factor for various types of cancer.

An important milestone in that process was the work of another British statistician, Austin Bradford Hill. As well as being involved in several of the most prominent case studies linking cancer to smoking, he also developed a set of 9 (later extended to 10) criteria for establishing a causal relationship between processes. Though still only guidelines, they provided a framework that is still used today for determining whether associated processes include any causal relationships. And by these criteria, smoking was clearly shown to be a risk factor for cancer.

Now, fast-forward to today and there’s a similar debate about global warming:

  1. Is the planet genuinely heating up or is it just random variation in temperatures?
  2. If it’s heating up, is it a consequence of human activity, or just part of the natural evolution of the planet?
  3. And then what are the consequences for the various bio- and eco-systems living on it?

There are correlations all over the place – for example between CO2 emissions and average global temperatures as described in an earlier post – but could these possibly just be spurious and not indicative of any causal relationships? Certainly there are industries with vested interests who would like to shroud the arguments in doubt. Well, this nice article applies each of Bradford Hill’s criteria to various aspects of climate science data and establishes that the increases in global temperatures are undoubtedly caused by human activity leading to CO2 release into the atmosphere, and that many observable changes to biological and geographical systems are a knock-on effect of this relationship.

In summary: in the case of the planet, the smoke that we see (global warming) is definitely a consequence of the fire we started (the increased amounts of CO2 released into the atmosphere).