It’s official: Brits get drunk more often than anywhere else in the WORLD

A while back the Global Drug Survey (GDS) produced its annual report. Here are some of the newspaper headlines following its publication:

It’s official: Brits get drunk more often than anywhere else in the WORLD. (The Mirror)

Britons get drunk more often than 35 other nations, survey finds. (The Guardian)

Brits are world’s biggest boozers and we get hammered once a week, study says. (The Sun)

And reading some of these articles in detail we find:

  • Of the 31 countries included in the study, Britons get drunk most regularly (51.1 times per year, on average).
  • Britain has the highest rate of cocaine usage (74% of participants in the survey say they have used it at some point).
  • 64% of English participants in the survey claim to have used cocaine in the last year.

Really? On average Brits are getting drunk once a week? And 64% of the population have used cocaine in the last year? 64%!

Prof Adam Winstock, founder of the survey, summarises things thus:

In the UK we don’t tend to do moderation, we end up getting drunk as the point of the evening.

At which point it’s important to take a step back and understand how the GDS works. If you want a snapshot of a population as a whole, you have to sample in such a way that every person in the population is equally likely to be sampled – or at least ensure by some other mechanism that the sample is truly representative of the population. But the Global Drug Survey is different: it’s an online survey targeted at people whose demographics coincide with those of people who are more likely to be regular drinkers and/or drug users.

Consequently, it’s safe to conclude that the Brits who chose to take this survey are likely to get drunk more often than people from other countries who also completed the survey, and that 64% of the British participants in the survey used cocaine in the last year. But since this sample is neither random nor designed to be representative, it really tells us nothing about the population as a whole. And even comparisons of the respondents across countries should be treated cautiously: perhaps the differences are due not to variations in drink/drug usage but to variations in the composition of the survey respondents across countries.
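To see just how much damage self-selection can do, here’s a minimal simulation sketch – with entirely made-up numbers – of a survey that heavy drinkers are more likely to answer than everyone else:

```python
import random

random.seed(1)

# A made-up population: 10% are heavy drinkers (drunk 50 times a year),
# 90% are not (drunk 5 times a year). True average: 9.5 times a year.
population = [50] * 10_000 + [5] * 90_000

# A genuinely random sample estimates the population average well...
random_sample = random.sample(population, 1_000)
print(sum(random_sample) / len(random_sample))   # close to 9.5

# ...but if heavy drinkers are, say, 20 times more likely to respond,
# the survey average mostly reflects who answered, not the population
weights = [20 if x == 50 else 1 for x in population]
biased_sample = random.choices(population, weights=weights, k=1_000)
print(sum(biased_sample) / len(biased_sample))   # around 36 - almost 4 times too high
```

The numbers are invented, but the moral isn’t: the average from a self-selecting survey tells you about the respondents, not the population.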

Here’s what the GDS say themselves about this…

Don’t look to GDS for national estimates. GDS is designed to answer comparison questions that are not dependent on probability samples. The GDS database is huge, but its non-probability sample means analyses are best suited to highlight differences among user populations. GDS recruits younger, more experienced drug using populations. We spot emerging drugs trends before they enter into the general population.

In other words, by design the survey samples people who are more likely to drink regularly or to have used drugs, and the GDS itself therefore warns against the headline use of the numbers. It’s not that 64% of the UK population has used cocaine in the last year; it’s that 64% of a self-selected group – drawn from a demographic that is more likely to have used cocaine – responded that way to an online survey.

To emphasise this point the GDS information page identifies the following summary characteristics of respondents to the survey:

  • a 2:1 ratio of male:female;
  • 60% of participants with at least a university degree;
  • an average age of 25 years;
  • more than 50% of participants reporting to have regular involvement in nightlife and clubbing.

Clearly these characteristics are quite different from those of the population as a whole and, as intended by the study, orientated towards people who are more likely to have a drinking or drug habit. At which point the newspaper headlines become much less surprising.

Now, there’s nothing wrong with carrying out surveys in this way. If you’re interested in attitudes and behaviours among drinkers and drug users, there’s not much point in wasting time on people who indulge in neither. But… what you get out of this is a snapshot of people whose characteristics match those of the survey respondents, not of the population as a whole. And sure, this is all spelt out very clearly in the GDS report itself, but that doesn’t stop the tabloids (and even the Guardian) from running headlines that make it seem like Britain is the drink/drug capital of the world.

In summary:

  • You can extrapolate the results of a sample to a wider population only if the sample is genuinely representative of the whole population;
  • The best way of ensuring this is random sampling, where each member of the population is equally likely to be included in the sample;
  • The media aren’t going to let niceties of this type get in the way of a good headline, so you need to be extremely wary when reading media reports based on statistical surveys.

A more scientific approach to studying the variation in alcohol consumption across countries is available here. On this basis, at least in 2014, average alcohol consumption in the UK was considerably lower than that in, say, France or Germany. That’s not to say Brits got drunk less often: it might still be that a proportion of people drink excessively – to the point of getting drunk – while the overall average remains relatively low.

However, if you look down the page there’s this graph…

…which can be interpreted as giving the proportion of each country’s population – admittedly in 2010 – who had at least one heavy night out in a period of 30 days. France and the UK are pretty much level on this basis, and not particularly extreme. Lithuania seems to be the most excessive European country in these terms, while king of the world is apparently Madagascar, where 64.8% of the population reported a heavy drinking session over the 30-day period. So…

It’s official: Madagascans get drunk more often than anywhere else in the WORLD

No human is limited

Do you run a bit? If so, chances are you can run 100 metres in 17 seconds. Which puts you in the same class as the Kenyan marathon runner Eliud Kipchoge.

Just one small catch: you have to keep that pace going for 2 hours.

In an earlier post I discussed how Kipchoge had made an attempt at a sub-2-hour marathon in Monza, Italy, but failed. Just. Well, as you probably know, this weekend he successfully repeated the attempt in Vienna, beating the 2-hour milestone by almost 20 seconds.

The theme of that earlier post was whether Statistics could be used to predict ultimate performance times: what is the fastest time possible for any human to run 26.2 miles? There must be some limit, but can we use data to predict what it will be? I included this graph in the previous post to make the point:

This graphic is actually unchanged despite Kipchoge’s Vienna run because, as in Italy, the standard IAAF conditions were not met. In particular:

  1. Kipchoge was supported by a rotating team of 41 pace runners who, as well as setting the pace, formed an effective windshield;
  2. A pace car equipped with a laser beam was used to point to the ideal running point for Kipchoge on the road.

So, we can’t add Kipchoge’s 1:59:40 to the graphic. But, his race time demonstrates that 2 hours is not a physical barrier, and one might guess that it’s just a matter of time before a 2-hour marathon is achieved under official IAAF conditions. Probably by Kipchoge.

Other things were also designed to maximise Kipchoge’s performance:

  1. The race circuit was completely flat;
  2. Kipchoge was wearing specially designed shoes (provided by Nike) that are estimated to improve his running economy by 7-8%;
  3. His drinks were provided by a support team on bicycles to avoid him having to slow down to collect refreshments;
  4. The event was sponsored by Ineos, a multibillion-dollar chemical company (with a dodgy environmental record).

Nonetheless: what an astonishing achievement!

Undoubtedly there is a limit to what’s humanly possible for a marathon race time, but records will almost certainly continue to be broken as the limit is approached in smaller and smaller increments. However, as discussed in the original post, Statistics is unlikely to provide accurate answers as to what that limit will be. An analysis of the available data in 1980 would most likely have suggested an ultimate limit somewhere above 2 hours. But seeing the more recent data, and knowing what happened at the weekend, it seems likely that this threshold will eventually be broken in an official race.

This is a bit misleading though. What we’ve discussed so far is extrapolating the data in the graph above without taking their context into account. Yet the data do have a context, and this suggests that, above and beyond improvements in training regimes and running equipment, the ultimate limit will be determined by the boundaries of human physiology. And this implies that biological and physical rules will apply. Indeed, research published in 1985 suggested an absolute limit for the marathon of 1:57:58. This research comprised a statistical analysis combined with models of human oxygen consumption rates for energy conversion. Who knows whether this prediction will stand the test of time, but the fact that it is based on an analysis which combines Statistics with the relevant Science suggests that it is more reliable than an extrapolation based solely on abstract numbers.
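For what it’s worth, here’s a sketch of what the purely statistical approach looks like in practice: fit a curve that decays towards an unknown asymptote, and read off the asymptote as the ‘ultimate limit’. Both the model and the ‘record times’ below are invented purely for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

# Model: record times decay towards an unknown ultimate limit L
def record_model(year, L, a, b):
    return L + a * np.exp(-b * (year - 1950))

# Synthetic 'world record' times in minutes, invented for this illustration
years = np.array([1950, 1960, 1970, 1980, 1990, 2000, 2010, 2020])
times = np.array([140.0, 134.0, 130.0, 127.4, 125.6, 124.4, 123.6, 123.1])

params, _ = curve_fit(record_model, years, times, p0=[120, 20, 0.03])
print(f"estimated ultimate limit: {params[0]:.1f} minutes")  # close to 122 here
```

With real data the fitted limit turns out to be highly sensitive to the assumed functional form and to which stretch of history you happen to observe – which is exactly why an analysis anchored in physiology is more trustworthy than the bare extrapolation.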


Footnote 1:

An article in the Observer on Sunday described Kipchoge’s Vienna run in a similar context, discussing the limits that there might be on human sporting achievements. It also listed a number of long-standing sporting records, including Paula Radcliffe’s record women’s marathon time of 2:15:25, set in 2003. By Sunday afternoon that record had been smashed, by a margin of 81 seconds, by the Kenyan runner Brigid Kosgei.


Footnote 2:

For most people running marathons, the 2-hour threshold is, let’s say, not especially relevant. Some general statistics on marathon performance, from a database of more than 3 million runners, are available here.

It includes the following histogram of race times, which I found interesting. Actually it’s 2 histograms, one in blue (for women) superimposed on one in red (for men).

Both histograms have unusual shapes which seem to tell us something about marathon runners. Can you explain what?

I’ll update this post with my own thoughts in a week or so.

Magic

Here’s a statistical card trick. As I try to explain in the video, admittedly not very clearly, the rules of the trick are as follows:

  1. Matteo picks a card at random from the pack. This card is unknown to me.
  2. I shuffle the cards and turn them over one at a time.
  3. As I turn the cards over, Matteo counts them in his head until he reaches that number in the sequence. As you’ll see, his card was a 5, so he counts the cards until he reaches the 5th one.
  4. He then repeats that process, starting with the value of the 5th card, which happened to be a 10. So, he counts – again silently – a further 10 cards. He remembers the value of that card, and counts again that many cards.
  5. And so on until we run out of cards.
  6. (Picture cards count as 10.)
  7. Matteo has to remember the last card in his sequence before all of the cards run out.
  8. And I – the magician – have to predict what that card was.

Now take a look at the video….

How did I do it? And what’s it got to do with Statistics? I’ll explain in a future post, but as usual if you’d like to write to me with your ideas I’ll be very happy to hear from you.
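While you’re thinking about it, here’s one plausible mechanism – and I should stress I’m speculating ahead of the promised explanation – based on the classic ‘Kruskal count’: if the magician secretly follows a counting chain of their own, starting from the first card, the two chains tend to collide at some point and then stay locked together. A quick simulation sketch:

```python
import random

def final_card(deck, start):
    """Follow the counting chain from the card in position `start`
    (1-based) and return the index of the last card reached."""
    pos = start - 1
    while pos + deck[pos] < len(deck):
        pos += deck[pos]
    return pos

random.seed(0)
trials, wins = 10_000, 0
for _ in range(trials):
    # Values 1-13 in four suits, with picture cards counting as 10
    deck = [min(v, 10) for v in list(range(1, 14)) * 4]
    random.shuffle(deck)
    # Approximate the spectator's secret starting value as uniform on 1-10
    spectator = final_card(deck, random.randint(1, 10))
    magician = final_card(deck, 1)  # the magician always starts at card 1
    wins += (spectator == magician)

print(f"chains end on the same card in {100 * wins / trials:.0f}% of shuffles")
```

If that is indeed the trick, it doesn’t work every single time – which is perhaps where the Statistics comes in.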

Not so clever

You remember that thing about well-produced statistical diagrams telling their own story without the need for additional words?

Well, the same thing goes for badly produced statistical diagrams:


Thanks to Luigi.Colombo@Smartodds.co.uk for giving me this idea for a post.

No smoke without fire

No one now seriously doubts that cigarette smoking increases your risk of lung cancer and many other diseases, but when the evidence for a relationship between smoking and cancer was first presented in the 1950s, it was strongly challenged by the tobacco industry.

The history of the scientific fight to demonstrate the harmful effects of smoking is summarised in this article. One difficulty from a statistical point of view was that the primary evidence based on retrospective studies was shaky, because smokers tend to give unreliable reports on how much they smoke. Smokers with illnesses tend to overstate how much they smoke; those who are healthy tend to understate their cigarette consumption. And these two effects lead to misleading analyses of historically collected data.

An additional problem was the difficulty of establishing causal relationships from statistical associations. Similar to the examples in a previous post, just because there’s a correlation between smoking and cancer, it doesn’t necessarily mean that smoking is a risk factor for cancer. Indeed, one of the most prominent statisticians of the time – actually of any time – Sir Ronald Fisher, wrote various scientific articles explaining how the correlations observed between smoking and cancer rates could easily be explained by the presence of lurking variables that induce spurious correlations.

At which point it’s worth noting a couple more ‘coincidences’: Fisher was a heavy smoker himself and also an advisor to the Tobacco Manufacturers Standing Committee. In other words, he wasn’t exactly neutral on the matter. But, he was a highly respected scientist, and therefore his scepticism carried considerable weight.

Eventually though, the sheer weight of evidence – including that from long-term prospective studies – was simply too overwhelming to be ignored, and governments fell into line with the scientific community in accepting that smoking is a high risk factor for various types of cancer.

An important milestone in that process was the work of another British statistician, Austin Bradford Hill. As well as being involved in several of the most prominent case studies linking cancer to smoking, he also developed a set of 9 (later extended to 10) criteria for establishing a causal relationship between processes. Though still only guidelines, they provide a framework that is used to this day for determining whether associated processes include any causal relationships. And by these criteria, smoking was clearly shown to be a risk factor for cancer.

Now, fast-forward to today and there’s a similar debate about global warming:

  1. Is the planet genuinely heating up or is it just random variation in temperatures?
  2. If it’s heating up, is it a consequence of human activity, or just part of the natural evolution of the planet?
  3. And then what are the consequences for the various bio- and eco-systems living on it?

There are correlations all over the place – for example between CO2 emissions and average global temperatures as described in an earlier post – but could these possibly just be spurious and not indicative of any causal relationships?  Certainly there are industries with vested interests who would like to shroud the arguments in doubt. Well, this nice article applies each of Bradford Hill’s criteria to various aspects of climate science data and establishes that the increases in global temperatures are undoubtedly caused by human activity leading to CO2 release in the atmosphere, and that many observable changes to biological and geographical systems are a knock-on effect of this relationship.

In summary: in the case of the planet, the smoke that we see (global warming) is definitely a consequence of the fire we started (the increased amounts of CO2 released into the atmosphere).

Massively increase your bonus

In one of the earliest posts to the blog last year I set a puzzle where I suggested Smartodds were offering employees the chance of increasing their bonus, and you had to decide whether it was in their interests to accept the offer or not.

(They weren’t, and they still aren’t, but let’s play along.)

Same thing this year, but the rules are different. Eligible employees are invited to gamble their bonus at odds of 10-1 based on the outcome of a game. It works like this…

For argument’s sake, let’s suppose there are 100 employees that are entitled to a bonus. They are told they each have the opportunity to increase their bonus by a factor of 10 by playing the following game:

  • Each of the 100 employees is randomly assigned a different number between 1 and 100.
  • Inside a room there are 100 boxes, also labelled 1 to 100.
  • 100 cards, numbered individually from 1 to 100, have been randomly placed inside the boxes, so each numbered box contains a card with a unique random number from 1 to 100. For example, box number 1 might contain the card with number 62; box number 2 might contain the card with number 25; and so on.
  • The employees enter the room one at a time, and each can choose any 50 of the boxes to open. If they find the card with their own number in one of those boxes, they win. Otherwise they lose.
  • Though the employees may discuss the game and decide how they will play before they enter the room, they must not convey any information to the other employees after taking their turn.
  • The employees cannot rearrange any of the boxes or the cards – so everyone finds the room in the same state when they enter.
  • The employees will have their bonus multiplied by 10 if all 100 of them are winners. If there is a single loser, they all end up with zero bonus.

Should the employees accept this game, or should they refuse it and keep their original bonuses? And if they accept to play, should they adopt any particular strategy for playing the game?

Give it some thought and then scroll down for some discussion.

|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|

A good place to start is to calculate the probability that any one employee is a winner. This happens if one of the 50 boxes they open, out of the 100 available, contains the card with their number. Each box is equally likely to contain their number, so you can easily write down the probability that they win. Scroll down again for the answer to this part:

|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|

There are 100 boxes, and the employee selects 50 of them. Each box is equally likely to contain their number, so the probability they find their number in one of the boxes is 50/100 or 1/2.

So that’s the probability that any one employee wins. We now need to calculate the probability that they all win – bearing in mind the rules of the game – and then decide whether the bet is worth taking.
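Before doing that, it’s worth checking the most naive approach, in which each employee simply opens 50 boxes at random, independently of everyone else. A minimal sketch of the arithmetic, including the break-even point for the gamble:

```python
from fractions import Fraction

p_one = Fraction(1, 2)       # each employee wins with probability 1/2
p_all = p_one ** 100         # all 100 win, if their choices are independent

print(float(p_all))          # about 7.9e-31

# The gamble multiplies every bonus by 10, so in expectation it is only
# worth playing if the probability that all 100 win exceeds 1/10
print(float(p_all) > 0.1)    # False: hopeless, played naively
```

So, played naively, the game is catastrophically bad. The interesting question is whether agreeing a shared strategy in advance can change that picture.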

In summary:

  • There are 100 employees;
  • The probability that any one employee wins their game is 1/2;
  • If they all win, their bonuses will all be multiplied by 10;
  • If any one of them loses, they all get zero bonus.

Should the employees choose to play or to keep their original bonus? And if they play, is there any particular strategy they should adopt?

If you’d like to send me your answers I’d be really happy to hear from you. If you prefer just to send me a yes/no answer, perhaps just based on your own intuition, I’d be equally happy to get your response, and you can use this form to send the answer in that case.


This is a variant on a puzzle pointed out to me by Fabian.Thut@smartodds.co.uk. I think it’s a little more tricky than previous puzzles I’ve posted, but it illustrates a specific important statistical issue that I’ll discuss when giving the solution.

Cause and effect


If you don’t see why this cartoon is funny, hopefully you will by the end of this post.

The following graph shows the volume of crude oil imports from Norway to the US and the number of drivers killed in collisions with trains, each per year:

There is clearly a very strong similarity between the two graphs. The standard way of measuring the strength of association between two series is the correlation coefficient. If the two series were completely unrelated, the correlation coefficient would be zero; if they moved perfectly in sync, it would be 1. For the two series in the graph the correlation coefficient is 0.95, which is pretty close to 1. So, you’d conclude that crude oil imports and deaths due to train collisions are strongly associated with one another – as one goes up, so does the other, and vice versa.

But this is crazy. How can oil imports and train deaths possibly be related?

This is just one of a number of examples of spurious correlations kindly sent to me by Olga.Turetskaya@smartodds.co.uk. Other examples there include:

  1. The number of deaths by drowning and the number of films Nicolas Cage has appeared in;
  2. Cheese consumption and the number of deaths by entanglement in bedsheets;
  3. Divorce rates and consumption of margarine.

In each case, like in the example above, the correlation coefficient is very close to 1. But equally in each case, it’s absurd to think that there could be any genuine connection between the processes, regardless of what statistics might say. So, what’s going on? Is Statistics wrong?

No, Statistics is never wrong, but the way it’s used and interpreted often is.

There are two possible explanations for spurious correlations like the one observed above:

  1. The processes might be genuinely correlated, but not due to any causal relationship. Instead, both might be affected in the same way by some other variable lurking in the background – there’s a sketch of this effect just after this list. Wikipedia gives the example of ice cream sales being correlated with deaths by drowning. Neither causes the other, but they tend to be both high or both low at the same time – and therefore correlated – because each increases in periods of very hot weather.
  2. The processes might not be correlated at all, but just by chance due to random variation in the data, they look correlated. This is unlikely to happen with just a single pair of processes, but if we scan through enough possible pairs, some are bound to.
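The first of these effects is easy to reproduce. Here’s a minimal sketch with two invented series that have nothing to do with each other, except that both drift downwards over the same span of years:

```python
import numpy as np

rng = np.random.default_rng(42)

# Two invented series with no connection to each other, except that
# both happen to drift downwards over the same 15-year span
years = np.arange(15)
series_a = 100 - 4 * years + rng.normal(0, 3, size=15)
series_b = 80 - 3 * years + rng.normal(0, 3, size=15)

r = np.corrcoef(series_a, series_b)[0, 1]  # Pearson correlation coefficient
print(f"correlation: {r:.2f}")             # typically 0.9 or above
```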

Most likely, for series like the ones shown in the graph above, there’s a bit of both of these effects in play. Crude oil imports and deaths by train collisions have probably both diminished over time for completely unrelated reasons. This is the first of those effects, where time is the lurking variable having a similar effect on both oil imports and train collisions. But on top of that, the random-looking oscillations in the curves, which occur at around the same times in each series, are probably just chance coincidences. Most series that are uncorrelated won’t share such random-looking variations, but every so often they will, just by chance. And the processes shown in the graph above might be the one pair, out of the thousands that have been examined, which has this unusual similarity just by chance.
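The second effect is just as easy to demonstrate: generate lots of mutually independent random walks, scan every pair, and see how correlated the best-matching pair looks. A sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1000 independent random walks of 20 steps: no pair is genuinely related
walks = rng.normal(size=(1000, 20)).cumsum(axis=1)

# Scan every pair, recording the largest correlation found
best = 0.0
for i in range(len(walks) - 1):
    corr = np.corrcoef(walks[i], walks[i + 1:])[0, 1:]
    best = max(best, float(np.abs(corr).max()))

# With roughly 500,000 pairs to choose from, the best match is
# typically extremely close to 1, despite no relationship at all
print(f"largest |correlation| among unrelated pairs: {best:.2f}")
```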

So, for both these reasons, correlation between variables doesn’t establish a causal relationship. And that’s why the cartoon above is funny. But if we can’t use correlation to establish whether a relationship is causal or not, what can we use?

We’ll discuss this in a future post.


Meantime, just in case you haven’t had your fill of spurious correlations, you can either get a whole book full of them at Amazon or use this page to explore many other possible examples.

Killfie

I recently read that more than 250 people died between 2011 and 2017 taking selfies (so-called killfies). A Wikipedia entry gives a list of some of these deaths, as well as injuries, and categorises the fatalities as due to the following causes:

  • Transport
  • Electrocution
  • Fall
  • Firearm
  • Drowned
  • Animal
  • Other

If you have a macabre sense of humour it makes for entertaining reading while also providing you with useful life tips: for example, don’t take selfies with a walrus.

More detail on some of these incidents can also be found here.

Meanwhile, this article includes the following statistically-based advice:

Humanity is actually very susceptible to selfie death. Soon, you will be more likely to die taking a selfie than you are getting attacked by a shark. That’s not me talking: that’s statistical likelihood. Stay off Instagram and stay alive

Yes, worry less about sharks, but a bit more about Instagram. Thanks Statistics.

The original academic article which identified the more than 250 selfie deaths is available here. It actually contains some interesting statistics:

  • Men are more susceptible to death-by-selfie than women, even though women take more selfies;
  • Most deaths occur in the 20-29 age group;
  • Men were more likely to die taking high-risk selfies than women;
  • Most selfie deaths due to firearms occurred in the United States;
  • The highest number of selfie deaths is in India.

None of these conclusions seems especially surprising to me, except the last one. Why India? Have a think yourself why that might be before scrolling down:

|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|

There are various possible factors. Maybe it’s because the population in India is so high. Maybe people just take more selfies in India. Maybe the environment there is more dangerous. Maybe India has a culture for high risk-taking. Maybe it’s a combination of these things.

Or maybe… if you look at the academic paper I referred to above, the authors are based at Indian academic institutes and describe their methodology as follows:

We performed a comprehensive search for keywords such as “selfie deaths; selfie accidents; selfie mortality; self photography deaths; koolfie deaths; mobile death/accidents” from news reports to gather information regarding selfie deaths.

I have no reason to doubt the integrity of these scientists, but it’s easy to imagine that their knowledge of where to look in the media for reported selfie deaths was more complete for Indian sources than for those of other countries. In which case, they would introduce an unintentional bias in their results by accessing a disproportionate number of reports of deaths in India.


In conclusion: be sceptical about any statistical analysis. If the sampling is biased for any reason, the conclusions almost certainly will be as well.

Proof reading

In an earlier post I described what’s generally known as the Mutilated Chessboard Puzzle. It goes like this: a chessboard has 2 diagonally opposite corners removed. The challenge is to cover the remaining 62 squares with 31 dominoes, each of which can cover 2 adjacent horizontal or vertical squares. Or, to show that such a coverage is impossible.

Several of you wrote to me about this, in many cases providing the correct solution. Thanks and congratulations to all of you.

The correct solution is that it is impossible to cover the remaining squares of the chessboard this way. But what’s interesting about this puzzle – to me at least – is what it illustrates about mathematical proof.

There are essentially 2 ways to prove the impossibility of a domino coverage. One way would be to enumerate every possible configuration of the 31 dominoes, and to show that none of these configurations covers the 62 remaining squares on the chessboard. But this takes a lot of time – there are many different ways of laying the dominoes on the chessboard.

The alternative approach is to ‘step back’ and try to reason logically why such a configuration is impossible. This approach won’t always work, but it’s often short and elegant when it does. And with the mutilated chessboard puzzle, it works beautifully…

When you place a domino on a chessboard it will cover 1 black and 1 red square (using the colours in the diagram above). So, 31 dominoes will cover 31 black and 31 red squares. But if you remove diagonally opposite corners from a chessboard, they will be of the same colour, so you’re left with either 32 black squares and 30 red, or vice versa. But you’re never left with 31 squares of each colour, which is the only pattern possible with 31 dominoes. So it’s impossible and the result is proved. Simply and beautifully.
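If you like to see arguments like this made concrete, the colour-counting step takes only a few lines of code. This is just a restatement of the proof, not part of it: a square’s colour is the parity of its row plus its column, and 31 dominoes can only ever cover 31 squares of each colour:

```python
def colours_balance(removed):
    """Necessary condition for a tiling: after removing the given (row, col)
    squares from an 8x8 board, 31 squares of each colour must remain."""
    colours = [(r + c) % 2 for r, c in removed]
    return 32 - colours.count(0) == 32 - colours.count(1)

# Diagonally opposite corners share a colour, so the counts can't balance
print(colours_balance([(0, 0), (7, 7)]))  # False: tiling impossible
# Two adjacent squares have opposite colours and pass the necessary condition
print(colours_balance([(0, 0), (0, 1)]))  # True
```

Note that passing this check is only a necessary condition for a tiling to exist – whether it’s also sufficient is essentially the extension puzzle at the end of this post.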

As I mentioned in the previous post, the scientific writer Cathy O’Neil cites having been shown this puzzle by her father at a young age as the trigger for her lifelong passion for mathematics. And maybe, even if you don’t have a passion for mathematics yourself, you can at least see why the elegance of this proof might trigger someone’s love for mathematics in the way it did for Cathy.

Having said all that, computer technology now makes proof by enumeration possible in situations where the number of configurations to check might be very large. But structured mathematical thinking is still often necessary to determine the parameters of the search. A good example of this is the well-known four colour theorem. This states that if you take any region that’s been divided into sub-regions – like a map divided into countries – then you only need four colours to shade the map in such a way that no adjacent regions have the same colour.

Here’s an example from the Wiki post:

You can see that, despite the complexity of the sub-regions, only 4 colours were needed to achieve a colouring in which no two adjacent regions have the same colour.

But how would you prove that any map of this type would require at most 4 colours? Ideally, as with the mutilated chessboard puzzle, you’d like a ‘stand back’ proof, based on pure logic. But so far no one has been able to find one. Equally, enumeration of all possible maps is clearly impossible – any region can be divided into subregions in infinitely many ways.

Yet a proof has been found which is a kind of hybrid of the ‘stand back’ and ‘enumeration’ approaches. First, a deep understanding of mathematical graphs was used to reduce the infinitely many possible regions to a finite number – actually, around 2000 – of maps to consider. That’s to say, it was shown that it’s not necessary to consider all possible regional mappings: if a 4-colour shading could be found for a certain set of around 2000 different maps, this would be enough to prove that such a shading existed for all possible maps. Then a computer algorithm was developed to search for a 4-colour shading for each of the identified maps. Putting all of this together completed the proof that a 4-colour shading exists for any map, not just the ones included in the search.
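The ‘enumeration’ half of such a proof is conceptually simple, even though the real one needed serious computing power for its day. Here’s a toy sketch of the same idea – a backtracking search for a 4-colouring of a single map, described as an adjacency dictionary. The map itself is invented for illustration:

```python
def four_colour(adjacency):
    """Backtracking search for a 4-colouring of a map given as
    {region: [neighbouring regions]}. Returns a colouring or None."""
    regions = list(adjacency)
    colouring = {}

    def assign(i):
        if i == len(regions):
            return True
        for colour in range(4):
            if all(colouring.get(nb) != colour for nb in adjacency[regions[i]]):
                colouring[regions[i]] = colour
                if assign(i + 1):
                    return True
                del colouring[regions[i]]
        return False

    return colouring if assign(0) else None

# A small invented map of five regions and their borders
print(four_colour({
    'A': ['B', 'C', 'D'],
    'B': ['A', 'C', 'E'],
    'C': ['A', 'B', 'D', 'E'],
    'D': ['A', 'C', 'E'],
    'E': ['B', 'C', 'D'],
}))
```

The real proof ran a search of this flavour over each of the 2000ish critical maps identified by the graph-theoretic argument.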

Now, none of this is strictly Statistics, though Cathy O’Neil’s book that I referred to in the previous post is in the field of data science, which is at least a close neighbour of Statistics. But in any case, Statistics is built on a solid mathematical framework, and things that we’ve seen in previous posts like the Central Limit Theorem – the phenomenon by which the frequency distributions of many naturally occurring phenomena end up looking bell-shaped – are often based on the proof of a formal mathematical expression, which in some cases is as simple and elegant as that of the mutilated chessboard puzzle.
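As a tiny illustration of that last point, the bell shape described by the Central Limit Theorem is easy to conjure up empirically – here, in a throwaway sketch, by averaging batches of dice rolls and printing a crude text histogram:

```python
import random

random.seed(7)

# Averages of 50 dice rolls: each roll is uniform on 1-6, but the
# averages pile up in a bell shape around 3.5
means = [sum(random.randint(1, 6) for _ in range(50)) / 50
         for _ in range(2000)]

bins = [0] * 10  # bins of width 0.2 covering 2.5 to 4.5
for m in means:
    bins[max(0, min(int((m - 2.5) / 0.2), 9))] += 1
for i, count in enumerate(bins):
    lo = 2.5 + 0.2 * i
    print(f"{lo:.1f}-{lo + 0.2:.1f} {'#' * (count // 20)}")
```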


I’ll stop this thread here so as to avoid a puzzle overload, but I did want to mention that there is an extension of the Mutilated Chessboard Puzzle. Rather than removing 2 diagonally opposite corners, suppose I remove any 2 arbitrary squares, possibly adjacent, possibly not. In that case, can the remaining squares be covered by 31 dominoes?

If the 2 squares removed are of the same colour, the solution given above works equally well, so we know the problem can’t be solved in that case. But what if I remove one black and one red square? In that case, can the remaining squares be covered by the 31 dominoes:

  1. Always;
  2. Sometimes; or
  3. Never?

I already sent this problem to some of you who’d sent me a solution to the original problem. And I should give a special mention to Fabian.Thut@smartodds.co.uk, who provided a solution which is completely different from the standard textbook solution. Which illustrates another great thing about mathematics: there is often more than one solution to the same problem. If you’d like to try this extension to the original problem, or discuss it with me, please drop me a line.


Statistics of the decade

Now that the nights are drawing in, our minds naturally turn to regular end-of-year events and activities: Halloween; Bonfire night; Christmas; New Year’s eve; and the Royal Statistical Society ‘Statistics of the Year’ competition.

You may remember from a previous post that there are 2 categories for Statistic of the Year: ‘UK’ and ‘International’. You may also remember that last year’s winners were 27.8% and 90.5% respectively. (Don’t ask, just look back at the previous post).

So, it’s that time again, and you are free to nominate your own statistics for the 2019 edition. Full details on the criteria for nominations are given at the RSS link above, but suggested categories include:

  • A statistic that debunks a popular myth;
  • A statistic relevant to a key news story or social trend;
  • A statistic relevant to a phenomenon/craze this year.

But if that’s not exciting enough, this year also sees the end of the decade, so you are also invited to nominate for ‘Statistic of the Decade’, again in UK and International categories. As the RSS say:

The Royal Statistical Society is not only looking for statistics that captured the zeitgeist of 2019, but as the decade draws to a close, we are also seeking statistics that can help define the 2010s.

So, what do you think? What statistics captured 2019’s zeitgeist for you? And which statistics helped define your 2010s?

Please feel free to nominate to the RSS yourselves, but if you send me your nomination directly, I’ll post a collection of the replies I receive.


Thanks to Luigi.Colombo@Smartodds.co.uk for pointing out to me that the nominations for this year were now open.