I made the point in last week’s Valentine’s Day post that although the emphasis of this blog is on the methodology of using Statistics to understand the world through the analysis of data, it’s often the case that statistics in themselves tell their own story. In this way we learnt that a good proportion of the UK population buy their pets presents for Valentine’s Day.
‘Plummeting insect numbers threaten collapse of nature’
The statistical and scientific evidence now points to the fact that, at current rates of decline, all insects could be extinct by the end of the century. Admittedly, it’s probably not great science or statistics to extrapolate the current annual loss of 2.5% in that way, but nevertheless it gives you a picture of the way things are going. This projected elimination of insects would be, by some definitions, the sixth mass extinction event on earth. (Earlier versions wiped out dinosaurs and so on).
And before you go all Donald Trump, and say ‘bring it on: mosquito-free holidays’, you need to remember that life on earth is a complex ecological system in which the big things (including humans) are indirectly dependent on the little things (including insects) via complex bio-mechanisms for mutual survival. So if all the insects go, all the humans go too. And this is by the end of the century, remember.
So, yeah, let’s do our best to make money for our clients. But let’s also not forget that money only has value if we have a world to spend it in, and use Statistics and all other means at our disposal to fight for the survival of our planet and all the species that live on it.
You probably remember the NFL quarterback Colin Kaepernick, who started the protest against racism in the US by kneeling during the national anthem. In an earlier post I discussed how his statistics suggested he was being shunned by NFL teams due to his political stance. And in a joint triumph for decency and marketing, he subsequently became the face of Nike.
Since I now follow Kaepernick on Twitter, I recently received a tweet sent by Eric Reid of the Carolina Panthers. Reid was the first player to kneel alongside Kaepernick when playing for the San Francisco 49ers. But when his contract expired in March 2018, Reid also struggled to find a new club, despite his form suggesting he’d be an easy selection. Eventually, he joined the Carolina Panthers after the start of the 2018-19 season, and opened a dispute with the NFL, claiming that, like Kaepernick, he had been shunned by most teams as a consequence of his political actions.
The ‘7’ refers to the fact that Reid had been tested seven times since joining the Panthers under the standard NFL drug-testing programme, and the “random” is intended ironically. That’s to say, Reid is implying that he’s being tested more often than is plausible if tests are allocated randomly: in other words, that he’s being victimised for the stand he’s taking against the NFL.
I’ve been here 11 weeks, I’ve been drug-tested seven times. That has to be statistically impossible. I’m not a mathematician, but there’s no way that’s random.
Well, let’s get one thing out of the way first of all: the only things that are statistically impossible are the things that are actually impossible. And since it’s possible that a randomised allocation of tests could lead to seven or more tests in 11 weeks, it’s certainly not impossible, statistically or otherwise.
However… Statistics is almost never about the possible versus the impossible; yes versus no; black versus white (if you’ll excuse the double entendre). Statistics is really about degrees of belief. Does the evidence suggest one version is more likely than another? And to what extent is that conclusion reliable?
Another small technicality… it seems that the first of Reid’s drug tests was actually a mandatory test that all players have to take when signing on for a new team. So actually, the question is whether the subsequent 6 tests in 11 weeks are unusually many if the tests are genuinely allocated randomly within the team roster.
On the face of it, this is a simple and standard statistical calculation. There are 72 players on a team roster and 10 players each week are selected for testing. So, under the assumption of random selection, the probability that any one player is tested any week is 10/72. Standard results then imply that the probability of a player being selected on exactly 6 out of 11 occasions – using the binomial distribution for those of you familiar with this stuff – is around 0.16%, while the probability of being tested 6 times or more is 0.17%. On this basis, there’s only a 17 in 10,000 chance that Reid would have been tested at least as often as he has been under a genuinely random procedure, and this would normally be considered small enough to provide evidence that the procedure is not random, and that Reid has been tested unduly often.
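For anyone who wants to check the arithmetic, here’s a short sketch of that binomial calculation in Python, using only the numbers given above (a 72-man roster, 10 players drawn per week, 6 “random” tests in 11 weeks):

```python
from math import comb

# Probability that a given player is selected in any one week:
# 10 players drawn from a 72-man roster.
p = 10 / 72
n = 11  # weeks since Reid signed

def binom_pmf(k: int, n: int, p: float) -> float:
    """Binomial probability of exactly k 'successes' in n trials."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

p_exactly_6 = binom_pmf(6, n, p)
p_at_least_6 = sum(binom_pmf(k, n, p) for k in range(6, n + 1))

print(f"P(exactly 6 of 11)  = {p_exactly_6:.4%}")   # about 0.16%
print(f"P(at least 6 of 11) = {p_at_least_6:.4%}")  # about 0.18%, i.e. the '17 in 10,000' order of magnitude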
However, we need to be a bit careful. Some time ago, in an offsite talk (mentioned here) I discussed the fact that 4 members of the quant team shared the same birthday, and showed that this was apparently an infinitesimally unlikely occurrence. But by considering the fact that it would have seemed surprising for any 4 individuals in the company to share the same birthday, and that there are many such potential combinations of 4 people, the event turned out not to be so very surprising after all.
And there’s a similar issue here… Reid is just one of 72 players on the roster. It happened to be Reid who was tested unusually often, but we’d have been equally surprised if any individual player had been tested at least 6 times in 11 weeks. Is it surprising, though, that at least one of the 72 players gets tested this often? This is tricky to answer exactly, but easy to estimate by simulation. Working this way I found the probability to be around 6.25%. Still unlikely, but not beyond the bounds of plausibility. A rule-of-thumb that’s often applied – and often inappropriately applied – is that if something has less than a 5% probability of occurring by chance, it’s safe to assume that something systematic, rather than random, led to the results; if the probability is bigger than 5%, we conclude that the evidence isn’t strong enough to rule out the result being just a random occurrence. So in this case, we couldn’t rule out the possibility that the test allocations are random.
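The simulation itself only takes a few lines. The exact details of the NFL’s scheme aren’t public, so this sketch makes assumptions: 10 distinct players are drawn uniformly at random from the 72-man roster each week, and – since the signing-week test was mandatory rather than random – we ask whether any player racks up 6 or more tests in the 10 weekly draws that follow. Under those assumptions the simulated probability comes out in the same ballpark as the figure quoted above:

```python
import random

random.seed(42)  # fixed seed for reproducibility

ROSTER = 72       # players on the roster
PER_WEEK = 10     # players drawn for testing each week
WEEKS = 10        # weekly random draws after the mandatory signing-week test
THRESHOLD = 6     # "suspiciously many" tests for one player
N_SIMS = 20_000   # number of simulated seasons

hits = 0
for _ in range(N_SIMS):
    counts = [0] * ROSTER
    for _ in range(WEEKS):
        # draw 10 distinct players uniformly at random this week
        for player in random.sample(range(ROSTER), PER_WEEK):
            counts[player] += 1
    # did ANY player on the roster reach the threshold?
    if max(counts) >= THRESHOLD:
        hits += 1

print(f"P(some player tested >= {THRESHOLD} times) ~ {hits / N_SIMS:.2%}")
```

Note how much the answer changes between the two calculations: the same data look damning for one pre-specified player, but merely unusual when any of 72 players could have triggered the alarm.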
So we have two different answers depending on how the data is interpreted. If we treat the data as specific to Eric Reid, then yes, there is strong evidence to suggest he’s been tested more often than is reasonable if testing is random. But if we consider him as just an arbitrary player in the roster, the evidence isn’t overwhelming that anyone in the roster as a whole has been overly tested.
Which should we go with? Well, each provides a different and valid interpretation of the available data. I would argue – though others might see it differently – that it’s entirely reasonable in this particular case to consider the data just with regard to Eric Reid, since there is a prima facie hypothesis specifically about him in respect of his grievance case against the NFL. In other words, we have a specific reason to be focusing on Reid, one that isn’t driven by a dredge through the data.
On this basis, I’d argue that it is perfectly reasonable to question the extent to which the allocation of drugs tests in the NFL is genuinely “random”, and to conclude that there is reasonable evidence that Eric Reid is being unfairly targeted for testing, presumably for political reasons. The number of tests he has faced isn’t ‘statistically impossible’, but sufficiently improbable to give strong weight to this hypothesis.
You might remember in a couple of earlier posts (here and here) I discussed the Royal Statistical Society’s ‘Statistic of the Year’ competition. I don’t have updates on the results of that competition for 2018 yet, but in the meantime I thought I’d do my own version, but with a twist: the worst use of Statistics in 2018.
To be honest, I only just had the idea to do this, so I haven’t been building up a catalogue of options throughout the year. Rather, I just came across an automatic winner in my twitter feed this week.
So, before announcing the winner, let’s take a look at the following graph:
This graph is produced by the Office for National Statistics, which is the UK government’s own statistical agency, and shows the change in average weekly wages in the UK, after allowance for inflation effects, for the period 2008-2018.
There are several salient points that one might draw from this graph:
Following the financial crash in 2008, wages declined steadily over a 6-year period to 2014, when they bottomed out at around 10% below pre-crash levels.
The election of a Conservative/Lib Dem coalition government in 2010 didn’t have any immediate impact on the decline of wage levels. Arguably the policy of intense austerity may simply have exacerbated the problem.
Things started to pick up during 2014, most likely due to the effects of Quantitative Easing and other efforts to stimulate the economy by the Bank of England in the period after the crash.
Something sudden happened in 2016 which seems to have choked off the recovery in wage levels. (If only there was a simple explanation for what that might be.)
Wages are currently at the same level as they were 7 years ago in 2011, and significantly lower than they were immediately following the financial crash in 2008.
So that’s my take on things. Possibly there are different interpretations that are equally valid and plausible. I struggle, however, to accept the following interpretation, to which I am awarding the 2018 worst use of Statistics award:
ONS data showing real wages rising at fastest rate in 10 years… is good news for working Britain
Now, believe me, I’ve looked very hard at the graph to try to find a way in which this statement provides a reasonable interpretation of it, but I simply can’t. You might argue that wages grew at the fastest rate in a decade during 2015, but only because wages had performed so miserably in the preceding years. And any reasonable interpretation of the graph suggests wages have flatlined since 2016, so it’s simply misleading to suggest that they are currently rising at the fastest rate in 10 years.
So, my 2018 award for the worst use of Statistics goes to…
… Dominic Raab, who until his recent resignation was the Secretary of State responsible for the United Kingdom’s withdrawal from the European Union (i.e. Brexit) and is a leading contender to replace Theresa May as the next leader of the Conservative Party.
Well done Dominic. Whether due to mendacity or ignorance, you are a truly worthy winner.
We think that this is the most extreme version and it’s not based on facts. It’s not data-driven. We’d like to see something that is more data-driven.
Wow! Who is this staunch defender of statistical methodology? This guardian of the scientific method? This warrior for the value of empirical information in identifying and confirming the truth?
Ah, but wait a minute, here’s the rest of the quote…
It’s based on modelling, which is extremely hard to do when you’re talking about the climate. Again, our focus is on making sure we have the safest, cleanest air and water.
Any ideas now?
Since it requires an expert in doublespeak to connect those two quotes together, you might be thinking Donald Trump, but we’ll get to him in a minute. No, this was White House spokesperson Sarah Sanders in response to the US government’s own assessment of climate change impact. Here’s just one of the headlines in that report (under the Infrastructure heading):
Our Nation’s aging and deteriorating infrastructure is further stressed by increases in heavy precipitation events, coastal flooding, heat, wildfires, and other extreme events, as well as changes to average precipitation and temperature. Without adaptation, climate change will continue to degrade infrastructure performance over the rest of the century, with the potential for cascading impacts that threaten our economy, national security, essential services, and health and well-being.
I’m sure I don’t need to convince you of the overwhelming statistical and scientific evidence of climate change. But for argument’s sake, let me place here again a graph that I included in a previous post
This is about as data-driven as you can get. Data have been carefully sourced and appropriately combined from locations all across the globe. Confidence intervals have been added – these are the vertical black bars – which account for the fact that we’re estimating a global average on the basis of a limited amount of data. But you’ll notice that the confidence bars are smaller for more recent years, since more, and more reliable, data are available. So it’s not just data, it’s also careful analysis of data that takes into account that we are estimating something. And it plainly shows that, even after allowance for errors due to data limitations, and also allowance for year-to-year random variation, there has been an upward trend for at least the last 100 years, which is even more pronounced in the last 40 years.
Now, by the way, here’s a summary of the mean annual total of CO2 that’s been released into the atmosphere over roughly the same time period.
Notice any similarities between these two graphs?
Now, as you might remember from my post on Simpson’s Paradox, correlations are not necessarily evidence of causation. It could be, just on the strength of these two graphs, that both CO2 emissions and global mean temperature are being affected by some other process, which is causing them both to change in a similar way. But, here’s the thing: there is a proven scientific mechanism by which an increase in CO2 can cause an increase in atmospheric temperature. It’s basically the greenhouse effect: CO2 molecules cause heat to be retained in the atmosphere, rather than radiated back into space, as would be the case if those molecules weren’t there. So:
The graphs show a clear correlation between CO2 levels and mean temperature levels;
CO2 levels in the atmosphere are rising and are bound to rise further under current energy policies worldwide;
There is a scientific mechanism by which increased CO2 emissions lead to an increase in mean global temperature.
Put those three things together and you have an incontrovertible case that climate change is happening, that it’s at least partly driven by human activity, and that the key to limiting its damaging effects is to introduce energy policies that drastically reduce CO2 emissions.
All pretty straightforward, right?
Well, this is the response to his own government’s report by the President of the United States:
In an earlier post you may have discovered that 69 was the Royal Statistical Society’s International Statistic of the Year for 2017. Unless you Googled it, I’d be prepared to bet at very long odds that you didn’t predict that the reason for that choice was Kim Kardashian.
In January of 2017, the ‘American reality television personality, entrepreneur and socialite’ (job description stolen from wikipedia) sent the following tweet.
As you can see, the ‘69’ refers to the average number of people in the US killed per year by lawnmowers. The point of the tweet, which contained the single word ‘Statistics’ together with the table, was to contrast this and other mortality rates with the much lower death rate due to Islamic terrorism. The context for this comparison is that the tweet was sent at around the same time Donald Trump had invoked the threat of Islamic terrorism as a motivation for restricting travel to the US from several Muslim countries. And the point is that, statistically speaking, lawnmowers are a considerably greater threat to the life of a US citizen than are Islamic terrorists. In other words, yes, terrorism is awful and horrific, but the magnitude of the danger needs to be put in perspective, and not used as a smokescreen for xenophobic border controls.
To be fair, there’s been some discussion in the statistical literature about whether the table in Kim Kardashian’s tweet actually gives a proper assessment of the relative risk of being killed in a terrorist attack compared to other causes. The point seems to be that one can control one’s risk of death by lawnmower simply by not cutting the grass, but there’s little one can do to avoid being the victim of a terrorist attack. Personally, I think that’s taking the depth of Kim’s argument a little too far. The point of the tweet – that the risk of death due to terrorist activity needs to be judged in the context of the risk of death from other causes – is surely valid anyway. But if you’re interested, you can read more about the relative risks of lawnmowers and terrorists here.
As a footnote, if I’d been on the judging panel for 2017 International Statistic of the Year, I think I’d have been much more likely to award it to 737, the average number of US citizens who are killed annually through falling out of bed. Or maybe 11,737, the average number of Americans shot dead by other Americans. No, wait, the Smartodds loves Statistics International Statistic of the Year for 2017 goes to… ’21’, the average number of Americans killed annually by armed toddlers.
Ok, I’ll admit, this is a slightly contrived post just to have the excuse to include this image in the blog. However, there’s been a fair bit of speculation about whether Colin Kaepernick will ever play again in the NFL after starting the trend of taking a knee during the national anthem in protest against racism in the US. The suspicion is that he is being overlooked in favour of weaker quarterbacks by club owners as retaliation for his protest, but does the evidence bear this out? Well, various informal statistical analyses, comparing Kaepernick’s most recent performances against those of other quarterbacks, definitely support this point of view.
Meanwhile, here’s another statistic: Nike sales spiked by around 31% in the days after the Kaepernick ad campaign was released. It would be nice to think that Nike would have run the Kaepernick campaign even without doing their own market analysis, based on surveys and focus groups, to collate evidence that it would be viable financially as well as morally. Guess we’ll never know though.