No smoke without fire

No one now seriously doubts that cigarette smoking increases your risk of lung cancer and many other diseases, but when the evidence for a relationship between smoking and cancer was first presented in the 1950s, it was strongly challenged by the tobacco industry.

The history of the scientific fight to demonstrate the harmful effects of smoking is summarised in this article. One difficulty from a statistical point of view was that the primary evidence, based on retrospective studies, was shaky, because smokers tend to give unreliable reports of how much they smoke. Smokers with illnesses tend to overstate how much they smoke; those who are healthy tend to understate their cigarette consumption. Together, these two effects lead to misleading analyses of historically collected data.

An additional problem was the difficulty of establishing causal relationships from statistical associations. As with the examples in a previous post, just because there’s a correlation between smoking and cancer, it doesn’t necessarily mean that smoking is a risk factor for cancer. Indeed, one of the most prominent statisticians of the time – actually of any time – Sir Ronald Fisher, wrote various scientific articles explaining how the correlations observed between smoking and cancer rates could easily be explained by the presence of lurking variables that induce spurious correlations.

At which point it’s worth noting a couple more ‘coincidences’: Fisher was a heavy smoker himself and also an advisor to the Tobacco Manufacturers Standing Committee. In other words, he wasn’t exactly neutral on the matter. But he was a highly respected scientist, and his scepticism therefore carried considerable weight.

Eventually though, the sheer weight of evidence – including that from long-term prospective studies – was simply too overwhelming to be ignored, and governments fell into line with the scientific community in accepting that smoking is a high risk factor for various types of cancer.

An important milestone in that process was the work of another British statistician, Austin Bradford Hill. As well as being involved in several of the most prominent case studies linking cancer to smoking, he also developed a set of 9 (later extended to 10) criteria for establishing a causal relationship between processes. Though still only guidelines, they provided a framework that is still used today for determining whether associated processes include any causal relationships. And by these criteria, smoking was clearly shown to be a risk factor for cancer.

Now, fast-forward to today and there’s a similar debate about global warming:

  1. Is the planet genuinely heating up or is it just random variation in temperatures?
  2. If it’s heating up, is it a consequence of human activity, or just part of the natural evolution of the planet?
  3. And then what are the consequences for the various bio- and eco-systems living on it?

There are correlations all over the place – for example between CO2 emissions and average global temperatures, as described in an earlier post – but could these possibly just be spurious and not indicative of any causal relationships? Certainly there are industries with vested interests who would like to shroud the arguments in doubt. Well, this nice article applies each of Bradford Hill’s criteria to various aspects of climate science data and establishes that the increases in global temperatures are undoubtedly caused by human activity leading to CO2 release into the atmosphere, and that many observable changes to biological and geographical systems are a knock-on effect of this relationship.

In summary: in the case of the planet, the smoke that we see <global warming> is definitely a consequence of the fire we started <the increased amounts of CO2 released into the atmosphere>.

Walking on water

Here’s a question: how do you get dogs to walk on water?

Turns out there’s a really simple answer – just heat the atmosphere up by burning fossil fuels so much that the Greenland ice sheets melt.

The remarkable picture above was taken by a member of the Centre for Ocean and Ice at the Danish Meteorological Institute. Their pre-summer retrieval of research equipment is normally a sledge ride across a frozen winter wasteland; this year it was a paddle through the ocean that’s sitting on what’s left of the ice. And the husky dogs that pull the sledge are literally walking on water.

This graph shows the extent – please note: clever play on words – of the problem…

The blue curve shows the median percentage of Greenland ice melt over the last few decades. There’s natural year-to-year variation around that average, and as with any statistical analysis, it’s important to understand what types of variation are normal before deciding whether any particular observation is unusual or not. So, in this case, the dark grey area shows the range of values that were observed in 50% of years; the light grey area is what was observed in 90% of years. So, you’d only expect observations outside the light grey area once every ten years. Moreover, the further an observation falls outside of the grey area, the more anomalous it is.

Now, look at the trace for 2019 shown in red. The value for June isn’t just outside the normal range of variation, it’s way outside. And it’s not only an unusually extreme observation for June; it would be extreme even for the hottest part of the year in July. At its worst (so far), the melt for June 2019 reached over 40%, whereas the average in mid-July is around 18%, with a value of about 35% being exceeded only once in every 10 years.

So, note how much information can be extracted from a single well-designed graph. We can see:

  1. The variation across the calendar of the average ice melt;
  2. The typical variation around the average – again across the calendar – in terms of an interval expected to contain the observed value on 50% of occasions: the so-called inter-quartile range;
  3. A more extreme measure of variation, showing the levels that are exceeded only once every 10 years: the so-called inter-decile range;
  4. The trace of an individual year – up to current date – which appears anomalous.

In particular, by showing us the variation in ice melt both within years and across years we were able to conclude that this year’s June value is truly anomalous.
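
The same summaries are easy to compute yourself. Here’s a minimal sketch in Python, using made-up stand-in data rather than the real DMI series; the array melt and the year picked out as this_year are purely illustrative.

```python
import numpy as np

# Hypothetical input: melt[y, d] = percentage of the Greenland ice sheet
# surface melting on calendar day d of past year y (stand-in data here).
rng = np.random.default_rng(0)
melt = np.clip(rng.gamma(shape=2.0, scale=5.0, size=(40, 365)), 0, 100)

# Summaries across years, computed separately for each calendar day:
median = np.percentile(melt, 50, axis=0)             # the blue curve
q25, q75 = np.percentile(melt, [25, 75], axis=0)     # dark grey band: 50% of years
p05, p95 = np.percentile(melt, [5, 95], axis=0)      # light grey band: 90% of years

# Compare a single year's trace against the bands and flag the days that fall
# outside the 90% band -- the sort of values expected only about once a decade.
this_year = melt[-1]                                 # stand-in for the 2019 trace
anomalous = np.where((this_year < p05) | (this_year > p95))[0]
print(f"{anomalous.size} days fall outside the 90% band")
```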

Now let’s look at another graph. These are average spring temperatures, not for Greenland but for Alaska, where there are similar concerns about ice melt caused by increased atmospheric temperatures.


Again, there’s a lot of information:

  1. Each dot is an average spring temperature, one per year;
  2. The dots have been coloured: most are black, but the blue and red ones correspond to the ten coldest and hottest years respectively;
  3. The green curve shows the overall trend;
  4. The value for 2019 has been individually identified.

And the picture is clear. Not only has the overall trend been increasing since around the mid-seventies, but almost all of the hottest years have occurred in that period, while almost none of the coldest have. In other words, the average spring temperature in Alaska has been increasing over the last 50 years or so, and is hotter now than it has been for at least 90 years (and probably much longer).
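
For the curious, here’s a minimal sketch of the same kind of summary: pick out the ten coldest and ten hottest years and overlay a smoothed trend. The temperature series below is simulated stand-in data, and the moving average is only a rough substitute for whatever smoother the original figure uses.

```python
import numpy as np

# Hypothetical input: one average spring temperature per year (stand-in data).
years = np.arange(1925, 2020)
rng = np.random.default_rng(1)
temps = 0.02 * (years - 1925) + rng.normal(0.0, 1.2, size=years.size)

# The ten coldest and ten hottest years -- the blue and red dots in the plot.
order = np.argsort(temps)
coldest, hottest = years[order[:10]], years[order[-10:]]

# A simple smooth trend. The original figure probably uses a local smoother;
# an 11-year moving average makes the same point for a sketch.
window = 11
trend = np.convolve(temps, np.ones(window) / window, mode="valid")
trend_years = years[window // 2 : -(window // 2)]

print("10 coldest years:", sorted(coldest))
print("10 hottest years:", sorted(hottest))
```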

Now, you don’t need to be a genius in biophysics to understand the cause and effect relating temperature and ice. So the fact that extreme ice melts are occurring in the same period as extreme temperatures is hardly surprising. What’s maybe less well-known is that the impact of these changes has a knock-on effect way beyond the confines of the Arctic.

So, even if dogs walking on the water of the Arctic Ocean seems like a remote problem, it’s part of a chain of catastrophic effects that will soon affect our lives too. Statistics has an important role to play in determining and communicating the presence and cause of these effects, and the better we all are at understanding those statistics, the more likely we will be able to limit the damage that is already inevitable. Fortunately, our governments are well aware of this and are taking immediate action to remedy the problem.

Oh, wait…

… scrap that, better take action ourselves.

A bad weekend

Had a bad weekend? Maybe your team faded against relegated-months-ago Huddersfield Town, consigning your flickering hopes of a Champions League qualification spot to the wastebin. Or maybe you support Arsenal.

Anyway, Smartodds loves Statistics is here to help you put things in perspective: ‘We are in trouble’. But not trouble in the sense of having to play Europa League qualifiers on a Thursday night. Trouble in the sense that…

Human society is under urgent threat from loss of Earth’s natural life

Yes, deep shit trouble.

This is according to a Global Assessment report by the United Nations, based on work by hundreds of scientists who compiled as many as 15,000 academic studies. Here are some of the headline statistics:

  • Nature is being destroyed at a rate tens to hundreds of times greater than the average over the last 10 million years;
  • The biomass of wild mammals has fallen by 82%;
  • Natural ecosystems have lost around half of their area;
  • A million species are at risk of extinction;
  • Pollinator loss has put up to £440 billion of crop output at risk.

The report goes on to say:

The knock-on impacts on humankind, including freshwater shortages and climate instability, are already “ominous” and will worsen without drastic remedial action.

But if only we could work out what the cause of all this is. Oh, hang on, the report says it’s…

… all largely as a result of human actions.

For example, actions like these:

  • Land degradation has reduced the productivity of 23% of global land;
  • Wetlands have been drained by 83% since 1700;
  • In the years 2000-2013 the area of intact forest fell by 7% – an area the size of France and the UK combined;
  • More than 80% of wastewater, as well as 300-400m tons of industrial waste, is pumped back into natural water reserves without treatment;
  • Plastic waste has increased tenfold since 1980, affecting 86% of marine turtles, 44% of seabirds and 43% of marine animals;
  • Fertiliser run-off has created 400 dead zones – in total, an area the size of the UK.

You probably don’t need to be a bioscientist, and certainly not a statistician, to realise none of this is particularly good news. However, the report goes on to list various strategies that agencies, governments and countries need to adopt in order to mitigate the damage that has already been done and minimise the further damage that will unavoidably be done under current regimes. But none of it’s easy, and the evidence so far doesn’t say much for the collective human will to accept the responsibilities involved.

Josef Settele of the Helmholtz Centre for Environmental Research in Germany said:

People shouldn’t panic, but they should begin drastic change. Business as usual with small adjustments won’t be enough.

So, yes, cry all you like about Liverpool’s crumbling hopes for a miracle against Barcelona tonight, but keep it in perspective and maybe even contribute to the wider task of saving humanity from itself.

<End of rant. Enjoy tonight’s game.>


Correction: *Barcelona’s* crumbling hopes

Taking things to extremes

One of the themes I’ve tried to develop in this blog is the connectedness of Statistics. Many things that seem unrelated turn out to be strongly related at some fundamental level.

Last week I posted the solution to a probability puzzle that I’d posted previously. Several respondents to the puzzle, including Olga.Turetskaya@smartodds.co.uk, included the logic they’d used to get to their answer when writing to me. Like the others, Olga explained that she’d basically halved the number of coins in each round, till getting down to (roughly) a single coin. As I explained in last week’s post, this strategy leads to an answer that is very close to the true answer.

Anyway, Olga followed up her reply with a question: if we repeated the coin tossing puzzle many, many times, and plotted a histogram of the results – a graph which shows the frequencies of the numbers of rounds needed in each repetition – would the result be the typical ‘bell-shaped’ graph that we often find in Statistics, with the true average sitting somewhere in the middle?

Now, just to be precise, the bell-shaped curve that Olga was referring to is the so-called Normal distribution curve, that is indeed often found to be appropriate in statistical analyses, and which I discussed in another previous post. To answer Olga, I did a quick simulation of the problem, starting with both 10 and 100 coins. These are the histograms of the results.
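
For anyone who wants to try it themselves, here’s a minimal sketch of the kind of simulation involved: fair coins, all remaining coins tossed each round, the Heads removed, and the rounds counted until no coins are left. The function name and the number of repetitions are just illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(42)

def rounds_until_empty(n_coins: int) -> int:
    """Toss all remaining coins each round, remove the Heads, and count
    how many rounds it takes before no coins are left."""
    remaining, rounds = n_coins, 0
    while remaining > 0:
        heads = rng.binomial(remaining, 0.5)   # Heads among the remaining fair coins
        remaining -= heads
        rounds += 1
    return rounds

# Repeat the experiment many times, starting with 10 and then 100 coins;
# np.bincount(results) gives the frequencies behind each histogram.
for n in (10, 100):
    results = np.array([rounds_until_empty(n) for _ in range(100_000)])
    print(f"{n} coins: average number of rounds = {results.mean():.3f}")
```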

So, as you’d expect, the average values (4.726 and 7.983 respectively) do indeed sit nicely inside the respective distributions. But, the distributions don’t look at all bell-shaped – they are heavily skewed to the right. And this means that the averages are closer to the lower end than the top end. But what is it about this example that leads to the distributions not having the usual bell-shape?

Well, the normal distribution often arises when you take averages of something. For example, if we took samples of people and measured their average height, a histogram of the results is likely to have the bell-shaped form. But in my solution to the coin tossing problem, I explained that one way to think about this puzzle is that the number of rounds needed till all coins are removed is the maximum of the number of rounds required by each of the individual coins. For example, if we started with 3 coins, and the number of rounds for each coin to show heads for the first time was 1, 4 and 3 respectively, then I’d have had to play the game for 4 rounds before all of the coins had shown a Head. And it turns out that the shape of distributions you get by taking maxima is different from what you get by taking averages. In particular, it’s not bell-shaped.
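
In fact, the maximum-of-geometrics way of thinking gives the exact answer without any simulation at all. Here’s a small sketch of that calculation, assuming fair coins as before.

```python
import numpy as np

# Each coin needs a Geometric(1/2) number of rounds for its first Head, so
# P(one coin done within r rounds) = 1 - 0.5**r.  With n independent coins the
# game lasts as long as the slowest coin, so P(game over within r rounds)
# = (1 - 0.5**r)**n, and E[number of rounds] = sum over r >= 0 of P(game > r).
def expected_rounds(n_coins: int, max_r: int = 200) -> float:
    r = np.arange(max_r)
    return float(np.sum(1.0 - (1.0 - 0.5 ** r) ** n_coins))

print(expected_rounds(10))    # should sit close to the simulated average for 10 coins
print(expected_rounds(100))   # likewise for 100 coins
```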

But is this ever useful in practice? Well, the Normal bell-shaped curve is somehow the centrepiece of Statistics, because averaging, in one way or another, is fundamental in many physical processes and also in many statistical operations. And under quite general conditions, averaging will lead to the Normal bell-shaped curve.

Consider this though. Suppose you have to design a coastal wall to offer protection against the sea. Do you care what the average sea level will be? Or you have to design a building to withstand the effects of wind. Again, do you care about average winds? Almost certainly not. What you really care about in each case will be extremely large values of the process: high sea levels in one case; strong winds in the other. So you’ll be looking through your data to find the maximum values – perhaps the maximum per year – and designing your structures to withstand what you think the most likely extreme values of that process will be.

This takes us into an area of statistics called extreme value theory. And just as the Normal distribution is used as a template because it’s mathematically proven to approximate the behaviour of averages, so there are equivalent distributions that apply as templates for maxima. And what we’re seeing in the above graphs – precisely because the data are derived as maxima – are examples of this type. So, we don’t see the Normal bell-shaped curve, but we do see shapes that resemble the templates that are used for modelling things like extreme sea levels or wind speeds.
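
As a minimal sketch of how such a template gets used: fit a Gumbel distribution (the simplest of the extreme-value templates for maxima) to a set of annual maxima and read off, say, a 10-year return level. The data below are simulated, and a real analysis would usually consider the more general GEV family, but the idea is the same.

```python
import numpy as np
from scipy import stats

# Hypothetical data: 50 years of annual maximum sea levels (simulated here).
annual_max = stats.gumbel_r.rvs(loc=4.0, scale=0.3, size=50, random_state=7)

# Fit the Gumbel template for maxima and read off a "10-year return level":
# the level we expect to be exceeded on average once every ten years.
loc, scale = stats.gumbel_r.fit(annual_max)
ten_year_level = stats.gumbel_r.ppf(1 - 1 / 10, loc=loc, scale=scale)
print(f"estimated 10-year return level: {ten_year_level:.2f}")

# Minima -- fastest race times, say -- can be handled with the same machinery
# by negating the data, since the minimum of X is minus the maximum of -X.
```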

So, our discussion of techniques for solving a simple probability puzzle with coins, leads us into the field of extreme value statistics and its application to problems of environmental engineering.

But has this got anything to do with sports modelling? Well, the argument about taking the maximum of some process applies equally well if you take the minimum. And, for example, the winner of an athletics race will be the competitor with the fastest – or minimum – race time. Therefore the models that derive from extreme value theory are suitable templates for modelling athletic race times.

So, we moved from coin tossing to statistics for extreme weather conditions to the modelling of race times in athletics, all in a blog post of less than 1000 words.

Everything’s connected and Statistics is a very small world really.

Here’s to all the money at the end of the world

I made the point in last week’s Valentine’s Day post that although the emphasis of this blog is on the methodology of using Statistics to understand the world through the analysis of data, it’s often the case that statistics in themselves tell their own story. In this way we learnt that a good proportion of the population of the UK buy their pets presents for Valentine’s Day.

As if that wasn’t bad enough, I now have to report to you the statistical evidence for the fact that nature itself is dying. Or as the Guardian puts it:

Plummeting insect numbers ‘threaten collapse of nature’

The statistical and scientific evidence now points to the fact that, at current rates of decline, all insects could be extinct by the end of the century. Admittedly, it’s probably not great science or statistics to extrapolate the current annual loss of 2.5% in that way, but nevertheless it gives you a picture of the way things are going. This projected elimination of insects would be, by some definitions, the sixth mass extinction event on earth. (Earlier versions wiped out dinosaurs and so on).

And before you go all Donald Trump, and say ‘bring it on: mosquito-free holidays’, you need to remember that life on earth is a complex ecological system in which the big things (including humans) are indirectly dependent on the little things (including insects) via complex bio-mechanisms for mutual survival. So if all the insects go, all the humans go too. And this is by the end of the century, remember.

Here’s First Dog on the Moon’s take on it:

So, yeah, let’s do our best to make money for our clients. But let’s also not forget that money only has value if we have a world to spend it in, and use Statistics and all other means at our disposal to fight for the survival of our planet and all the species that live on it.

It’s not based on facts

We think that this is the most extreme version and it’s not based on facts. It’s not data-driven. We’d like to see something that is more data-driven.

Wow! Who is this staunch defender of statistical methodology? This guardian of the scientific method? This warrior for the value of empirical evidence in identifying and confirming the truth?

Ah, but wait a minute, here’s the rest of the quote…

It’s based on modelling, which is extremely hard to do when you’re talking about the climate. Again, our focus is on making sure we have the safest, cleanest air and water.

Any ideas now?

Since it requires an expert in doublespeak to connect those two quotes together, you might be thinking Donald Trump, but we’ll get to him in a minute. No, this was White House spokesperson Sarah Sanders in response to the US government’s own assessment of climate change impact. Here’s just one of the headlines in that report (under the Infrastructure heading):

Our Nation’s aging and deteriorating infrastructure is further stressed by increases in heavy precipitation events, coastal flooding, heat, wildfires, and other extreme events, as well as changes to average precipitation and temperature. Without adaptation, climate change will continue to degrade infrastructure performance over the rest of the century, with the potential for cascading impacts that threaten our economy, national security, essential services, and health and well-being.

I’m sure I don’t need to convince you of the overwhelming statistical and scientific evidence of climate change. But for argument’s sake, let me place here again a graph that I included in a previous post.

This is about as data-driven as you can get. Data have been carefully sourced and appropriately combined from locations all across the globe. Confidence intervals have been added – these are the vertical black bars – which account for the fact that we’re estimating a global average on the basis of a limited number of data. But you’ll notice that the confidence bars are smaller for more recent years, since more data of greater reliability are available. So it’s not just data, it’s also careful analysis of data that takes into account that we are estimating something. And it plainly shows that, even after allowance for errors due to data limitations, and also allowance for year-to-year random variation, there has been an upward trend for at least the last 100 years, which is even more pronounced in the last 40 years.
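
To see why those bars shrink as more data become available, here’s a minimal sketch using made-up station anomalies and a simple normal-approximation interval. The real global series is built with a much more careful spatial analysis, but the principle – the interval narrows roughly like one over the square root of the amount of data – is the same.

```python
import numpy as np

# Hypothetical input: temperature anomalies from a sample of stations in one
# year (stand-in data).  Early years have fewer stations than recent ones.
rng = np.random.default_rng(3)
for n_stations in (50, 500):
    anomalies = rng.normal(0.8, 0.6, size=n_stations)
    estimate = anomalies.mean()
    half_width = 1.96 * anomalies.std(ddof=1) / np.sqrt(n_stations)  # ~95% interval
    print(f"n = {n_stations}: {estimate:.2f} +/- {half_width:.2f}")
# More data -> a narrower interval, which is why the black bars shrink
# towards the right-hand end of the graph.
```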

Now, by the way, here’s a summary of the mean annual total of CO2 that’s been released into the atmosphere over roughly the same time period.

Notice any similarities between these two graphs?

Now, as you might remember from my post on Simpson’s Paradox, correlations are not necessarily evidence of causation. It could be, just on the strength of these two graphs, that both CO2 emissions and global mean temperature are being affected by some other process, which is causing them both to change in a similar way. But, here’s the thing: there is a proven scientific mechanism by which an increase in CO2 can cause an increase in atmospheric temperature. It’s basically the greenhouse effect: CO2 molecules cause heat to be retained in the atmosphere, rather than reflected back into space, as would be the case if those molecules weren’t there. So:

  1. The graphs show a clear correlation between CO2 levels and mean temperature levels;
  2. CO2 levels in the atmosphere are rising and bound to rise further under current energy policies worldwide;
  3. There is a scientific mechanism by which increased CO2 emissions lead to an increase in mean global temperature.

Put those three things together and you have an incontrovertible case that climate change is happening, that it’s at least partly driven by human activity, and that the key to limiting the damaging effects of such change is to introduce energy policies that drastically reduce CO2 emissions.
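
Just to make point 1 concrete, here’s a minimal sketch of the correlation calculation, using stand-in series rather than the real measurements; the closing comment is the important part.

```python
import numpy as np

# Stand-in annual series over the same period (the real data aren't reproduced here).
rng = np.random.default_rng(5)
co2 = np.linspace(300, 410, 60) + rng.normal(0, 2, size=60)      # ppm
temp = 0.01 * (co2 - 300) + rng.normal(0, 0.1, size=60)          # anomaly in degrees C

# Point 1 of the argument: the two series are strongly correlated...
r = np.corrcoef(co2, temp)[0, 1]
print(f"correlation: {r:.2f}")
# ...but the correlation on its own proves nothing.  It's the known greenhouse
# mechanism (points 2 and 3) that turns the association into a causal case.
```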

All pretty straightforward, right?

Well, this is the response to his own government’s report by the President of the United States:

In summary:

I don’t believe it

And the evidence for that disbelief:

One of the problems that a lot of people like myself — we have very high levels of intelligence, but we’re not necessarily such believers.

If only the President of the United States was just a little less intelligent. And if only his White House spokesperson wasn’t such an out-and-out liar.