Groundhog day

Fed up with the cold, snow and rain? Don’t worry, spring is forecast to be here earlier than usual. Two caveats though:

  1. ‘Here’ is some unspecified region of the United States, and might not extend as far as the UK;
  2. This prediction was made by a rodent.

Yes, Saturday (February 2nd) was Groundhog Day in the US. And since Punxsutawney Phil failed to see his shadow, spring is forecast to arrive early.

You probably know about Groundhog Day from the Bill Murray movie

… but it’s actually a real event. It’s celebrated in many locations across the US and Canada, though the event in Punxsutawney, Pennsylvania, is the one that has become the most famous, and on which the movie was based. As Wikipedia says:

The Groundhog Day ceremony held at Punxsutawney in western Pennsylvania, centering around a semi-mythical groundhog named Punxsutawney Phil, has become the most attended.

Semi-mythical, no less. If you’d like to know more about Punxsutawney Phil, there’s plenty of information at The Punxsutawney Groundhog Club website, including a dataset of his predictions. These include the entry from 1937 when Phil had an ‘unfortunate meeting with a skunk’. (And whoever said data analysis was boring?)

Anyway, the theory is that if, at 7.30 a.m. on the second of February, Phil the groundhog sees his shadow, there will be six more weeks of winter; if not, spring will arrive early. Now, it seems a little unlikely that a groundhog has powers of meteorological prediction, but since the legend has persisted, and there is other evidence of animal behaviour serving as a weather predictor, it seems reasonable to assess the evidence.

Disappointingly, Phil’s success rate is rather low. This article gives it as 39%. I’m not sure if it’s obvious or not, but the article also states (correctly) that if you were to guess randomly – by tossing a coin, say – then your expected chance of guessing correctly is 50%. The reason I say it might not be obvious is that the chance of spring arriving early is unlikely to be 50%. It might be 40%, say. Yet randomly guessing with a coin will still have a 50% expected success rate. As such, Phil is doing worse than someone who guesses at random.

However, if Phil’s 39% success rate is a genuine measure of his predictive powers – rather than a reflection of the fact that his guesses are also random, and he’s just been a bit unlucky over the years – then he’s still a very useful companion for predictive purposes. You just need to take his predictions and predict the opposite. That way you’ll have a 61% success rate – rather better than random guessing. Unfortunately, this means you will have to put up with another six weeks of winter.
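In case it helps to see why random guessing scores 50% regardless of how often spring actually comes early – and why inverting a 39% predictor yields 61% – here’s a quick simulation sketch in Python. The 40% early-spring probability is purely an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(42)
n_years = 100_000

# Illustrative assumption: early spring arrives in 40% of years.
early_spring = rng.random(n_years) < 0.40

# A coin-toss forecaster predicts 'early spring' half the time,
# independently of the truth, and so is right half the time:
# 0.5 * 0.4 + 0.5 * 0.6 = 0.5, whatever the true early-spring rate.
coin_guess = rng.random(n_years) < 0.5
print("Coin-toss accuracy:", np.mean(coin_guess == early_spring))  # ~0.50

# A forecaster who is right with probability 0.39 can be inverted:
# predicting the opposite is then right with probability 0.61.
phil_correct = rng.random(n_years) < 0.39
print("Phil's accuracy:    ", np.mean(phil_correct))      # ~0.39
print("Anti-Phil accuracy: ", 1 - np.mean(phil_correct))  # ~0.61
```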

Meanwhile, if you simply want more Groundhog Day statistics, you can fill your boots here.

And finally, if you think I’m wasting my time on this stuff, check out the Washington Post, which has carried out a geospatial analysis of the whole of the United States, colour-mapping the regions in which Phil’s predictions have been respectively more and less successful over the years.


Who wants to win £194,375?

In an earlier post I included a link to Oscar predictions made by film critic Mark Kermode over the years, which in a couple of years included a 100% success rate across all of the main categories. I also recounted his story of how he failed to make a fortune in 1992 by not knowing about accumulator bets.

Well, it’s almost Oscar season, and fabien.mauroy@smartodds.co.uk pointed me to this article, which includes Mark’s personal shortlist for the coming awards. Now, these aren’t the same as predictions: in some years, Mark has listed his own personal favourites as well as what he believes to be the likely winners, and there’s often very little in common. On the other hand, these lists have been produced prior to the nominations, so you’re likely to get better prices on bets now rather than later. You’ll have to be quick though, as the nominations are announced in a couple of hours.

Anyway, maybe you’d like to sift through Mark’s recommendations, look for hints as to who he thinks the winner is likely to be, and make a bet accordingly. But if you do make a bet based on these lists, here are a few things to take into account:

  1. Please remember the difference between an accumulator bet and single bets;
  2. Please gamble responsibly;
  3. Please don’t blame me if you lose.

If Mark subsequently publishes actual predictions for the Oscars, I’ll include a link to those as well.


Update: the nominations have now been announced and are listed here. Comparing the nominations with Mark Kermode’s own list, the number of nominees appearing in Mark’s personal list for each category is as follows:

  • Best Picture: 1
  • Best Director: 2
  • Best Actor: 1
  • Best Actress: 2
  • Best Supporting Actor: 3
  • Best Supporting Actress: 1
  • Best Score: 2

In each case except Best Picture, there are 5 nominations and Mark’s list also comprised 5 contenders. For Best Picture, there are 8 nominations, though Mark only provided 5 suggestions.

So, not much overlap. But again, these weren’t intended to be Mark’s predictions. They were his own choices. I’ll aim to update with Mark’s actual predictions if he publishes them.

Statistics by pictures

Generally speaking there are three main phases to any statistical analysis:

  1. Design;
  2. Execution;
  3. Presentation.

Graphical techniques play an important part in both the second and third phases, but the emphasis is different in each. In the second phase the aim is usually exploratory, using graphical representations of data summaries to hunt for structure and relationships that might subsequently be exploited in a formal statistical model. The graphs here tend to be quick but rough, and are intended more for the statistician than the client.

In the presentation phase the emphasis is a bit different, since the analysis has already been completed, usually involving some sort of statistical model and inference. In this case diagrams are used to highlight the results to clients or a wider audience in a way that most effectively illustrates the salient features of the analysis. Very often the strength of the message from a statistical analysis is much more striking when presented graphically rather than as numbers. Moreover, some statisticians have developed the procedure into something of an art form, using graphical techniques not just to convey the results of the analysis, but also to put them back in the context from which the data derive.
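As a simple illustration of the contrast (with made-up data, and matplotlib purely as an example tool), an exploratory plot in the execution phase might be a throwaway one-liner, while the presentation version of the same analysis gets labels, a fitted trend line and a title:

```python
import numpy as np
import matplotlib.pyplot as plt

# Made-up data purely for illustration.
rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 2 * x + rng.normal(size=200)

# Execution phase: quick and rough, for the statistician's own eyes.
plt.scatter(x, y)
plt.show()

# Presentation phase: the same relationship, polished for an audience.
fig, ax = plt.subplots(figsize=(6, 4))
ax.scatter(x, y, s=12, alpha=0.6, color="steelblue", label="observations")
slope, intercept = np.polyfit(x, y, 1)
xs = np.linspace(x.min(), x.max(), 100)
ax.plot(xs, slope * xs + intercept, color="firebrick", label="fitted trend")
ax.set_xlabel("Predictor")
ax.set_ylabel("Response")
ax.set_title("A clear linear relationship")
ax.legend(frameon=False)
fig.tight_layout()
plt.show()
```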

One of my favourite exponents of this technique is Mona Chalabi, who has regular columns in the Guardian, among other places.

Here are a few of her examples:

Most Popular Dog Names in New York

[image]

A Complete History of the Legislation of Same-Sex Marriage

[image]

The Most Pirated Christmas Movies

[image]

And last and almost certainly least…

Untitled

[image]

Tell you what though… that looks a bit more than 16% to me, suggesting a rather excessive use of artistic licence in this particular case.

How to be wrong

When I’m not feeling too fragile to be able to handle it, I sometimes listen to James O’Brien on LBC. As you probably know, he hosts a talk show in which he invites listeners to discuss their views on a wide range of topics, that often begin and end with Brexit. His usual approach is simply to ask people who call in to defend or support their views with hard facts – as opposed to opinion or hearsay – and inevitably they can’t. James himself is well-armed with facts and knowledge, and is consequently able to forensically dissect arguments that are dressed up as factual, but turn out to be anything but. It’s simultaneously inspiring and incredibly depressing.

He’s also just published a book, which is a great read:

[image]

This is the description on Amazon:

Every day, James O’Brien listens to people blaming benefits scroungers, the EU, Muslims, feminists and immigrants. But what makes James’s daily LBC show such essential listening – and has made James a standout social media star – is the careful way he punctures their assumptions and dismantles their arguments live on air, every single morning.

In the bestselling How To Be Right, James provides a hilarious and invigorating guide to talking to people with faulty opinions. With chapters on every lightning-rod issue, James shows how people have been fooled into thinking the way they do, and in each case outlines the key questions to ask to reveal fallacies, inconsistencies and double standards.

If you ever get cornered by ardent Brexiteers, Daily Mail disciples or little England patriots, this book is your conversation survival guide.

And this is the Sun review on the cover:

James O’Brien is the epitome of a smug, sanctimonious, condescending, obsessively politically-correct, champagne-socialist public schoolboy Remoaner.

Obviously, both these opinions should give you the encouragement you need to read the book. Admittedly, it’s only tenuously related to Statistics, but the emphasis on the importance of fact and evidence is a common theme.

But I don’t want to talk about being right. I want to talk about being wrong.

One of my first tasks when I joined Smartodds around 14 years ago was to develop an alternative model to the standard goals model for football. I made a fairly simple suggestion, and we coded it up to run live in parallel to the goals model. We kept it going for a year or so, but rather than being an improvement on the goals model, it tended to give poorer results. This was disappointing, so I looked into things and came up with a ‘proof’ of how, in idealised circumstances, it was impossible for the new model to improve on the goals model. Admittedly, our goals model didn’t quite have the idealised form, so it wasn’t a complete surprise that the numbers were a bit different. But the argument seemed to suggest that we shouldn’t really expect any improvement, and since we weren’t getting very good results anyway, we were happy to bury the new model on the strength of this slightly idealised theoretical argument.

Fast-forward 14 years… Some bright sparks in the RnD team have been experimenting with models that have a similar structure to the one which I’d proved couldn’t really work and which we’d previously abandoned. And they’ve been getting quite good results, which seem to be an improvement on the performance of the original goals model. At first I thought the new models might be so different from the one I’d previously suggested that my arguments about the model not being able to improve on the goals model might not be valid. But when I looked at things more closely, I realised that there was a flaw in my original argument. It wasn’t wrong exactly, but it didn’t apply to the versions of the model we were likely to use in practice.

Of course, this is good and bad news. It’s good news that there’s no reason why the new versions of the model shouldn’t improve on the goals model. It’s bad news that, had we understood that 14 years ago, we might have explored this avenue of research sooner. I should emphasise that this type of model might still end up not improving on our original goals model; it’s just that whereas I thought there was a theoretical argument suggesting that was unlikely, this argument actually doesn’t hold true.

So what’s the point of this post?

Well, all of us are wrong sometimes. And in the world of Statistics, we’re probably wrong more often than most people, and sometimes for good reasons. It might be:

  • We were unlucky in the data we used. They suggested something, but it turned out to be just due to chance.
  • Something changed. We correctly spotted something in some data, but subsequent to that things changed, and what we’d previously spotted no longer applies.
  • The data themselves were incomplete or unreliable.

Or it might be for not-such-good reasons:

  • We made a mistake in the modelling.
  • We made a mistake in the programming.

Or, just maybe, someone was careless when applying a simple mathematical identity in a situation for which it wasn’t really appropriate. Anyway, mistakes are inevitable, so here’s a handy guide about how to be wrong:

  1. Try very hard not to be wrong.
  2. Realise that, despite trying very hard, you might be wrong in any situation, so be constantly aware, as new evidence becomes available, that you may need to modify what you believed to be true.
  3. Once you realise you are wrong, let others know what was wrong and why you made the mistake you did. Humility and honesty are way more useful than evasiveness.
  4. Be aware that other people may be wrong too. Always use other people’s work with an element of caution, and if something seems wrong, politely discuss the possibility with them. (But remember also: you may be wrong about them being wrong).

Hmmm, hope that’s right.


I was encouraged to write a post along these lines by Luigi.Colombo@smartodds.co.uk following a recent chat where we were discussing the mistake I’d made as explained above. To help me not feel quite so bad about it, he mentioned a recent blog post where some of the research described in Daniel Kahneman’s book, ‘Thinking, Fast and Slow’, is also shown to be unreliable. You might remember I discussed this book briefly in a previous post. The essence of that post is that the sample sizes used in much of the reported research are too small for the statistical conclusions reached to be valid. As such, some chapters of Kahneman’s book have to be considered unreliable.

Actually, Kahneman himself seems to have been aware of the problem some years ago, writing an open letter to the relevant researchers, setting out a possible protocol that would avoid the sorts of problems that occurred in the research on which his book chapters were based. However, while Kahneman himself can’t be blamed for the original failures in the research he reported on, the blog post argues that his own earlier research might well have led him to foresee these types of problems. Hence, the rather aggressive tone of his letter seems to me like an attempt at ring-fencing himself from any particular blame for the errors in his book. In other words, this episode seems like a slightly different approach to ‘how to be wrong’ compared with my handy guide above.
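To make the sample-size point concrete, here’s a toy power simulation (the effect size, group size and significance level are illustrative assumptions, not figures taken from the research in question). With a modest true effect and only 20 subjects per group, a standard t-test detects the effect just a small fraction of the time, so many ‘significant’ findings from studies of that size will be flukes or exaggerations:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative assumptions: a true effect of 0.3 standard deviations,
# 20 subjects per group, 5% significance level.
effect, n_per_group, n_sims = 0.3, 20, 10_000

significant = 0
for _ in range(n_sims):
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(effect, 1.0, n_per_group)
    if stats.ttest_ind(a, b).pvalue < 0.05:
        significant += 1

# Power comes out at roughly 0.15: an experiment of this size
# misses the effect about 85% of the time.
print(f"Estimated power: {significant / n_sims:.2f}")
```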

I just made up this one

I saw this the other day…

And the same day I saw this…

One of these items is a cartoon character inventing a statistic just to support an argument that he can’t justify by logic or other means.

The other one is Dilbert.

I don’t go for stats

I’ve mentioned in previous posts that an analysis of the detailed statistics from a game can provide a deeper understanding of team performance than the final result alone. This point of view is increasingly shared and understood in the football world, but there are some areas of resistance. Here’s Mourinho after Man United’s 3-1 defeat to Man City yesterday:

The way people who don’t understand football analyse it with stats. I don’t go for stats. I go for what I felt in the game and it was there until minute 80-something. I consider the performance of my team one with mistakes. It is different from a bad performance.

And here are the stats that he doesn’t go for:

Of course, there’s a fair point to be made: statistics don’t tell the whole story, and it’s always important, wherever possible, to balance the information that they provide with the kind of information you get from an expert watching a game. Equally though, it has to be a missed opportunity not to take any account of the information that is contained in statistics. Or maybe Mourinho is such a total expert that statistics are completely irrelevant compared to his ‘feel for the game’.

Except, oh, wait a minute: ‘Jose Mourinho brings statistics to press conference to silence Marcus Rashford claims‘. Hmmm.

So sad about the leopards

At the recent offsite, Nity.Raj@smartodds.co.uk suggested I do a post on the statistics of climate change. I will do that properly at some point, but there’s such an enormous amount of material to choose from, that I don’t really know where to start or how best to turn it into the “snappy and informative, but fun and light-hearted” type of post that you’ve come to expect from Smartodds loves Statistics.

So, in the meantime, I’ll just drop the following cartoon, made by First Dog on the Moon, who has a regular series in the Guardian. It’s not exactly about climate science, but similar in that it points at humanity’s failures to face up to and confront the effects we are having on our planet, despite the overwhelming statistical and scientific evidence of both the effect and its consequences. It specifically refers to the recent WWF report which confirms, amongst other things, that humanity has wiped out 60% of the world’s animal population since 1970.

Responding to the report, the Guardian quotes Prof Johan Rockström, a global sustainability expert at the Potsdam Institute for Climate Impact Research in Germany, as follows:

We are rapidly running out of time. Only by addressing both ecosystems and climate do we stand a chance of safeguarding a stable planet for humanity’s future on Earth.

Remember, kids: “Listen to the scientists and not the Nazis”.

Lewis Hamilton

Congratulations to Lewis Hamilton on his 5th world championship. He now equals the number of championship wins by Juan Manuel Fangio, but remains behind Michael Schumacher, who won 7.

I hadn’t planned to do a post on this, but got hooked by this article in the Guardian, which I recommend you read. It’s a kind of celebration of Lewis Hamilton’s achievement, but it’s also a critique of the way statistics are used when assessing performance in sports, summarised by this excerpt:

So, statistics are fun but they do not tell the whole story

The specific point being made by the author is that while you can use statistics to compare win rates and other measures of performance of racers from one era with those of another, the statistics themselves don’t take any account of changes in circumstances. In the case of Formula 1, that includes huge changes in levels of safety standards, as well as extraordinary technological improvements in the cars themselves. So, is Lewis Hamilton a better driver than either Michael Schumacher or Juan Manuel Fangio? And who was the better of those two? You can make an argument based on most statistics for any of them, but that simple approach fails to take the development of the sport into account. As the Guardian article explains about the statistics:

They do not describe the conditions in which Fangio raced, in death-trap cars on circuits lined with trees, ditches and houses, wearing highly flammable cotton shirts and trousers and eggshell helmets made of layers of linen soaked in shellac.

Shellac!

Similar arguments apply to other sports as well: Maradona or Lionel Messi? Rod Laver or Roger Federer? Jack Nicklaus or Tiger Woods? It’s easy to compare the statistics and pick a winner, but as with Formula 1, the statistics don’t take account of changes in circumstance, which can be massive in some cases.

Anyway, the point applies equally well to the data that go into our models. They are just that: data. Once reduced to a number, all context disappears (other than the context that’s contained in other data). And though, over many fixtures or races, you might hope that the variations in context balance out, so that it’s reasonable to rely on models that are driven entirely by data, that won’t always be the case.

Time to kill?

Smartodds loves Statistics would like to remind you that the clocks go back an hour this weekend.

You probably heard that the EU is planning to end the practice of switching between ‘summer’ and ‘winter’ times, in which clocks are artificially moved back and forward by an hour at the end of October and March respectively. The rationale for this procedure of so-called daylight saving is closely linked to historical social, agricultural and industrial demands on energy supplies, but what was relevant a century ago, when the practice was first devised, is rather less relevant today.

Some media stories also suggest that putting an end to daylight saving is rather more urgent. For example: “Daylight Savings Time Literally Kills People“. Or even more dramatically: “Why Daylight Saving Time will Kill us All“.

In part there is some basis to these stories. Messing slightly with people’s regular sleep patterns can induce extra tiredness, and there is some evidence that over an entire population this can lead to an increase in the number of driving-related and other accidental deaths. The effect is very slight though, and really says more about the effect of sleep-deprivation on accidental deaths than it does about daylight saving per se.

Rather more surprising and intriguing, though, is an apparent increase in the rate of heart attacks on the day after clocks go forward an hour in March, with a similar decrease on the day after they go back in October. A recent study published in the British Medical Journal found a 24% increase in patients presenting with acute heart attacks on the day after clocks go forward, and a 21% decrease on the day after clocks go back. This was based on a study of many patients over several years, so the differences are too big to have occurred just by chance. So what’s going on? Does daylight saving give people heart attacks?

Well, the first thing a statistician will do is look for other factors which might explain the results. For example:

  1. Since clocks always change early on a Sunday morning, are Sunday, or maybe Monday, generally different from other days of the week in terms of heart attack rates, regardless of the clock change effect?
  2. Are there more heart attacks generally at some times of the year compared to others?

The answer to both these questions is yes, but in the analysis reported by the BMJ both of these effects, and others, were accounted for, so the unusual increases and decreases following daylight saving time changes remain even after such allowances have been made. So again, what’s going on? Does moving the clocks induce heart attacks?

Well, not really. When the researchers of the BMJ study counted the number of patients attending hospital with heart attacks within the entire week following a change in daylight saving, rather than just the next day, they found no difference at all following the time change in March or October. Perhaps for physiological or social reasons, heart attacks appear to be slightly delayed – on average – after the change in October, and brought slightly forward after the change in March. So if you look only at the days immediately following the change, it does look like the change itself is altering the rate of heart attacks. But over a slightly longer window of a week or so, there’s no evidence of a change at all.
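A rough simulation shows how this kind of displacement produces a next-day spike without changing the weekly total. All the numbers here – the weekly event count and the 20% chance of an event being brought forward a day – are invented for illustration, not taken from the BMJ study:

```python
import numpy as np

rng = np.random.default_rng(7)
n_events = 700  # invented: heart attacks in the week after the change

# Baseline: each event falls uniformly on one of the 7 days.
days = rng.integers(0, 7, n_events)

# After the clocks go forward, suppose each event is brought forward
# by one day with probability 0.2 (events already on day 0 stay put).
# Only the timing shifts; the weekly total is untouched.
shift = (rng.random(n_events) < 0.2) & (days > 0)
shifted_days = days - shift.astype(int)

daily = np.bincount(days, minlength=7)
daily_shifted = np.bincount(shifted_days, minlength=7)

print("Day-1 count:  ", daily[0], "->", daily_shifted[0])        # apparent spike
print("Weekly totals:", daily.sum(), "->", daily_shifted.sum())  # identical
```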

In summary, moving the clocks forward or backwards won’t induce anyone to have a heart attack who wasn’t going to have one anyway; the change might just cause someone’s heart attack to occur slightly earlier or later in the same week.

There seem to be two useful messages from this:

  1. As with Simpson’s paradox, we see the danger of simply carrying out a statistical analysis without taking the context into account. Testing the daily data for whether there is a change in heart attack rates when clocks are changed suggests there is an effect. But understanding the context of the problem and looking at the data over a slightly longer timespan indicates that there is no real change.
  2. The media are often just interested in a good story, and won’t let concerns about the quality of a statistical analysis get in the way of that.

I stole most of this material from Matt Parker, who describes himself as a standup mathematician. (I know!) Anyway, if you’re interested, here’s his take on the issue: