Not so clever

You remember that thing about well-produced statistical diagrams telling their own story without the need for additional words?

Well, the same thing goes for badly produced statistical diagrams:

Thanks to for giving me this idea for a post.

No smoke without fire

No one now seriously doubts that cigarette smoking increases your risk of lung cancer and many other diseases, but when the evidence for a relationship between smoking and cancer was first presented in the 1950s, it was strongly challenged by the tobacco industry.

The history of the scientific fight to demonstrate the harmful effects of smoking is summarised in this article. One difficulty from a statistical point of view was that the primary evidence based on retrospective studies was shaky, because smokers tend to give unreliable reports on how much they smoke. Smokers with illnesses tend to overstate how much they smoke; those who are healthy tend to understate their cigarette consumption. And these two effects lead to misleading analyses of historically collected data.

An additional problem was the difficulty of establishing causal relationships from statistical associations. Similar to the examples in a previous post, just because there’s a correlation between smoking and cancer, it doesn’t necessarily mean that smoking is a risk factor for cancer. Indeed, one of the most prominent statisticians of the time – actually of any time – Sir Ronald Fisher, wrote various scientific articles explaining how the correlations observed between smoking and cancer rates could easily be explained by the presence of lurking variables that induce spurious correlations.

At which point it’s worth noting a couple more ‘coincidences’: Fisher was a heavy smoker himself and also an advisor to the Tobacco Manufacturers Standing Committee. In other words, he wasn’t exactly neutral on the matter. But, he was a highly respected scientist, and therefore his scepticism carried considerable weight.

Eventually though, the sheer weight of evidence – including that from long-term prospective studies – was simply too overwhelming to be ignored, and governments fell into line with the scientific community in accepting that smoking is a high risk factor for various types of cancer.

An important milestone in that process was the work of another British statistician, Austin Bradford Hill. As well as being involved in several of the most prominent case studies linking cancer to smoking, he also developed a set of 9 (later extended to 10) criteria for establishing a causal relationship between processes. Though still only guidelines, they provided a framework that is still used today for determining whether associated processes include any causal relationships. And by these criteria, smoking was clearly shown to be a risk factor for cancer.

Now, fast-forward to today and there’s a similar debate about global warming:

  1. Is the planet genuinely heating up or is it just random variation in temperatures?
  2. If it’s heating up, is it a consequence of human activity, or just part of the natural evolution of the planet?
  3. And then what are the consequences for the various bio- and eco-systems living on it?

There are correlations all over the place – for example between CO2 emissions and average global temperatures as described in an earlier post – but could these possibly just be spurious and not indicative of any causal relationships?  Certainly there are industries with vested interests who would like to shroud the arguments in doubt. Well, this nice article applies each of Bradford Hill’s criteria to various aspects of climate science data and establishes that the increases in global temperatures are undoubtedly caused by human activity leading to CO2 release in the atmosphere, and that many observable changes to biological and geographical systems are a knock-on effect of this relationship.

In summary: in the case of the planet, the smoke that we see (global warming) is definitely a consequence of the fire we started (the increased amounts of CO2 released into the atmosphere).

Massively increase your bonus

In one of the earliest posts to the blog last year I set a puzzle where I suggested Smartodds were offering employees the chance of increasing their bonus, and you had to decide whether it was in their interests to accept the offer or not.

(They weren’t, and they still aren’t, but let’s play along.)

Same thing this year, but the rules are different. Eligible employees are invited to gamble their bonus at odds of 10-1 based on the outcome of a game. It works like this…

For argument’s sake, let’s suppose there are 100 employees that are entitled to a bonus. They are told they each have the opportunity to increase their bonus by a factor of 10 by playing the following game:

  • Each of the employees is randomly assigned a number between 1 and 100.
  • Inside a room there are 100 boxes, also labelled 1 to 100.
  • 100 cards, individually numbered from 1 to 100, have been randomly placed inside the boxes, one card per box. For example, box number 1 might contain the card with number 62; box number 2 might contain the card with number 25; and so on.
  • The employees enter the room one at a time, and each can choose any 50 of the boxes to open. If they find the card with their own number in one of those boxes, they win. Otherwise they lose.
  • Though the employees may discuss the game and decide how they will play before they enter the room, they must not convey any information to the other employees after taking their turn.
  • The employees cannot rearrange any of the boxes or the cards – so everyone finds the room in the same state when they enter.
  • The employees will have their bonus multiplied by 10 if all 100 of them are winners. If there is a single loser, they all end up with zero bonus.

Should the employees accept this game, or should they refuse it and keep their original bonuses? And if they accept to play, should they adopt any particular strategy for playing the game?

Give it some thought and then scroll down for some discussion.


A good place to start is to calculate the probability that any one employee is a winner. This happens if one of the 50 boxes they open, out of the 100 available, contains the card with their number. Each box is equally likely to contain their number, so you can easily write down the probability that they win. Scroll down again for the answer to this part:


There are 100 boxes, and the employee selects 50 of them. Each box is equally likely to contain their number, so the probability they find their number in one of the boxes is 50/100 or 1/2.
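That 1/2 is also easy to verify with a quick simulation. Here’s a sketch in Python (the function name and trial count are my own choices):

```python
import random

def employee_finds_number(n_boxes=100, n_opens=50):
    # Cards numbered 1..100 are shuffled at random into the 100 boxes.
    cards = list(range(1, n_boxes + 1))
    random.shuffle(cards)
    # Employee number 1 opens 50 boxes chosen at random.
    opened = random.sample(range(n_boxes), n_opens)
    return any(cards[i] == 1 for i in opened)

random.seed(1)
trials = 20000
wins = sum(employee_finds_number() for _ in range(trials))
print(wins / trials)  # should be close to 0.5
```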

So that’s the probability that any one employee wins. We now need to calculate the probability that they all win – bearing in mind the rules of the game – and then decide whether the bet is worth taking.

In summary:

  • There are 100 employees;
  • The probability that any one employee wins their game is 1/2;
  • If they all win, their bonuses will all be multiplied by 10;
  • If any one of them loses, they all get zero bonus.

Should the employees choose to play or to keep their original bonus? And if they play, is there any particular strategy they should adopt?

If you’d like to send me your answers I’d be really happy to hear from you. If you prefer just to send me a yes/no answer, perhaps just based on your own intuition, I’d be equally happy to get your response, and you can use this form to send the answer in that case.

This is a variant on a puzzle that was pointed out to me. I think it’s a little trickier than previous puzzles I’ve posted, but it illustrates an important statistical issue that I’ll discuss when giving the solution.

Cause and effect


If you don’t see why this cartoon is funny, hopefully you will by the end of this post.

The following graph shows the volume of crude oil imports from Norway to the US and the number of drivers killed in collisions with trains, each per year:

There is clearly a very strong similarity between the two graphs. The standard way of measuring statistical association between two series is the correlation coefficient. If the two series were completely unrelated the correlation coefficient would be zero; if they were perfectly in sync it would be 1. For the two series in the graph the correlation coefficient is 0.95, which is pretty close to 1. So, you’d conclude that crude oil imports and deaths due to train collisions are strongly associated with one another – as one goes up, so does the other, and vice versa.
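If you’d like to play along at home, the correlation coefficient takes one line to compute with numpy. The numbers below are made up to mimic two declining series – they are not the actual oil-import or train-death figures:

```python
import numpy as np

# Two illustrative series, both trending downwards over ten 'years'.
oil_imports = np.array([140, 132, 125, 118, 112, 100, 95, 88, 80, 75])
train_deaths = np.array([98, 92, 89, 84, 80, 72, 70, 63, 58, 54])

# Pearson correlation coefficient between the two series.
r = np.corrcoef(oil_imports, train_deaths)[0, 1]
print(round(r, 3))  # very close to 1
```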

But this is crazy. How can oil imports and train deaths possibly be related?

This is just one of a number of examples of spurious correlations kindly sent to me by Other examples there include:

  1. The number of deaths by drowning and the number of films Nicolas Cage has appeared in;
  2. Cheese consumption and the number of deaths by entanglement in bedsheets;
  3. Divorce rates and consumption of margarine.

In each case, like in the example above, the correlation coefficient is very close to 1. But equally in each case, it’s absurd to think that there could be any genuine connection between the processes, regardless of what statistics might say. So, what’s going on? Is Statistics wrong?

No, Statistics is never wrong, but the way it’s used and interpreted often is.

There are two possible explanations for spurious correlations like the one observed above:

  1. The processes might be genuinely correlated, but not due to any causal relationship. Instead, both might be affected in the same way by some other variable lurking in the background. Wikipedia gives the example of ice cream sales being correlated with deaths by drowning. Neither causes the other, but they tend to be simultaneously both large or low – and therefore correlated – because each increases in periods of very hot weather.
  2. The processes might not be correlated at all, but just by chance due to random variation in the data, they look correlated. This is unlikely to happen with just a single pair of processes, but if we scan through enough possible pairs, some are bound to.

Most likely, for series like those shown in the graph above, there’s a bit of both of these effects in play. Crude oil imports and deaths by train collisions have probably both diminished over time for completely unrelated reasons. This is the first of those effects, where time is the lurking variable having a similar effect on both oil imports and train collisions. But on top of that, the random-looking oscillations in the curves, which occur at around the same times for each series, are probably just chance coincidences. Most series that are uncorrelated won’t share such random-looking variations, but every so often they will, just by chance. And the processes shown in the graph above might be the one pair out of the thousands that have been examined which have this unusual similarity just by chance.

So, for both these reasons, correlation between variables doesn’t establish a causal relationship. And that’s why the cartoon above is funny. But if we can’t use correlation to establish whether a relationship is causal or not, what can we use?

We’ll discuss this in a future post.

Meantime, just in case you haven’t had your fill of spurious correlations, you can either get a whole book-full of them at Amazon, or use this page to explore many other possible examples.


I recently read that more than 250 people died between 2011 and 2017 taking selfies (so-called killfies). A Wikipedia entry gives a list of some of these deaths, as well as injuries, and categorises the fatalities as due to the following causes:

  • Transport
  • Electrocution
  • Fall
  • Firearm
  • Drowned
  • Animal
  • Other

If you have a macabre sense of humour it makes for entertaining reading while also providing you with useful life tips: for example, don’t take selfies with a walrus.

More detail on some of these incidents can also be found here.

Meanwhile, this article includes the following statistically-based advice:

Humanity is actually very susceptible to selfie death. Soon, you will be more likely to die taking a selfie than you are getting attacked by a shark. That’s not me talking: that’s statistical likelihood. Stay off Instagram and stay alive

Yes, worry less about sharks, but a bit more about Instagram. Thanks Statistics.

The original academic article which identified the more than 250 selfie deaths is available here. It actually contains some interesting statistics:

  • Men are more susceptible to death-by-selfie than women, even though women take more selfies;
  • Most deaths occur in the 20-29 age group;
  • Men were more likely to die taking high-risk selfies than women;
  • Most selfie deaths due to firearms occurred in the United States;
  • The highest number of selfie deaths is in India.

None of these conclusions seems especially surprising to me, except the last one. Why India? Have a think yourself why that might be before scrolling down:


There are various possible factors. Maybe it’s because the population in India is so high. Maybe people just take more selfies in India. Maybe the environment there is more dangerous. Maybe India has a culture for high risk-taking. Maybe it’s a combination of these things.

Or maybe… if you look at the academic paper I referred to above, the authors are based at Indian academic institutes and describe their methodology as follows:

We performed a comprehensive search for keywords such as “selfie deaths; selfie accidents; selfie mortality; self photography deaths; koolfie deaths; mobile death/accidents” from news reports to gather information regarding selfie deaths.

I have no reason to doubt the integrity of these scientists, but it’s easy to imagine that their knowledge of where to look in the media for reported selfie deaths was more complete for Indian sources than for those of other countries. In which case, they would introduce an unintentional bias in their results by accessing a disproportionate number of reports of deaths in India.

In conclusion: be sceptical about any statistical analysis. If the sampling is biased for any reason, the conclusions almost certainly will be as well.

Proof reading

In an earlier post I described what’s generally known as the Mutilated Chessboard Puzzle. It goes like this: a chessboard has 2 diagonally opposite corners removed. The challenge is to cover the remaining 62 squares with 31 dominoes, each of which can cover 2 adjacent horizontal or vertical squares. Or, to show that such a coverage is impossible.

Several of you wrote to me about this, in many cases providing the correct solution. Thanks and congratulations to all of you.

The correct solution is that it is impossible to cover the remaining squares of the chessboard this way. But what’s interesting about this puzzle – to me at least – is what it illustrates about mathematical proof.

There would essentially be 2 ways to prove the impossibility of  a domino coverage. One way would be to enumerate every possible configuration of the 31 dominoes, and to show that none of these configurations covers the 62 remaining squares on the chessboard. But this takes a lot of time – there are many different ways of laying the dominoes on the chessboard.

The alternative approach is to ‘step back’ and try to reason logically why such a configuration is impossible. This approach won’t always work, but it’s often short and elegant when it does. And with the mutilated chessboard puzzle, it works beautifully…

When you place a domino on a chessboard it will cover 1 black and 1 red square (using the colours in the diagram above). So, 31 dominoes will cover 31 black and 31 red squares. But if you remove diagonally opposite corners from a chessboard, they will be of the same colour, so you’re left with either 32 black squares and 30 red, or vice versa. But you’re never left with 31 squares of each colour, which is the only pattern possible with 31 dominoes. So it’s impossible and the result is proved. Simply and beautifully.
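The colour-counting argument can even be written out as a few lines of code – a sketch, with the board indexed from 0 and a square’s ‘colour’ given by the parity of row plus column:

```python
# Squares (r, c) with (r + c) even are one colour, odd the other.
# Diagonally opposite corners (0, 0) and (7, 7) are both 'even',
# so removing them leaves an unequal colour count.
removed = {(0, 0), (7, 7)}

counts = [0, 0]
for r in range(8):
    for c in range(8):
        if (r, c) not in removed:
            counts[(r + c) % 2] += 1

# 30 of one colour, 32 of the other: 31 dominoes, each covering one
# square of each colour, can therefore never cover the board.
print(counts)  # [30, 32]
```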

As I mentioned in the previous post the scientific writer Cathy O’Neil cites having been shown this puzzle by her father at a young age as the trigger for her lifelong passion for mathematics. And maybe, even if you don’t have a passion for mathematics yourself, you can at least see why the elegance of this proof might trigger someone’s love for mathematics in the way it did for Cathy.

Having said all that, computer technology now makes proof by enumeration possible in situations where the number of configurations to check might be very large. But structured mathematical thinking is still often necessary to determine the parameters of the search. A good example of this is the well-known four colour theorem. This states that if you take any region that’s been divided into sub-regions – like a map divided into countries – then you only need four colours to shade the map in such a way that no adjacent regions have the same colour.

Here’s an example from the Wiki post:

You can see that, despite the complexity of the sub-regions, only 4 colours were needed to achieve a colouring in which no two adjacent regions have the same colour.
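For a single small map you can even check the theorem’s claim directly by brute force. This sketch uses a made-up adjacency list of six regions (not the Wikipedia example) and searches all 4^6 = 4096 possible colour assignments:

```python
from itertools import product

# Hypothetical map: which pairs of regions share a border.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3),
         (2, 4), (3, 4), (3, 5), (4, 5)]

def valid(colouring):
    # A colouring works if no two bordering regions share a colour.
    return all(colouring[a] != colouring[b] for a, b in edges)

# Try every assignment of 4 colours to the 6 regions.
colouring = next(c for c in product(range(4), repeat=6) if valid(c))
print(colouring)
```

This is essentially what the computer part of the four colour proof did, just for each of the roughly 2,000 maps that the graph-theoretic argument had shown were sufficient.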

But how would you prove that any map of this type would require at most 4 colours? Ideally, as with the mutilated chessboard puzzle, you’d like a ‘stand back’ proof, based on pure logic. But so far no one has been able to find one. Equally, enumeration of all possible maps is clearly impossible – any region can be divided into subregions in infinitely many ways.

Yet a proof has been found which is a kind of hybrid of the ‘stand back’ and ‘enumeration’ approaches. First, a deep understanding of mathematical graphs was used to reduce the infinitely many possible regions to a finite number – around 2,000 – of maps to consider. That’s to say, it was shown that it’s not necessary to consider all possible regional mappings: if a 4-colour shading could be found for each map in this particular set, that would be enough to prove that such a shading existed for all possible maps. Then a computer algorithm was developed to search for a 4-colour shading for each of those maps. Putting all of this together completed the proof that a 4-colour shading exists for any map, not just the ones included in the search.

Now, none of this is strictly Statistics, though Cathy O’Neil’s book that I referred to in the previous post is in the field of data science, which is at least a close neighbour of Statistics. But in any case, Statistics is built on a solid mathematical framework, and things that we’ve seen in previous posts like the Central Limit Theorem – the phenomenon by which the frequency distributions of many naturally occurring phenomena end up looking bell-shaped – are often based on the proof of a formal mathematical expression, which in some cases is as simple and elegant as that of the mutilated chessboard puzzle.

I’ll stop this thread here so as to avoid a puzzle overload, but I did want to mention that there is an extension of the Mutilated Chessboard Puzzle. Rather than removing 2 diagonally opposite corners, suppose I remove any 2 arbitrary squares, possibly adjacent, possibly not. In that case, can the remaining squares be covered by 31 dominoes?

If the 2 squares removed are of the same colour, the solution given above works equally well, so we know the problem can’t be solved in that case. But what if I remove one black and one red square? In that case, can the remaining squares be covered by the 31 dominoes:

  1. Always;
  2. Sometimes; or
  3. Never?

I already sent this problem to some of you who’d sent me a solution to the original problem. And I should give a special mention to  who provided a solution which is completely different to the standard textbook solution. Which illustrates another great thing about mathematics: there is often more than one solution to the same problem. If you’d like to try this extension to the original problem, or discuss it with me, please drop me a line.



Statistics of the decade

Now that the nights are drawing in, our minds naturally turn to regular end-of-year events and activities: Halloween; Bonfire night; Christmas; New Year’s eve; and the Royal Statistical Society ‘Statistics of the Year’ competition.

You may remember from a previous post that there are 2 categories for Statistic of the Year: ‘UK’ and ‘International’. You may also remember that last year’s winners were 27.8% and 90.5% respectively. (Don’t ask, just look back at the previous post).

So, it’s that time again, and you are free to nominate your own statistics for the 2019 edition. Full details on the criteria for nominations are given at the RSS link above, but suggested categories include:

  • A statistic that debunks a popular myth;
  • A statistic relevant to a key news story or social trend;
  • A statistic relevant to a phenomenon/craze this year.

But if that’s not exciting enough, this year also sees the end of the decade, so you are also invited to nominate for ‘Statistic of the Decade’, again in UK and International categories. As the RSS say:

The Royal Statistical Society is not only looking for statistics that captured the zeitgeist of 2019, but as the decade draws to a close, we are also seeking statistics that can help define the 2010s.

So, what do you think? What statistics captured 2019’s zeitgeist for you? And which statistics helped define your 2010s?

Please feel free to nominate to the RSS yourselves, but if you send me your nomination directly, I’ll post a collection of the replies I receive.

Thanks to for pointing out to me that the nominations for this year were now open.

Love it or hate it

A while ago I wrote a post about the practice of advertistics – the use, and more often misuse, of Statistics by advertising companies to promote their products. And I referenced an article in the Guardian which included a number of examples of advertistics. One of these examples was Marmite.

You probably know the line: Marmite – you either love it or hate it. That’s an advertistic in itself. And almost certainly provably incorrect – I just have to find one person who’s indifferent to Marmite.

But I want to discuss a slightly different issue. This ‘love or hate Marmite’ theme has turned up as an advertistic for a completely different product…

DNAfit is one of a number of do-it-yourself DNA testing kits. Here’s what they say about themselves:

DNAfit helps you become the best possible version of yourself. We promise a smarter, easier and more effective solution to health and fitness, entirely unique to your DNA profile. Whatever your goal, DNAfit will ensure you live a longer, happier and healthier life.

And here’s the eminent statistician, er, Rio Ferdinand, to persuade you with statistical facts as to why you should sign up with DNAfit.

But where’s the Marmite?

Well, as part of a campaign that was purportedly set up to address a decline in Marmite sales, but was coincidentally promoted as an advertistic for the DNAfit testing kit, a scientific project was set up to find genetic markers that identify whether a person will be a lover or hater of Marmite. (Let’s ignore, for the moment, the fact that the easiest way to discover if a person is a ‘lover’ or ‘hater’ of Marmite is simply to ask them.)

Here’s a summary of what they did:

  • They recruited a sample of 261 individuals;
  • For each individual, they took a DNA sample;
  • They also questioned the individuals to determine whether they love or hate Marmite;
  • They then applied standard statistical techniques to identify a small number of genetic markers that separate the Marmite lovers from the haters. Essentially, they looked for a combination of DNA markers which were present in the ‘haters’, but absent in the ‘lovers’ (or vice versa).

Finally, the study was given a sheen of respectability through the publication of a white paper with various genetic scientists as authors.

But, here’s the typical reaction of another scientist on receiving a press release about the study:

Wow, sorry about the language there. So, what’s wrong?

The Marmite gene study is actually pretty poor science. One reason, as explained in this New Scientist article, is that there’s no control for environmental factors. For example, several members of a family might all love Marmite because the parents do and introduced their kids to it at a very early age. The close family connection will also mean that these individuals have similar DNA. So, you’ll find a set of genetic characteristics that each of these family members have, and they all also love Marmite. Conclusion – these are genetic markers for loving Marmite. Wrong: these are genetic markers for this particular family who, because they share meals together, all love Marmite.

I’d guess there are other factors too. A sample of 261 seems rather small to me. There are many possible genetic markers, and many, many more combinations of genetic markers. With so many options it’s almost certain that purely by chance in 261 individuals you can find one set of markers shared only by the ‘lovers’ and another set shared only by the ‘haters’. We’ve seen this stuff before: look at enough things and something unlikely is bound to occur just by chance. It’s just unlikely to happen again outside of the sample of individuals that took part in the study.
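That ‘look at enough things’ effect is easy to demonstrate. In this sketch (the numbers are my own choices), the lover/hater labels and all the candidate markers are pure coin flips, yet with 10,000 markers to scan through, at least one will appear to ‘predict’ the labels noticeably better than the 50% you’d expect:

```python
import random

random.seed(2)
n, p = 261, 10000  # 261 people, 10,000 candidate markers

# 'Lover'/'hater' labels assigned completely at random: no real signal.
labels = [random.randint(0, 1) for _ in range(n)]

best = 0.0
for _ in range(p):
    marker = [random.randint(0, 1) for _ in range(n)]  # pure noise
    agree = sum(m == y for m, y in zip(marker, labels)) / n
    # A marker that mostly disagrees is just as 'predictive' flipped round.
    best = max(best, agree, 1 - agree)

print(best)  # comfortably above 0.5, despite there being nothing to find
```

Of course, a marker found this way would fail on any fresh sample of people – which is exactly why validation on an independent set matters.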

Moreover, there seems to have been no attempt at validating the results on an independent set of individuals.

Unfortunately for DNAfit and Marmite, they took the campaign one stage further and encouraged Marmite customers – and non-customers – to carry out their own DNA test to see if they were Marmite ‘lovers’ or ‘haters’ using the classification found in the genetic study. If only they’d thought to do this as part of the study itself. Because although the test claimed to be 99.98% accurate, rather many people who paid to be tested found they’d been wrongly classified.

One ‘lover’ who was classified as a ‘hater’ wrote:

I was genuinely upset when I got my results back. Mostly because, hello, I am a ‘lover’, but also because I feel like Marmite led me on with a cheap publicity tool and I fell for it. I feel dirty and used.

While a wrongly-classified ‘hater’ said:

I am somewhat offended! I haven’t touched Marmite since I was about eight because even just the thought of it makes me want to curl up into a ball and scrub my tounge.

Ouch! ‘Dirty and used’. ‘Scrub my tongue’. Not great publicity for either Marmite or DNAfit, and both companies seem to have dropped the campaign pretty quickly and deleted as many references to it as they were able.

Ah, the price of doing Statistics badly.

p.s. There was a warning in the ads about a misclassification rate higher than 0.02% but they just dismissed it as fake news…



A day in the life

Over the next few weeks I’m planning to include a couple of posts looking at the way Statistics gets used – and often misused – in the media.

First though, I want to emphasise the extent to which Statistics pervades news stories. It’s everywhere. But we’re so accustomed to this fact, we hardly pay attention. So, I chose a day randomly last year – when I first planned this post – and made a note of all the articles that I came across which were based one way or another on Statistics.

In no particular order….

Article 1: An analysis of the ways the economy had been affected to date since the Brexit referendum.

Article 2: A report in your super soaraway Sun about research which shows 40% of the British population don’t hold cutlery correctly. (!)

Article 3: A BBC report about a study into heart defects and regeneration rates in Mexican tetra fish which may offer clues to help reduce heart disease rates in humans.

Article 4: A report showing that children’s school performance may be affected by their exact age on entry.

Article 5: A report into the rates of prescriptions of anti-depressants to children and the possible consequences of this.

Article 6: A survey of the number of teenage gamblers.

Article 7: A report on projections of the numbers of people who could be affected by future insulin shortages.

Article 8: A report on a study that suggests children’s weights are not driven by patterns of parental feeding, but rather the opposite: parents tend to adapt feeding patterns to the natural weight of their children.

Article 9: A comparison of football teams in terms of performance this season relative to last season.

Article 10: Not really about statistics exactly, but a report showing that the UK’s top-paid boss is Denise Coates, the co-founder of Bet365, who has just had a pay-rise of £265m. Includes a nice graphic showing how her salary has risen year-on-year.

Article 11: Report on a study showing failure rates of cars in MOT tests due to excessive emission rates.

Article 12: A report into an increase in the rate of anti-depressant prescriptions following the EU referendum.

Article 13: A report on rates of ice-melt in Antarctica that suggest a sub-surface radioactive source.

Article 14: A report suggesting rats are getting bigger and what the implications might be.

Article 15: An explanation of algorithms that can distinguish between human and bot conversations.

Article 16: A report suggesting that global internet growth is slowing.

So that’s 16 articles in the papers I happened to look at on a random day. Pretty sure I could have picked any day and any set of papers and it would have been a similar story.

Now here’s a challenge: choose your own day and scan the papers (even just the online versions) to see how many stories have an underlying statistical content. And if you find something that’s suitable for the blog, please pass it on to me – that would be a great bonus.

When I was a kid I went on a school exchange trip to Germany. For some reason we had a lesson with our German hosts in which we were asked to explain the meaning of the Beatles’ ‘A Day in the Life’….

Embarrassingly, I think I tried to give a literal word-by-word interpretation. But if I’d known then what I know about Statistics now, I think I could probably have made a better effort.

Here are the lyrics from one of the verses…

Ah I read the news today, oh boy
Four thousand holes in Blackburn, Lancashire
And though the holes were rather small
They had to count them all
Now they know how many holes it takes to fill the Albert Hall

Weapons of math destruction

I haven’t read it, but Cathy O’Neil’s ‘Weapons of Math Destruction‘  is a great title for a book. Here’s what one reviewer wrote:

Cathy O’Neil, an experienced data scientist and mathematics professor, illustrates the pitfalls of allowing data scientists to operate in a moral and ethical vacuum, including how the poor and disadvantaged are targeted for payday loans, high-cost insurance and political messaging on the basis of their zipcodes and other harvested data.

So, WOMD shows how the data-based algorithms that increasingly form the fabric of our lives – from Google to Facebook to banks to shopping to politics – and the statistical methodology behind them are actually pushing societies in the direction of greater inequality and reduced democracy.

At the time of writing WOMD these arguments were still in their infancy; but now, as we start to live with the repercussions of the successful campaign to remove Britain from the EU – which was largely driven by a highly professional exercise in Data Science – they seem much more relevant and urgent.

Anyway, Cathy O’Neil herself recently gave an interview to Bloomberg. Unfortunately, you now have to subscribe to read the whole article, so you won’t see much if you follow the link. But it was an interesting interview for various reasons. In particular, she discussed the trigger which led her to a love of data and mathematics. She wrote that when she was nine her father showed her a mathematics puzzle. And solving that problem led Cathy to a lifelong appreciation of the power of mathematical thinking. She wrote…

… I’ve never felt more empowered by anything since.

It’s more of a mathematical than a statistical puzzle, but maybe you’d like to think about it for yourself anyway…

Consider this diagram:

It’s a chessboard with 2 of the corner squares removed. Now, suppose you had a set of 31 dominoes, with each domino being able to cover 2 adjacent horizontal or vertical squares. Your aim is to find a way of covering the 62 squares of the mutilated board with the 31 dominoes. If you’d like to try it, mail me with either a diagram or photo of your solution; or, if you think it can’t be done, mail me an explanation. I’ll discuss the solution in a future post.