Happy International Day of Happiness

Did you know March 20th is the International Day of Happiness? Did you even know there was an International Day of Happiness?

Anyway, just in case you’re interested, the UN, which founded the day and organises associated annual events, produces an annual report which is essentially a statistical analysis that determines the extent of happiness in different countries of the world. It turns out that the happiest country right now is Finland, while the least happy is South Sudan. The UK is 15th. I’ll get back to you in a year’s time to let you know if we end up moving closer to Finland or South Sudan in the happiness stakes post-Brexit.

It’s not your fault (maybe)

 

Most of you who came through the UK school system will have taken GCSEs at the end of your secondary school education. But did that happen in an even-numbered or an odd-numbered year? If it was an even-numbered year, I have good news for you: a ready-made and statistically validated excuse as to why your results weren’t as good as they could have been.

A recent article in the Guardian pointed to academic research which compared patterns of GCSE results in years with either a World Cup or Euro tournament final – i.e. even-numbered years – with those of other years – i.e. odd-numbered years. They found, for example, that the chances of a student achieving 5 good GCSE grades is 12% lower for students in a tournament year compared with a non-tournament year. This is a big difference, and given the size of the study, strongly significant in statistical terms. In other words, it’s almost impossible that a difference of this magnitude could have occurred by chance if there were really no effect.
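To make the phrase ‘strongly significant’ concrete, here’s a sketch of the kind of calculation involved – a two-proportion z-test. The cohort sizes and pass rates below are invented purely for illustration; the article doesn’t give the study’s raw counts.

```python
from math import sqrt, erf

def two_prop_z(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test for a difference in pass rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # 2 * P(Z > |z|)
    return z, p_value

# Invented numbers: a 57% pass rate in a non-tournament year versus a
# 12% relative drop in a tournament year, 100,000 students per cohort.
z, p = two_prop_z(57000, 100000, 50160, 100000)
```

With cohorts of this size, the p-value is effectively zero: a relative difference of 12% simply doesn’t arise by chance.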

The implication of the research is that the World Cup and Euros, which take place at roughly the same time as GCSE final examinations, have a distracting effect on students, leading to poorer results. Now, to be clear: the analysis cannot prove this claim. The fact that there is a 2-year cycle in the quality of results is beyond doubt. But this could be due to any cause with a 2-year cycle that coincides with GCSE finals (and major football finals). But what could that possibly be?

Moreover, here’s another thing: the difference in performance in tournament and non-tournament years varies among types of students, and is greatest for the types of students that you’d guess are most likely to be distracted by football.

  1. The effect is greater for boys than for girls, though it is also present and significant for girls.
  2. The difference in performance (of achieving five or more good GCSE grades) reaches 28% for white working class boys.
  3. The difference for black boys with a Caribbean background is similarly around 28%.

So, although it requires a leap of faith to assume that the tournament effect is causal rather than coincidental so far as GCSE performance goes, the strength of circumstantial evidence is such that it’s a very small leap of faith in this particular case.

 

The numbers game

If you’re reading this post, you’re likely to be aware already of the importance of Statistics and data for various aspects of sport in general and football in particular. Nonetheless, I recently came across this short film, produced by FourFourTwo magazine, which gives a nice history of the evolution of data analytics in football. If you need a refresher on the topic, this isn’t a bad place to look.

And just in case you don’t think that’s sufficient to justify this post in a Statistics blog, FourFourTwo claims to be ‘the world’s biggest football magazine’. Moreover, many of the articles on the magazine’s website are analytics-orientated. For example: ‘Ronaldo averaged a game every 4.3 days’. Admittedly, many of these articles are barely-disguised advertisements for a wearable GPS device intended for tracking the activity of players during matches. But I suppose even £199 is a number, right?

 

Happy Pi day

Oops, I almost missed the chance to wish you a happy Pi day. So, almost belatedly:

Happy Pi day!!!

You probably know that Pi – or more accurately, 𝜋 – is one of the most important numbers in mathematics, occurring in many surprising places.

Most people first come across 𝜋 at school, where you learn that it’s the ratio between the circumference and the diameter of any circle. But 𝜋 also crops up in almost every other area of mathematics as well, including Statistics.  In a future post I’ll give an example of this.

Meantime, why is today Pi day? Well, today is March 14th, or 3/14 if you’re American. And the approximate value of Pi is 3.14. More accurately, here’s the value of 𝜋 to 100 digits:

3.14159 26535 89793 23846 26433 83279 50288 41971 69399 37510 58209 74944 59230 78164 06286 20899 86280 34825 34211 70679

Not enough for you? You can get the first 100,000 digits here.
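And since this is a Statistics blog, here’s one small way of seeing 𝜋 turn up in a statistical setting – estimating it by simulation. (This is just the classic Monte Carlo sketch, not the example I have in mind for the future post.)

```python
import random

def estimate_pi(n_samples, seed=0):
    """Monte Carlo estimate of pi: the fraction of uniform random points
    in the unit square falling inside the quarter circle is pi/4."""
    rng = random.Random(seed)
    inside = sum(
        1
        for _ in range(n_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4 * inside / n_samples

print(estimate_pi(1_000_000))  # close to 3.14159
```

A million simulated points gets you the first two or three digits; each extra digit of accuracy costs roughly a hundred times more points, so this is a terrible way to compute 𝜋 – but a nice way to see it appear from pure randomness.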

But that’s just a part of the story. You probably also know that Pi is what’s known as an irrational number, which means that its decimal representation is infinite and non-repeating. And today it was announced that Pi has just been computed to an accuracy of 31.4 trillion decimal digits, beating the previous most accurate computation by nearly 9 trillion digits.

That’s impressive computing power, obviously, but how about simply remembering the digits of 𝜋? Chances are you remember from school the first three digits of 𝜋: 3, 1, 4. But the current world record for remembering the value of 𝜋 is 70,030 digits, held by Suresh Kumar Sharma of India. And almost as impressively, here’s an 11-year-old kid who managed 2091 digits.

Like I say, I’ll write about 𝜋’s importance in Statistics in another post.

Meantime, here’s Homer:

 

Love Island

A while back Harry.Hill@smartodds.co.uk gave a talk to the (then) quant team about trading strategies. The general issue is well-known: traders have to decide when to place a bet. Generally speaking they can place a bet early, when the price – the amount you get if you win the bet – is likely to be reasonably attractive. But in that case the liquidity of the market – the amount of money you can bet against – is likely to be low. Or they can wait until there is greater liquidity, but then the price is likely to be less attractive. So, given the option of a certain bet size at a stated price, should they bet now or wait in the hope of being able to make a bigger bet, albeit at a probably poorer price?

In general this is a difficult problem to tackle, and to make any sort of progress some assumptions have to be made about the way both prices and liquidity are likely to change as kick-off approaches. And Harry was presenting some tentative ideas, and pointing out some relevant research, that might enable us to get a handle on some of these issues.

Anyway, one of the pieces of work Harry referred to is a paper by F. Thomas Bruss, which includes the following type of example. You play a game where you can throw a dice (say) 10 times. Your objective is to throw a 6, at which point you can nominate that as your score, or continue. But, here’s the catch: you only win if you throw a 6 and it’s the final 6 in the sequence of 10 throws.

So, suppose you throw a 6 on the 3rd roll; should you stop? How about the 7th roll? Or the 9th? You can maybe see the connection with the trading issue: both problems require us to choose whether to stop or continue, based on an evaluation of the risk of what will subsequently occur.
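The game is simple enough to simulate. Here’s a sketch in Python of the natural family of strategies – accept the first 6 thrown on roll k or later – to see which threshold k works best. (This is my own toy simulation, not anything taken from Bruss’s paper.)

```python
import random

def win_prob(threshold, n_sims=100_000, seed=1):
    """Estimate the win probability of the 'last 6' game when we accept
    the first 6 thrown on roll `threshold` or later, out of 10 rolls.
    We win if the 6 we stop on turns out to be the last 6 rolled."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_sims):
        rolls = [rng.randint(1, 6) for _ in range(10)]
        stop = next((t for t in range(threshold - 1, 10) if rolls[t] == 6), None)
        if stop is not None and 6 not in rolls[stop + 1:]:
            wins += 1
    return wins / n_sims

# Trying each threshold suggests that waiting until around roll 6 is
# best, with a win probability of roughly (5/6)**5, i.e. about 0.40.
```

Stopping too early risks a later 6 overtaking you; stopping too late risks never seeing another 6 at all, and the simulation shows where the balance lies.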

Fast-forward a few days after Harry’s talk and I was reading Alex Bellos’s column in the Guardian. Alex is a journalist who writes about both football and mathematics (and sometimes both at the same time). His bi-weekly contributions to the Guardian take the form of mathematically-based puzzles. These puzzles are quite varied, covering everything from logic to geometry to arithmetic and so on. And sometimes even Statistics. Anyway, the puzzle I was reading after Harry’s talk is here. If you have time, take a read. Otherwise, here’s a brief summary.

It’s a basic version of Love Island. You have to choose from 3 potential love partners, but you only see them individually and sequentially. You are shown the first potential partner, and can decide to keep them or not. If you keep them, everything stops there. Otherwise you are shown the second potential partner. Again, you have to stick or twist: you can keep them, or you reject and are shown the third possibility. And in that case you are obliged to stick with that option.

In summary: once you stick with someone, that’s the end of the game. But if you reject someone, you can’t go back to them later. The question is: what strategy should you adopt in order to maximise the chances of choosing the person that you would have picked if you had seen all 3 at the same time?

Maybe have a think about this before reading on.

|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|

As well as giving a clearer description of the problem, Alex’s article also contains a link to his discussion of the solution. But what’s interesting is that it’s another example of an optimal stopping problem: once we’ve seen a new potential partner, and also previous potential partners, we have to make a decision on whether to stop with what we currently have, or risk trying to get an improvement in the future, knowing that we could also end up with something/someone worse. And if we can solve the problem for love partners, we are one step towards solving the problem for traders as well.


The Love Island problem discussed by Alex is actually a special case of The Secretary Problem.  A company needs to hire a secretary and does so by individual interviews. Once they’ve conducted an interview they have to hire or reject that candidate, without the possibility of returning to him/her once rejected. What strategy should they adopt in order to try to get the best candidate? In the Love Island version, there are just 3 candidates; in the more general problem, there can be any number. With 3 choices, and a little bit of patience, you can probably find the solution yourself (or follow the links towards Alex’s discussion of the solution). But how about if you had 1000 possible love partners? (Disclaimer: you don’t).

Actually, there is a remarkably simple solution to this problem whatever the number of options to choose from: whether it’s 3, 1000, 10,000,000 or whatever. Let this number of candidates be N. Then reject all candidates up to the Mth, for some value of M, but keep note of the best of those M candidates – call it C. Then accept the first subsequent candidate who is better than C (or the last candidate if none happens to be better).

But how to choose M? Well, even more remarkably, it turns out that if N is reasonably large, the best choice for M is around N/e, where e ≈ 2.718 is a number that crops up a lot in mathematics. For N=1000 candidates, this means rejecting the first 368 and then choosing the first that is better than the best of those. And one more remarkable thing about this result: the probability that the candidate selected this way is actually the best out of all the available candidates is 1/e, or approximately 37%, regardless of the value of N.

With N=3, N is too small for this approximate calculation of M to be accurate, but if you calculate the solution to the problem – or look at Alex’s – you’ll see that it is precisely of this form, with M=1 and a probability of 50% of picking the best candidate overall.
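If you’d like to check the 37% claim numerically rather than take it on trust, the reject-then-choose rule is easy to simulate. A quick sketch of my own, in plain Python:

```python
import math
import random

def secretary_success_rate(n, n_sims=10_000, seed=2):
    """Simulate the reject-then-choose rule: reject the first n/e
    candidates, remember the best of them, then accept the first later
    candidate who beats it (or the last candidate if none does).
    Returns the fraction of runs in which the overall best was chosen."""
    rng = random.Random(seed)
    m = round(n / math.e)
    hits = 0
    for _ in range(n_sims):
        ranks = list(range(n))  # higher rank = better candidate
        rng.shuffle(ranks)
        best_seen = max(ranks[:m])
        chosen = next((r for r in ranks[m:] if r > best_seen), ranks[-1])
        hits += chosen == n - 1
    return hits / n_sims

print(secretary_success_rate(1000))  # close to 1/e, i.e. about 0.37
```

Run it with n=3 and the success rate settles near 50%, matching the Love Island solution; with n=1000 it sits near 37%, just as the theory promises.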

Anyway, what I really like about all this is the way things that are apparently unconnected – Love Island, choosing secretaries, trading strategies – are fundamentally linked once you formulate things in statistical terms. And even if the solution in one of the areas is too simple to be immediately transferable to another, it might at least provide useful direction. 

Mr. Greedy

In parallel to my series of posts on famous statisticians, I also seem to be running a series of posts on characters from the Mr. Men books. Previously we had a post about Mr. Wrong. And now I have to tell you about Mr. Greedy. In doing so, you’ll hopefully learn something about the limitation of Statistics.


It was widely reported in the media last weekend (here, here and here for example) that a recent study had shown that the Mr. Greedy book is as complex a work of literature as various American classics including ‘Of Mice and Men’ and ‘The Grapes of Wrath’, each by John Steinbeck, the latter having won the Pulitzer prize for literature.

To cut a long story short, the authors of the report have developed a method of rating the readability of a book, based essentially on the complexity and phrasing of the words that it uses. They’ve done this by measuring these features for a large number of books, asking people to read the books, measuring how much they understood, and then creating a map from one to the other using standard regression techniques from Statistics. A detailed, though – irony alert! – not very easily readable, description of the analysis is given here.

The end result of this process is a formula which takes the text of a book and converts it into a ‘readability’ score. Mr. Greedy got a score of 4.4, ‘Of Mice and Men’ got 4.5 and ‘The Grapes of Wrath’ got 4.9. The most difficult book in the database was ‘Gulliver’s Travels’, with a score of 13.5. You can check the readability index value – labelled BL for ‘Book Level’ – for any book in the database by using this dialog search box.

So, yes, Mr. Greedy is almost as complex a piece of literature as the Steinbeck classics.

But… there’s a catch, of course. Any statistical analysis is limited to its own terms of reference, which in this case means that comprehension is measured in a strictly literal sense, not a narrative one. In other words, no attempt was made to assess whether readers understood what they were reading in a literary sense, just the individual words and sentences. As such, the values 4.4, 4.5 or anything else say nothing about how difficult a book is to read in terms of narrative comprehension. Sure, the words and sentence structure of Mr. Greedy and The Grapes of Wrath are of similar complexity, but having understood the words in both, grasping the full meaning of Mr. Greedy is likely to be the easier task.

Does this have any relevance at all to sports modelling? Admittedly, not much. Except that it’s always important to understand what has, and has not, been included in a sports model. For example, when using predictions from a football model based only on goals, it’s relevant to consider making adjustments if you know that a team has been especially unlucky in previous games (hit the post, marginal offside, and so on). But if the model itself already included data of this type in its formulation, then making further adjustments is likely to be incorrect, as doing so would double-count the effects.

In summary, if you are using a statistical model or analysis, make sure you know what it includes, so as to avoid double-counting in sports models, or buying your 2-year-old nephew a Pulitzer-prize-winning American masterpiece for their birthday.

 

Ernie is dead, long live Ernie

Oh no, this weekend they killed Ernie

Well, actually, not that one. This one…

No, no, no. That one died some time ago. This one…

But don’t worry, here’s Ernie (mark 5)…

Let me explain…

Ernie (Electronic Random Number Indicator Equipment) is the name of the random number generator used by the government’s National Savings and Investments (NS&I) department for selecting Premium Bond winners each month.

Premium bonds are a form of savings certificates. But instead of receiving a fixed or variable interest rate paid at regular intervals, like most savings accounts, premium bonds are a gamble. Each month a number of bonds from all those in circulation are selected at random and awarded prizes, with values ranging from £25 to £1,000,000. Overall, the annual interest rate is currently around 1.4%, but with this method most bond holders will receive 0%, while a few will win many times more than the actual bond value of £1, up to one million pounds.

So, your initial outlay is safe when you buy premium bonds – you can always cash them in at the price you paid for them – but you are gambling with the interest.

Now, the interesting thing from a statistical point of view is the monthly selection of the winning bonds. Each month there are nearly 3 million winning bonds, most of which win the minimum prize of £25, but 2 of which win the maximum of a million pounds. All these winning bonds have to be selected at random. But how?

As you probably know, the National Lottery is based on a single set of numbers that are randomly generated through the physical mechanism of the mixing and selection of numbered balls. But this method of random number generation is completely impractical for the random selection of several million winning bonds each month. So, a method of statistical simulation is required.

In a previous post we already discussed the idea of simulation in a statistical context. In fact, it turns out to be fairly straightforward to generate mathematically a series of numbers that, to all intents and purposes, look random. I’ll discuss this technique in a future post, but the basic idea is that there are certain formulae which, when used recursively, generate a sequence of numbers that are essentially indistinguishable from a series of random numbers.

But here’s the thing: the numbers are not really random at all. If you know the formula and the current value in the sequence, you can calculate exactly the next value in the sequence. And the next one. And so on.
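To make this concrete, here’s one classic example of such a recursive formula – a linear congruential generator. The constants below are a standard textbook choice (from Numerical Recipes); this is purely illustrative and has nothing to do with how Ernie works.

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: x -> (a*x + c) mod m.
    The output looks random, but knowing the formula and the current
    value, the next value is entirely predictable."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m  # scale to [0, 1)

gen = lcg(seed=42)
sample = [next(gen) for _ in range(5)]
```

Run it twice with the same seed and you get an identical ‘random’ sequence – which is exactly why such numbers are only pseudo-random.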

Strictly, a sequence of numbers generated this way is called ‘pseudo-random’, which is a fancy way of saying ‘pretend-random’. They look random, but they’re not. For most statistical purposes, the difference between a sequence that looks random and is genuinely random is unimportant, so this method is used as the basis for simulation procedures. But for the random selection of Premium Bond winners, there are obvious logistic and moral problems in using a sequence of numbers that is actually predictable, even if it looks entirely random.

For this reason, Ernie was invented. Ernie is a random number generator. But to ensure the numbers are genuinely random, it incorporates a genuine physical process whose behaviour is entirely random. A mathematical representation of the state of this physical process then leads to the random numbers.

The very first Ernie is shown in the second picture above. It was first used in 1957, was the size of a van and used a gas neon diode to induce the randomness. Though effective, this version of Ernie was fairly slow, generating just 2000 numbers per hour. It was subsequently killed off and replaced with ever-more efficient designs over the years.

The third picture above shows Ernie (mark 4), which was in operation from 2004 until this weekend. In place of gas diodes, it used thermal noise in transistors to generate the required randomness, which in turn generated the numbers. Clearly, in terms of size, this version was a big improvement on Ernie (mark 1), being about the size of a normal PC. It was also much more efficient, able to generate one million numbers in an hour.

But Ernie (mark 4) is no more. The final picture above shows Ernie (mark 5), which came into operation this weekend, shown against the tip of a pencil. It’s essentially a microchip. And of course, the evolution of computing equipment from the size of a van to the size of a pencil head over the last 60 years or so is a familiar story. Indeed, Ernie (mark 5) is considerably faster – by a factor of 42.5 or so – than Ernie (mark 4), despite the size reduction. But what really makes the new version of Ernie stand out is that the physical process that induces the randomness has fundamentally changed. One way or another, all the previous versions used thermal noise to generate the randomness; Ernie (mark 5) uses quantum random variation in light signals.

More information on the evolution of Ernie can be found here. A slightly more technical account of the way thermal noise was used to generate randomness in each of the Ernies up to mark 4 is given here. The basis of the quantum technology for Ernie mark 5 is that when a photon is emitted towards a semi-transparent surface, it is either reflected or transmitted at random. Converting these outcomes into 0/1 bit values forms the building block of random number generation.

Incidentally, although the randomness in the physical processes built into Ernie should guarantee that the numbers generated are random, the Government Actuary’s Department carries out checks to ensure that the output can genuinely be regarded as random. In fact they apply four tests to the sequence:

  1. Frequency: do all digits occur (approximately) equally often?
  2. Serial: do all consecutive number pairs occur (approximately) equally often?
  3. Poker: do poker combinations (4 identical digits; 3 identical digits; two pairs; one pair; all different) occur as often as they should in consecutive numbers?
  4. Correlation: do pairs of digits at different spacings in bond numbers have approximately the correct correlation that would be expected under randomness?

In the 60 or so years that Premium Bonds have been in circulation, the monthly numbers generated by each of the successive Ernies have never failed these tests.
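For a flavour of what the first of these tests involves, here’s a toy version of a chi-squared frequency test on a stream of digits. To be clear, this is my own sketch, not the Government Actuary’s actual procedure.

```python
import random
from collections import Counter

def frequency_test_statistic(digits):
    """Chi-squared statistic for the hypothesis that the digits 0-9
    occur equally often. Under randomness this is approximately
    chi-squared with 9 degrees of freedom, so values above about 16.9
    would be suspicious at the 5% level."""
    counts = Counter(digits)
    expected = len(digits) / 10
    return sum((counts.get(d, 0) - expected) ** 2 / expected for d in range(10))

rng = random.Random(3)
stat = frequency_test_statistic([rng.randrange(10) for _ in range(100_000)])
```

A genuinely uniform stream gives a small statistic; a stream of 100 copies of the same digit, by contrast, gives a statistic of 900 and fails spectacularly.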

However:


Finally, in case you’re disappointed that I started this post with a gratuitous reference to Sesame Street which I didn’t follow-up on, here’s a link to 10 facts and statistics about Sesame Street.

March Madness

 

It’s sometimes said that a little knowledge is a dangerous thing. Arguably, too much knowledge is equally bad. Indeed, Einstein is quoted as saying:

A little knowledge is a dangerous thing. So is a lot.

I don’t suppose Einstein had gambling in mind, but still…

March Madness pools are a popular form of betting in the United States. They are based on the playoff tournament for NCAA college basketball, held every March, and take the form of a so-called bracket bet. Prior to the tournament start, a player predicts the winners of each game from the round-of-sixteen right through to the final. This is possible since teams are seeded, as in tennis, so match pairings for future rounds are determined automatically once the winners from previous rounds are known. In practice, it’s equivalent to picking winners from the round-of-sixteen onwards in the World Cup.

There are different scoring systems for judging success in bracket picks, often with more weight given to correct outcomes in the later rounds, but in essence the more correct outcomes a gambler predicts, the better their score. And the player with the best score within a pool of players wins the prize. 

Naturally, you’d expect players with some knowledge of the differing strength of the teams involved in the March Madness playoffs to do better than those with no knowledge at all. But is it the case that the more knowledge a player has, the more successful they’re likely to be? In other words:

To what extent is success in the March Madness pools determined by a player’s basketball knowledge?

This question was explored in a recent academic study discussed here. In summary, participants were given a 25-question basketball quiz, the results of which were used to determine their level of basketball knowledge. Next, they were asked to make their bracket picks for the March Madness. A comparison was then made between accuracy of bracket picks and level of basketball knowledge.

The results are summarised in the following graph, which shows the average relationship between pick accuracy and basketball knowledge:

As you’d expect, the players with low knowledge do relatively badly. Then, as a player’s basketball knowledge increases, so does their pick accuracy. But only up to a point. Beyond that point, pick accuracy was found to decrease as knowledge increases further. Indeed, the players with the most basketball knowledge were found to perform slightly worse than those with the least knowledge!

Why should this be?

The most likely explanation is as follows…

Consider an average team, who have recently had a few great results. It’s possible that these great results are due to skill, but it’s also plausible that the team has just been a bit lucky. The player with expert knowledge is likely to know about these recent results, and make their picks accordingly. The player with medium knowledge will simply know that this is an average team, and also bet accordingly. While the player with very little knowledge is likely to treat the team randomly.

Random betting due to lack of knowledge is obviously not a great strategy. However, making picks that are driven primarily by recent results can be even worse, and the evidence suggests that’s exactly what most highly knowledgeable players do. It turns out to be better to have just a medium knowledge of the game, so that you have a rough idea of the relative rankings of the different teams, without being overly influenced by recent results.

Now, obviously, someone with expert knowledge of the game, but who also knows how to exploit that knowledge for making predictions, is likely to do best of all. And that, of course, is the way sports betting companies operate, combining expert sports knowledge with statistical support to exploit and implement that knowledge. But the study here shows that, in the absence of that explicit statistical support, the player with a medium level of knowledge is likely to do better than players with too little or too much knowledge. 


In some ways this post complements the earlier post ‘The benefit of foresight’. The theme of that post was that successful gambling cannot rely solely on Statistics, but also needs the input of expert sports knowledge. This one says that expert knowledge, in isolation, is also insufficient, and needs to be used in tandem with statistical expertise for a successful trading strategy. 

In the specific context of betting on the NCAA March Madness bracket, the argument is developed fully in this book. The argument, though, is valid much more widely across all sports and betting regimes, and emphasises the importance to a sports betting company of both statistical and sport expertise.

 

 


Update (21/3): The NCAA tournament actually starts today. In case you’re interested, here’s Barack Obama’s bracket pick. Maybe see if you can do better than the ex-President of the United States…

The origin of all chavs upon this earth

This is a true story which includes an illustration of how interesting statistical questions can arise in simple everyday life. It’s a bit long though, so I’ll break it down into two posts. In this one, I’ll give you the background information. In a subsequent post, I’ll discuss a possible solution to the problem that arises.


As many of you know, I live in Italy. Actually, in a small town called Belluno in the north-east of Italy, on the edge of the Dolomites. It’s great, but like most people’s life journey, my route here hasn’t been straightforward.

I grew up on a dismal overflow council estate called Leigh Park, on the distant outskirts of Portsmouth. Leigh Park was once the largest council estate in Europe and, according to this article, “could well be the origin of all chavs upon this earth”. (Just in case you’re unfamiliar with the term chav, Wikipedia gives this definition: “A pejorative epithet used in the United Kingdom to describe a particular stereotype of anti-social youth dressed in sportswear”. Explains a lot, right?)

Anyway, the other day I had to take my son to the dentist in Belluno for a check-up. The waiting area in the dentist’s has recently been refurbished, and they’ve installed a large-screen TV on the main wall. But instead of showing anything interesting, the TV just seems to flip through random images: pictures of animals; of paintings; of architecture; of cities; of people; of pretty much anything. It’s probably meant to be soothing or distracting while you’re waiting for your teeth to be drilled.

So, I sat down and started looking at this TV. And the first image I saw was of a single-decker bus with destination Leigh Park (!), a little bit like this…

My first thought, obviously, was that this was a remarkably unlikely coincidence: a TV screen in Belluno, Italy, showing the image of a bus heading towards the completely unremarkable housing estate I grew up on in England. But we’ve discussed this kind of issue before: our lives are filled with many random events each day, and we only notice the ones that are coincidences. So though it seems improbable that something like this could occur, it’s much less improbable when you balance it against the many, many unremarkable things in a day which also occur.

But the main theme of the story is something different…

I wanted to point out this coincidence – which connects to part of his own family history – to my son, but by the time I managed to drag his attention away from playing on his phone, the image had changed to something else. Having nothing better to do – my potential company for this visit was just playing on his phone, remember – I kept watching the TV screen. Now, although the images kept changing, I noticed after a while that some of the images had repeated. Not in any systematic order, but apparently at random. So, my best guess is that the screen was showing images from a fixed library of pictures in a random order. As such, the image of the Leigh Park bus would show up again at some point, but the time it would take to show up would depend on the size of the library of images. If there were just a few pictures, it would probably show up again very soon; if there were very many pictures, it would most likely take a long time.

So, here’s the question:

How could I estimate the number of images in the library being shown randomly on the TV?

This seems like a reasonable sort of question to ask. I have some data – a list of images I’ve observed, together with counts of the numbers of repeats I’ve seen. And the more often I see repeats, the smaller I might expect the number of available images to be. But what strategy could I use, either based on the images I’d already observed, or by also observing extra images, to estimate the number of images in the entire library?
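I won’t give anything away yet, but the setup itself is easy to simulate, and doing so at least confirms the intuition: the smaller the library, the more repeats you see in a fixed number of viewings. (A sketch with made-up library sizes, obviously.)

```python
import random

def distinct_images_seen(library_size, n_views, seed=4):
    """Simulate watching `n_views` images drawn uniformly at random from
    a library of `library_size` images, and count how many distinct
    images appear. More repeats (fewer distinct images) suggests a
    smaller library."""
    rng = random.Random(seed)
    return len({rng.randrange(library_size) for _ in range(n_views)})

# In 100 viewings, a 50-image library produces many repeats, while a
# 5000-image library produces almost none.
print(distinct_images_seen(50, 100), distinct_images_seen(5000, 100))
```

Turning that intuition around – going from an observed repeat count back to an estimate of the library size – is exactly the statistical question.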

I have an idea of how to tackle this problem, and I’ll discuss it in a future post. But I don’t think my idea is likely to be the best approach, and I’d be interested if anyone else has an alternative, which might well prove to be better. So, please think about this problem, and if you have suggestions of your own, please send them to me at stuart.coles1111@gmail.com. I’ll include discussion of any ideas I receive in the subsequent post.

Rare stickers

In an earlier post we looked at the number of packets of Panini stickers you’d be likely to need in order to complete an album. In that post I made the throwaway and obvious remark that if some stickers were rarer than others, you’d be likely to need even more packs than the standard calculations suggest. But is there any evidence that some stickers are rarer than others?

The official Panini response is given here. In reply to the question “Does Panini deliberately print limited edition or rare stickers?”, the company answer is:

No. Every collection consists of a number of stickers printed on one or two print sheets, which in turn contain one of each sticker and are printed in the same quantity as the albums.

So that seems clear enough. But the experience of collectors is rather different, with many suggesting that some stickers are much harder to find than others. So, what’s the data and what type of statistical methods can be used to judge the evidence?

The first thing to say is that, as we saw in the previous post, collecting this way can be a frustrating experience: even though the average number of packs needed is less than 1000, some collectors are likely to need more than 2000, even assuming all stickers are equally frequent. To put this into perspective, suppose you’ve already collected 662 of the 682 stickers, and need just another 20 to complete the album. By the same type of calculation that we made before, the expected number of further packs needed to complete the album is 491, which is well over half the expected total number of packs needed (which was 969). You might like to try that calculation yourself, using the same method as in the previous post.
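If you’d rather not grind through the sum by hand, here’s a short Python sketch of that calculation, using the standard coupon-collector result that with n equally likely stickers, moving from k collected to k+1 takes n/(n-k) stickers on average:

```python
# Expected number of packs to complete a Panini album, assuming all
# 682 stickers are equally likely and each pack contains 5 stickers.
N = 682       # stickers in the album
PACK = 5      # stickers per pack

def expected_packs(start, end=N):
    # Coupon-collector: going from k to k+1 distinct stickers takes
    # N / (N - k) stickers on average; sum these and convert to packs.
    stickers = sum(N / (N - k) for k in range(start, end))
    return stickers / PACK

print(round(expected_packs(0)))    # whole album: prints 969
print(round(expected_packs(662)))  # last 20 stickers: prints 491
```

Notice how the sum is dominated by its final terms – the last 20 stickers cost 491 of the 969 expected packs – which is exactly the frustration described above.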

In other words, once you reach the point of needing just 20 more stickers, you’re not even half-way through your collection task in terms of the number of packs you’re likely to need. This can make it seem like certain stickers – the 20 you are missing – are rarer than others, even if they’re not.

But that’s not to say that rare stickers don’t exist – it’s just that the long wait you’re likely to face for the remaining few stickers might make you feel like they are rare. So again: do rare stickers really exist?

Well, here’s a site that tried to conduct the experiment by buying 1000 packs from the 2018 World Cup sticker series, listing every single sticker they got and the number of times they got it. Since each pack contains 5 stickers, this means they collected a total of 5000 stickers. They then used the proportion of times they got each sticker as an estimate of the probability of getting that sticker. For example, they were fortunate enough to get the Belgium team photo sticker 18 times, so they estimated the probability of getting that sticker as

18/5000 ≈ 1/278

Using this method, they were able to work out which teams were, collectively, the easiest and the most difficult to complete. The results are summarised in the following figure:

On this basis, stickers of players from Senegal and Colombia were easy to obtain, while those of players from Belgium and Saudi Arabia were much harder. So although the Belgium team photo was one of the most frequent stickers in their collection, the individual players from Belgium were among the least frequent.

Now, you won’t need me to tell you that this is pretty much a waste of time. With just 5000 stickers collected, some stickers are bound to occur more often than others by chance alone, and we learn nothing this way about whether stickers of certain teams or players are genuinely more difficult to find than others. One could test whether the pattern of results here is consistent with an equal distribution of stickers, but there’s no mention of such an analysis being done, and a sample of just 5000 stickers would probably be too small to reach definitive conclusions anyway.
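For what it’s worth, here’s a sketch of how such a check might go, as a Monte Carlo goodness-of-fit test in Python: compare the chi-squared statistic of the observed counts against the same statistic computed on simulated collections in which every sticker really is equally likely. The counts in the usage example are invented for illustration – they are not the site’s actual data.

```python
import random

def chi_sq(counts, expected):
    """Chi-squared statistic against a uniform expected count."""
    return sum((c - expected) ** 2 / expected for c in counts)

def equal_frequency_p_value(counts, n_sims=2000, seed=0):
    """Monte Carlo p-value for the hypothesis that every sticker is
    equally likely, given observed per-sticker counts."""
    rng = random.Random(seed)
    total = sum(counts)
    expected = total / len(counts)
    observed = chi_sq(counts, expected)
    hits = 0
    for _ in range(n_sims):
        # Simulate one collection of the same size under equal chances.
        sim = [0] * len(counts)
        for _ in range(total):
            sim[rng.randrange(len(counts))] += 1
        if chi_sq(sim, expected) >= observed:
            hits += 1
    return hits / n_sims

# Invented example: 5 stickers, one hugely over-represented.
print(equal_frequency_p_value([50, 0, 0, 0, 0], n_sims=500))  # prints 0.0
```

A small p-value says the observed counts are more uneven than chance alone would produce. With only 5000 stickers spread over 682 types, though, the test would have little power to pick up modest differences in frequency.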

One fun thing though: with the 5000 stickers collected, these guys managed to complete the album except for one missing sticker: Radja Nainggolan of Belgium.  But he ended up not being selected for the World Cup anyway 😀.

This doesn’t really bring us any closer to the question of whether rare stickers exist or not. One interesting suggestion is to look at frequencies of requests in sites that handle markets for second-hand stickers. And indeed, this site finds jumps in frequencies of requests for certain types of stickers, suggesting such types are rarer than others. However, it’s not necessarily the case that the rare stickers are the ones with the most requests: Lionel Messi might be a popular trade, because… Messi… and not because his sticker is rare. Still, the post is a fun read, with complete details about how you might approach this type of analysis.

Finally, far be it from me to promote the exploits of one of our competitors, but some of the guys at ATASS pooled their World Cup collections to address exactly this issue. A complete description of their findings can be found here. In summary, from an analysis of nearly 11,000 stickers, they found evidence that shiny stickers – of which there were 50 in the 2018 World Cup series – are much rarer than standard stickers.

Moreover, the strength of the evidence is completely overwhelming. This is partly because the number of stickers collected is large – 11,000 rather than 5,000 in the study I mentioned above – but also because there are 50 shiny stickers, and all of them occurred with a lower frequency than an average sticker. In fact, overall, shiny stickers occurred at a rate that’s around half that of a normal sticker. Now, if a single sticker isn’t found, it’s reasonable to put that down to chance; but it’s beyond the realms of plausibility that 50 stickers of a certain kind all occurred at below the average rate.
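As a very rough back-of-envelope check: suppose each sticker independently had about a 50% chance of coming in below the average count in a sample of this size. That’s an approximation on my part, not an exact probability, but it’s in the right ballpark for counts near their mean. The chance of all 50 shiny stickers doing so would then be:

```python
# Probability that all 50 shinies fall below the average count, if
# each one independently has roughly a 50% chance of doing so.
p_all_below = 0.5 ** 50
print(f"{p_all_below:.1e}")  # prints 8.9e-16 – about one in a quadrillion
```

So under equal printing, this pattern would be expected roughly once in a quadrillion samples – which is what “beyond the realms of plausibility” means in numbers.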

On this basis, Panini’s claim that all stickers are equally likely is completely implausible. The only way it could really hold true, assuming the ATASS analysis to be correct, is if there were variations in the physical distribution of stickers, either geographically or temporally. So, although Panini might produce all stickers in equal numbers, variations in distribution might mean that some stickers were harder to get at certain times in certain places.

That seems unlikely though, and the evidence does point to shiny stickers in the World Cup 2018 series having been genuinely harder to find than the others.

In summary: yes, rare stickers exist.