# What a difference a week makes

As of today there have been more than 90,000 confirmed deaths due to Coronavirus in the USA. But how important was the timing of the introduction of nationwide social distancing?

To examine this, Youyang Gu, a data scientist from New York, ran an epidemiological model – one which, incidentally, has proved far more accurate than the US government’s own – for the epidemic in the US, under different assumptions about the timing of the introduction of social distancing.

These graphs summarise what happens if restrictions had been introduced a week earlier than actually happened:

And this is what happens if restrictions had been delayed by an additional week, compared to what actually occurred:

So if restrictions had been introduced a week earlier, there would have been around 35,000 deaths as of today – some 55,000 fewer than have actually occurred. Delaying things by an extra week, on the other hand, would have meant around 250,000 deaths to date, an increase of 160,000 on the actual figure.

Differences in the projections for the numbers of fatalities by August are even greater: in the two scenarios above, the predictions are for around 60,000 and 455,000 deaths respectively. These compare with a forecast of around 135,000 based on the true timings of restrictions.

There’s a well-known saying that a week is a long time in politics. This was never more true than in the midst of the current pandemic.

1. As stressed, these comparisons are inevitably based on model forecasts, not actual numbers, though the model used has proved accurate in tracking the trajectory of the epidemic so far.
2. The comparison is based on US numbers, though the principle of the importance of the timing of response to the epidemic is equally valid elsewhere.
3. The models assume that the restrictions that have been introduced will be maintained in the future. If social distancing is relaxed, it’s likely that the numbers will grow at a faster rate than predicted here.
4. Restrictions by themselves can contain an epidemic, but they cannot make it go away. And since contagion rates only fall as more people become infected and acquire immunity, the more successful restrictions are in containing an epidemic, the more vulnerable the population remains to further outbreaks once those restrictions are removed.
5. It’s also well-understood that there are costs, both in terms of economics and non-Coronavirus fatalities, to maintaining strong social-distancing measures. A fair comparison should really include these additional costs.

# The politicisation of Statistics

My intention with these posts about Coronavirus has always been to show how Statistics can be used as part of a battery of scientific tools to learn about, understand and even fight the epidemic. I’m conscious, though, that a number of recent posts – for example, here, here, here, here, and here – have focused on the interplay between politics and Statistics in the UK response to the Coronavirus epidemic. This focus emerged by accident rather than by design.

As I’ve mentioned before, and many of you will have known already, I live in Italy, which was affected sooner than the UK by the current epidemic. I’ve therefore followed both the science and the government response to the crisis quite closely both here (Italy) and in the UK. There are many similarities, but also quite a few differences, both in the trajectory of the epidemic and in the way the governments have handled things. Without question, Italy has made many mistakes, though it also had less evidence and less time to make decisions. But as a statistician, what strikes me about the UK response is the extent to which Statistics has been used – and misused – as a cover for government action and inaction. If you read the posts linked above, you should get a sense of what I mean, though I also abandoned many other potential posts at the draft stage because I wanted to avoid this blog simply becoming a rant.

However, I can’t leave this issue without mentioning the latest abuse of Statistics by the UK government. As you’ll know – and as discussed here – the government’s daily press briefing included a slide comparing the trajectory of the virus in different countries. In my previous post I discussed the cosmetic changes that had been made to that particular slide, which had the effect of making the UK’s numbers seem less extreme compared to those of other European countries. But since then, the UK numbers have remained pretty much stable while those of other countries have started to improve, meaning that the UK compares increasingly badly with other European countries. Consequently, as of this week, the UK government has dropped this particular slide from the daily briefings.

Now, you can make a perfectly valid argument – as indeed Professor David Spiegelhalter did – about the utility of detailed cross-country comparisons. And on the basis of that argument, you might reasonably decide that showing a graph that compares country numbers is misleading and choose not to do it. But what you can’t do, unless you are deliberately manipulating Statistics to best suit your purposes, is include the graph when it shows your country in a favourable light, but then stop showing it as soon as it doesn’t. That is a terrible use of Statistics, and arguably pretty poor government as well.

End. Of. Rant.

It’s not just me though:

This is a screenshot from the UK PM’s statement to the nation regarding a roadmap towards ending the current Coronavirus lockdown.

Taken literally, the equation is clearly nonsense. As you’ll know, the value of R is currently somewhere around 1: values smaller than 1 imply the epidemic is decaying, while values greater than 1 imply it is growing exponentially. But even the most pessimistic estimates of R before the lockdown were around 5. The number of current infections, on the other hand, runs to several thousand, with large fluctuations from day to day. So the R term in the equation is virtually irrelevant, and the Alert Level would be dominated by – and oscillate wildly from day to day with – the number of infected individuals.

Let’s assume instead that the number of infections is intended to be scaled – say by 1,000. So, if R is 1 and the number of infections is 2,500, then the Alert Level would be 3.5. But it still doesn’t make much sense. Suppose you managed to eradicate transmission, so that R=0, but still had 3,000 infected people in the population. Then the Alert Level would be 3, even though there would be no risk of further infection. Moreover, would an increase of 1 in the value of R really be as serious as an increase of 1,000 in the number of infected individuals, as the equation implies? Generally, that would depend on the actual number of infected individuals: having 20,000 rather than 19,000 infected probably won’t alter the course of the epidemic very much, but having R=1.5 rather than R=0.5 most definitely would.
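To make the oddity concrete, here’s a minimal sketch of the slide’s equation under that scaled reading. The scaling factor of 1,000 is my assumption – the slide itself gives no details:

```python
# Hypothetical reading of the slide's equation, assuming the
# infection count is scaled by 1,000 (a guess; no details given):
#   Alert Level = R + (number of infections) / 1000

def alert_level(r, infections, scale=1000):
    """Alert Level as R plus the scaled infection count."""
    return r + infections / scale

print(alert_level(1, 2500))  # 3.5, as in the worked example above
print(alert_level(0, 3000))  # 3.0, despite zero transmission risk
```

The second call shows the anomaly in the text: with transmission fully eradicated (R=0) but 3,000 people still infected, the formula reports Alert Level 3.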

So, any literal interpretation of the slide, even allowing for scaling effects, is completely false. What is presumably intended is that decisions on setting an Alert Level will be driven by two factors: the current estimated rate of transmission and the current number of infected individuals. By far the more important of these is the rate of transmission, since the nature of exponential growth is that just a few cases will become many thousands in a short period of time if R is bigger than 1. But the number of cases is still relevant: partly because it affects the number of new infections, especially in the short term; but more so because, if the number is sufficiently low, a policy of containment through testing and contact tracing becomes feasible.

In summary: if you disregard any literal interpretation of the equation, but regard it as saying that two primary factors need to be considered when determining an appropriate Alert Level for COVID, then it makes some sort of sense. But presenting complex arguments in a way that makes them seem simpler is both patronising and counter-productive.

# Polite request to PM

Of course we should now use other countries to try and learn why our numbers are high

• David Spiegelhalter

Update: David Spiegelhalter also gave an interview yesterday on the Andrew Marr show:

It includes the line that the daily press briefings, which he describes as ‘completely embarrassing’, are:

… not a trustworthy communication of statistics.

# The best and the worst of Statistics

The above graph is included in the following tweet sent by the CEA – the Council of Economic Advisers to the US White House.

The fluctuating black line shows the number of deaths due to Coronavirus per day in the US. The coloured dotted lines are model estimates and predictions produced at different time points. I’m not sure I need to make the points, but:

1. The CEA claim that the mortality curves have “matched the data fairly well” is open to question.
2. Accepting that a model fits well over a period of observed data is no real basis for assuming the model can be extrapolated into the future. The various model predictions here imply there will be zero new deaths in the US from a range of dates between 16th May and 4th August. All serious epidemiological models for the same process would describe such possibilities as somewhere between impossible and negligible.

To be fair to the authors of the IHME model – whose details are available here –  the detailed projections shown here as of 1st May do include measures of uncertainty as per the following graph:

Nonetheless, given that the epidemic is currently at best in a state of plateau, and at worst on an upward trajectory, it seems unduly optimistic to predict a decline that’s almost as fast as the exponential growth in the early phase of the epidemic.

But let’s give this model, which is at least based on epidemiological assumptions, the benefit of any doubt. The CEA graph also includes a so-called “cubic fit”, which is the one that leads to an estimate of zero deaths as of 16th May. There are no details as to how this has been obtained, but presumably someone has simply carried out a regression on the data of the black curve using a smooth curve (technically, a degree-3 polynomial). But any such curve is bound to go negative at some point. Now, if you look carefully at the cubic fit in the CEA graph, there’s a point where the curve changes from dashes to dots. My guess is that someone simply altered the cubic fit so as to stop it going negative. Unfortunately, zero deaths on 16th May is almost as improbable as negative deaths, so they might as well not have bothered.
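The basic problem is easy to reproduce. Here’s a sketch using made-up hump-shaped numbers – not the CEA’s actual series, which hasn’t been published – showing that a cubic fitted by least squares can track a peak-and-decline pattern well inside the data, yet plunge below zero as soon as it’s extrapolated:

```python
import numpy as np

# Illustrative synthetic "daily deaths" series with a hump shape.
# These numbers are invented for demonstration only.
t = np.arange(21)
deaths = -(t - 10) ** 2 + 120  # rises to a peak at day 10, then falls

# A "cubic fit" in the CEA sense: a least-squares degree-3 polynomial.
coef = np.polyfit(t, deaths, deg=3)
fit = np.poly1d(coef)

# Within the observed window, the fit tracks the data closely...
print(fit(10))  # close to the peak value of 120

# ...but extrapolated just beyond the data it drops below zero,
# i.e. it "predicts" negative deaths:
print(fit(25) < 0)
```

Extrapolating a polynomial fitted to a finite window has no epidemiological content at all; whatever the data, the curve eventually heads off to plus or minus infinity.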

Anyway, this epidemic has brought out the best and the worst of Statistics. I guess you can work out where the CEA analysis falls on this range.

1. Thanks to Rickie.Reynolds@smartodds.co.uk for showing me the CEA tweet.
2. This cartoon seems relevant right now:

# Target practice

Personally, I think both the setting of targets during this epidemic, and debates about whether targets are genuinely met or have been fudged, are unhelpful distractions from the real issue of tackling the crisis. But just for the record, since Statistics is being dragged into this argument…

Also:

Mr Gove said 76,496 daily tests had been undertaken in the 24 hours up to 9am on May 3. This compares to the 122,347 tests carried out in the 24 hours to 9am on May 1 — the relevant period for when Matt Hancock, the health secretary, had set a deadline of undertaking 100,000.

Source and further details here

Update: again, just for the record… This is the UK government’s own graph on tests as of 5th May:

So, even disregarding arguments about what actually constitutes a test, current levels are significantly below the 100,000 tests per day target whichever definition is used.

# Following the science

There’s been a lot of discussion lately about the efficacy and efficiency of the UK government response to the Coronavirus epidemic. There are many strands to this, but one concerns the speed with which policies of social restriction were introduced. And a lot of the debate has focused on two sporting events that were held shortly after lockdowns were introduced in many European countries, but before they were introduced in the UK: the Cheltenham Festival and the second leg of the Champions League tie between Liverpool and Atletico Madrid.

The Liverpool game was especially controversial because it had been known for some time that Madrid was already a focus of the Coronavirus outbreak in Spain. And while most other Champions League fixtures that week were played behind closed doors, the decision was made to play the Liverpool game in front of spectators, including 3,000 fans travelling from Madrid.

The picture from both Cheltenham and Liverpool after each event is concerning, since both locations appear to have higher rates of infection than would be expected (see here and here). But it will take careful analysis of the data to establish the extent to which these apparent effects can properly be attributed to the associated sporting events, and an even fuller analysis to determine whether the decisions to hold the events were reasonable in any case.

One argument, for example, that’s been presented to justify not holding matches behind closed doors is that there may be more transmission if people watch a match in many crowded pubs rather than in a stadium. And in any case, it’s perfectly valid to argue that a higher rate of infection due to holding a sporting event has to be offset against the economic and other social costs of not holding it. So, even if it turns out that the two events in question are genuinely likely to have increased infection rates, this doesn’t in itself imply that the decisions to hold them were wrong.

But here’s the thing… as with all aspects of planning for and responding to events connected with the epidemic, Science – and Statistics – provides a framework for decision making. In particular, it will give predictions about what is most likely to occur if different actions are taken and, in the case of statistical models, most likely also attach probabilities to different possible outcomes, again dependent on the course of actions taken.

Crucially, though, Science will not tell you what to do. It won’t tell you how to balance costs in terms of lives against costs in terms of money. Or jobs. Or anything else. That’s a political decision. Moreover, ‘Science’ isn’t a fixed, static object that unveils itself in uniform and unchallenged forms. There are different sciences, all of which are constantly evolving, and any combination of which might lead to conflicting conclusions. Even different statistical models might not be in complete agreement. Science will help you understand the costs and benefits of the actions available to you; but you must take responsibility for the choices you make on the basis of that information.

However, I’ve lost count of how many times politicians – especially in the UK – have defended their actions by arguing ‘we followed the science’. Here’s Health Secretary Matt Hancock in defence of the decision to hold the Cheltenham Festival:

We followed the scientific advice and were guided by that science.

And here in defence of holding the Champions League cup tie:

This is of course a question for the scientists and what matters now is that people in Liverpool and across the North West get the treatment that they need and get the curve under control.

Neither comment is likely to be completely untrue – it would obviously be outrageous for any government in any situation to completely ignore scientific evidence – but both seem to be distractions from the fact that decision-taking is a political process which balances the various risks and costs involved.

The most Science can do is to provide an assessment of what those risks and costs are.

Here’s Brian Cox’s take on the same argument:

When you hear politicians saying ‘we’re following the science’ then what that means is they don’t really understand what science is. There isn’t such a thing as ‘the’ science. Science is a mindset, it’s about trying to understand nature.

And here’s the full section with video:

# Response

The World Health Organisation officially declared the current Coronavirus outbreak a pandemic on 12 March.  A pandemic is technically defined as:

… new disease for which people do not have immunity spreads around the world beyond expectations…

though this is largely subjective, which is why the declaration for the current outbreak was not made till 12 March. But even before that date, most countries realised the Coronavirus epidemic was already on their doorsteps and needed some kind of response.

But how rapid and how stringent have different countries been in their responses?

This is the subject of a new tracker, which monitors how different governments have responded to the crisis relative to the number of cases in their country at the time. Specifically, the authors define a stringency index, which records, on a scale of 0 to 100, how stringent a country’s measures are. Full details of the definition of the stringency index and the methodology used are available here. Broadly speaking, the more restrictive and widespread a country’s measures, the greater the value of the index. Note, however, that the index does not measure how effective the measures are, nor how strictly they are applied or followed.

The tracker is live, which means it is regularly updated. However, as of 24 March, a summary of the way 6 different countries have responded to the crisis is contained in the following figure:

For each country, time is measured in days since the first case appeared in that country, and the black curve shows the trajectory of the epidemic in terms of number of cases. (Bear in mind though that the number of cases is also related to the number of tests carried out, so direct comparison of these curves across countries may not be entirely valid).

The red dots show the value of the stringency index on the same timescale; you need to look at the right-hand axis to read off the actual values of the index. For all countries the stringency index has generally risen as the epidemic has grown: countries have responded to the crisis by bringing in measures to control the spread of the virus. But there are significant differences across countries:

• In France and especially Italy, the stringency index follows the trajectory of the epidemic very closely. In other words, governments there have responded quickly to the scale of the epidemic as it has grown.
• In South Korea, where the epidemic has been largely controlled, the stringency index increased ahead of the growth of the epidemic. That’s to say, the government anticipated the epidemic and brought in disease-control measures quickly enough to stop its growth before it occurred.
• The United Kingdom’s first use of restrictive measures was very slow, and they have since been playing catch-up relative to the size of the epidemic.
• In the US, there was almost no attempt at control until long after the start of the epidemic. Belatedly, more stringent measures have been applied, but these are still substantially less restrictive than those of France or Italy.
• China’s pattern is more complicated. Since it was the first country affected by the outbreak, it’s perhaps understandable that its initial response was slow. Its subsequent response was rapid, though, enabling a later reduction in stringency, which has more recently been raised again – presumably in an attempt to prevent a second wave of the epidemic. China’s maximum stringency index is considerably lower than that of France or Italy, presumably because, although its measures were more restrictive, they were concentrated on the hardest-hit province of Hubei.

One might quibble about the actual definitions used for the stringency index, but these conclusions broadly chime with common perceptions about the efficacy of different government responses to the epidemic.

# #iorestoacasa

“Io resto a casa” translates as “I’m staying home” and is the latest message of solidarity against Coronavirus here in Italy. As you’ll know, Italy went into lockdown a couple of weeks before the UK. Based on model and expert predictions, we should start to see some improvement in numbers round-about now. Actually, numbers did improve for a couple of days, but the most recent numbers have suggested stability, rather than a downturn. Some random variation in numbers is inevitable, even if the trend now is for things to get better, but still it’s disheartening when numbers don’t improve as quickly as you’d hoped.

With this in mind: although the models show that the reduction in transmission rates brought about by a lockdown will lead to an improvement, what evidence is there that this approach works in practice?

In part, there’s the evidence from China, which managed to bring its epidemic under almost total control in a fairly short space of time. Given both the size and spread of the population in China, this has been an incredible achievement. But the social-distancing and quarantining measures used there were considerably more restrictive than those used in western countries, so how can we be sure that the measures applied in Italy and the UK will have a similar impact?

I also gave an answer to this in a previous post, which compared the trajectory of the epidemic in two provinces of Lombardia – Lodi, which introduced an early lockdown, and Bergamo, which did so much later – and showed that an early lockdown led to a much flatter subsequent trajectory of the epidemic.

But there’s similar statistical evidence available from earlier epidemics. An academic paper published in 2007 by Markel et al. compared the trajectory of the 1918–19 influenza epidemic in different cities of the United States, relating the trajectories to the methods of social control used to limit the epidemic, which generally differed from city to city. The following graph, for example, shows how the time taken to introduce social control measures affected the overall number of fatalities. Generally, the quicker the response time (d), the fewer the fatalities.

The following sets of graphs are also relevant in understanding how the timing and nature of social interventions affected the trajectory of the epidemic in four different cities (the ones marked with a solid black dot in the graph above). The curve shows the number of excess deaths per 100,000 of population as time progresses. The triangle in each case shows the date of the first identified case in that city. The horizontal bars beneath each graph show the periods in which each type of intervention was applied.

The main conclusions are:

• Each of the cities applied social restrictions of one sort or another, and each was successful in bringing the epidemic under control;
• St Louis and Denver relaxed some restrictions soon after an improvement in numbers, but then had a second peak and had to re-introduce them;
• New York chose not to close schools, but had a higher peak than St Louis and Denver. On the other hand, by not relaxing the restrictions that they did apply, they had only a very slight second peak.
• Denver and St Louis had quite similar strategies, but Denver’s were introduced later (see previous figure) and consequently had higher peaks and almost double the number of fatalities. This emphasises the importance of timing.
• Pittsburgh’s restrictions were more limited and introduced later. As a result, its peak and total number of excess fatalities were greater than those of the other cities, though it also avoided a second peak.

So, it’s not just about models. There is hard statistical evidence that social restrictions do work, and that fine tuning the timing and nature of interventions will have an effect on the trajectory of an epidemic. Of course, no two epidemics – or indeed countries – are identical, so what happened in the 1918-19 flu epidemic in the US won’t repeat identically for the current epidemic in the UK or Italy. But, the evidence is that social controls do work; that the type of controls applied can make a difference; and that timing is also critically important.

In summary, it might not always be easy to “stare a casa” – stay at home – but if everyone follows the rules of restriction, the effect on the course of the epidemic will be dramatic.

Though this post is based on the work by Markel et al., I also drew material from this summary article by Alex Tabarrok.

# Andrà tutto bene

There’s been a lot of discussion this weekend about the approach proposed by the UK government for handling the Coronavirus epidemic and how it compares to the approach adopted by most other countries so far. The best explanation I’ve seen of the UK approach is contained in a thread of tweets by Professor Ian Donald of the University of Liverpool. The thread starts here:

A strong counterargument setting out arguments against this approach is given here.

I’m in no position to judge whether the UK approach is less or more risky than that adopted by, say, Italy, which has taken a much more rigorous approach to what has quickly become known as ‘social distancing’, and which roughly translates as closing down everything that’s non-essential and forcing people to stay at home.

However, there is one essential aspect about the UK strategy which seems a little mysterious and which I thought I might be able to shed a little light on with some Statistics.

You’re no doubt familiar by now with the term ‘herd immunity’, though the phrase itself seems to have become a bit of a political hot potato. But whatever semantics are used, the basic idea is that once enough people in a population have been infected with the virus and recovered, the remainder of the population is also protected from further epidemic outbreaks. Why should that be so?

It’s nothing to do with virology or biology – antibodies are not passed from the previously infected to the uninfected – but is entirely to do with the statistical properties of epidemiological evolution. I’ll illustrate this with a much simplified version of a true epidemic, though the principles carry over to more realistic epidemiological models.

In a previous post I discussed how the basic development of an epidemic in its initial exponential phase can be described by the following quantities:

• E: the expected number of people an infected person is exposed to;
• p: the probability an infected person will infect a person to whom they are exposed;
• N: the number of people currently infected.

The simplest epidemiological model then assumes that the number of new infections the next day will be

$E \times p \times N$

We’ll stick with that, but I want to make a slightly different assumption from that made in the video. In the video, when someone is infected, they remain infected indefinitely, and so are available to make new infections on each subsequent day. Instead, I want to assume here that a person that’s infected remains infected only for one day. After that they either recover and are immune, or, er, something else. But either way, they remain infective only for one day. Obviously, in real life, the truth is somewhere between these two extremes. But for the purposes of this argument it’s convenient to assume the latter.

In this case, if we start with N cases, the expected number of cases the next day is

$E \times p \times N$

The next day it’s

$(E \times p)^2 \times N$

And after x days it’s

$(E \times p)^x \times N$

This means that we still get exponential growth in the number of cases whenever $E \times p$ is greater than 1; in other words, whenever an infected person will pass the virus on to an average of more than one person. But, critically, if $E \times p$ is less than 1, $(E \times p)^x \times N$ approaches zero as x grows and the epidemic dies out.

Here are some simulated trajectories. I’ve assumed we’re already at a point where there are N=1000 cases, and that each day’s count is a random perturbation around its expected value. First, let’s assume $E \times p =1.05$ – so each infected person infects an average of 1.05 other people daily. The following graphs correspond to four different simulated trajectories. If you look at the values of the counts, each of the simulations is quite different due to the random perturbations (which you can’t really see at this scale). But in each case, the epidemic grows exponentially.

But now suppose $E \times p =0.95$, so each infected individual infects an average of just 0.95 people per day. Again, the following figure shows four different simulations, each again different because of the randomness in the simulations. But now, instead of exponential growth, the epidemic tails off and essentially dies out.
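Simulations along these lines are easy to reproduce. Here’s a minimal sketch; I’ve used Poisson noise as my choice of random perturbation, which may differ from what was used for the figures above:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(ep, n0=1000, days=60):
    """Simulate daily case counts where each day's count is a Poisson
    perturbation around E*p times the previous day's count."""
    counts = [n0]
    for _ in range(days):
        counts.append(int(rng.poisson(ep * counts[-1])))
    return counts

growth = simulate(1.05)  # E*p > 1: trajectory grows exponentially
decay = simulate(0.95)   # E*p < 1: trajectory tails off towards zero

print(growth[-1], decay[-1])
```

With $E \times p = 1.05$ the final count is far above the starting 1,000; with $E \times p = 0.95$ it has collapsed to a few dozen, matching the two sets of figures.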

This is crucially important: when $E \times p$ is below 1 – meaning infected people infect fewer than one other person on average – the epidemic will simply fade away. Now, as discussed in the previous post, changes to hygiene and social behaviour might help in reducing the value of $E \times p$, but unless it goes below 1, the epidemic will still grow exponentially.

But, suppose a proportion Q of the population is actually immune to the virus. Then an infected person who meets an average of E people in a day, will now actually meet an average of just $E \times (1-Q)$ people that are not immune. So now the number of new infections in a day will be $E \times (1-Q)\times p$, and as long as $E \times (1-Q)\times p$ is smaller than 1, the epidemic will tail off.
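Rearranging that condition, $E \times (1-Q) \times p < 1$, gives the immunity threshold directly: the epidemic tails off once $Q$ exceeds $1 - 1/(E \times p)$. Here’s a quick sketch, with illustrative numbers rather than actual estimates for Coronavirus:

```python
def herd_immunity_threshold(ep):
    """Smallest immune fraction Q satisfying E*(1-Q)*p < 1,
    obtained by rearranging to Q > 1 - 1/(E*p)."""
    return 1 - 1 / ep

# If each infected person would otherwise infect an average of
# 2 people per day, half the population needs to be immune:
print(herd_immunity_threshold(2.0))   # 0.5

# If the epidemic only just grows without immunity, a small
# immune fraction is enough to tip it into decline:
print(herd_immunity_threshold(1.05))  # roughly 0.048
```

The larger $E \times p$, the larger the immune fraction required, which is why highly transmissible diseases demand very high vaccination coverage.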

This is the basis of the idea of ‘herd immunity’. Ensure that a large enough proportion Q of the population is immune, so that the average number of people an infected person is likely to infect is less than 1. This is usually achieved through vaccination programs. By contrast, and in the absence of a vaccine, the stated UK government approach is to achieve a large value of Q by letting the disease spread freely within the sub-population of people who are at low risk of developing complications from the disease, while simultaneously isolating more vulnerable people. So, although many people will get the disease initially – since there is no herd immunity initially – these will be people who are unlikely to require long-term hospital resources. And once a large enough proportion of the non-vulnerable population has been infected, it will then be safe to put the whole population back together again as the more vulnerable people will benefit from the herd immunity generated in the non-vulnerable group.

Can this be achieved? Is it really possible to separate the vulnerable and non-vulnerable sections of the population? And will the spread of the disease through the non-vulnerable sub-population occur at the correct rate: too fast and hospitals can’t cope anyway (some ‘non-vulnerable’ people will still have severe forms of the disease); too slow and the ‘herd immunity’ effect will itself be too slow to protect the vulnerable section once the populations are re-combined. As explained in the thread of tweets above, the government has some control on this rate through social controls such as school closures and so on. But will it all work, especially once you factor in the fact that many non-vulnerable people may well take forms of action that minimise their own risk of catching the virus?

I obviously don’t have answers to these questions. But since I’ve found it difficult myself to understand from the articles I’ve read how ‘herd immunity’ works, I thought this post might at least clarify the basics of that concept.

‘Andrà tutto bene’ translates as ‘everything will be ok’, and has been adopted here in Italy as the slogan of solidarity against the virus. The picture at the top of the page is outside the nursery just up the road from where I live. As you walk around there are similar flags and posters outside many buildings and people’s houses. Feel free to print a copy of my picture and stick it on the office door.