What is wrong with the model that prompted lockdowns? An interview with Andrea Molle
As the spread of COVID-19 accelerated in Europe at the beginning of 2020, the pandemic predictive model devised by Imperial College London featured prominently in public policies and debates. Its extremely worrying projections prompted lockdowns.
Now, it’s under intense scrutiny.
Researcher Andrea Molle (Chapman University, California), who covered several aspects of COVID-19 in a series of previous articles, explains what is wrong with the model and the way it entered public debate.
What do you make of the criticism surrounding the model and its author, Professor Neil Ferguson?
I would not go as far as Elon Musk and call him a fake. That was wrong and unwarranted. He is undoubtedly an impressive and reputable scholar. But Prof. Ferguson also has a record of excessively pessimistic predictions and mortality overestimations, from his work on the 2001 foot-and-mouth epidemic to his incorrect prediction of 2009 swine flu mortality. Both called for draconian measures, yet neither materialized.
How do you evaluate the model itself?
First, I would like to say that models should not be treated as infallible predictions but as tools to explore probabilistic scenarios. If we treat a model as a prophecy, as we have done so far with COVID-19, we end up fulfilling it in the hope of avoiding an even more catastrophic outcome. My issue with this model, in particular, is not with the model per se. Rather, the ease with which it was adopted as the "gold standard" to inform global suppression and mitigation strategies is frankly unacceptable.
In my opinion, it was a decision based more on the reputation of the Imperial College than its applicability as a policy tool, which is quite frankly very limited.
Why do you believe so?
Mainly because the model is too simplistic to inform policies. I do like an elegant model, but the complexities of this pandemic call for more sophisticated approaches. Without getting too technical, Ferguson's model rests on an assumption about R0, the basic reproduction number, which is treated as if contacts were randomly distributed across the population. This assumption, otherwise legitimate, ignores something we need to know when estimating a contagion: how social and cultural factors shape the spread of disease in modern societies, which makes a huge difference. Also, the model is, quite frankly, unsound as a basis for policy because it cannot be empirically tested.
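The homogeneous-mixing assumption the interviewee describes can be made concrete with a toy compartmental model. The sketch below is purely illustrative and is not Ferguson's actual model (which is a far more detailed individual-based simulation); it shows how, once everyone is assumed equally likely to meet everyone else, a single R0 parameter drives the entire projected epidemic curve:

```python
# Minimal discrete-time SIR sketch under homogeneous mixing.
# Illustrative only: every individual is assumed equally likely to
# contact every other, so one R0 value determines the whole outbreak.

def sir_peak_infected(r0, recovery_days=10.0, population=1_000_000, seed=100):
    """Return the peak number of simultaneously infected people."""
    gamma = 1.0 / recovery_days   # daily recovery rate
    beta = r0 * gamma             # daily transmission rate implied by R0
    s, i = population - seed, float(seed)
    peak = i
    for _ in range(730):          # simulate two years, day by day
        new_inf = beta * s * i / population   # homogeneous-mixing term
        new_rec = gamma * i
        s, i = s - new_inf, i + new_inf - new_rec
        peak = max(peak, i)
    return peak

# The model's headline numbers hinge almost entirely on the assumed R0:
low = sir_peak_infected(r0=1.5)
high = sir_peak_infected(r0=2.5)
```

Real contact networks are clustered and heterogeneous, so the same average R0 can produce very different outbreaks in different communities; that heterogeneity is exactly what the interviewee argues this class of model leaves out.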
The model became the gold standard because it was a convenient choice for politicians looking for a magic-bullet approach that minimized their responsibilities and the potential fallout.
Can you elaborate on that?
Let me give you an example. Imagine a scenario where I tell you, "don't go outside, or you are going to die, because the rain is going to kill you." Faced with such an extreme outcome, you rationally decide to stay in and eventually survive. Can I use this as proof that I was right and the rain kills? Not at all. Would 10,000 people staying in and not getting killed be enough to prove my statement? Again, it is not enough. It is not a matter of numbers. I would have to prove, first, that people die when sent outside; second, that they die because of the rain; and, lastly, that nobody (allowing, of course, for a certain level of tolerance) goes outside, gets wet, and survives. How can I test all of that without risking people's lives? And how about testing the use of a raincoat, or an umbrella, with a subset of the population? Going back to our suppression and mitigation strategies, informed by Ferguson's model, no one took the risk of doing nothing to see what would happen, which is the only way to learn whether the model was empirically sound. And even though some countries, like Sweden or Japan, did something close to that, supporters of the dominant narrative do not consider it sufficient evidence that the model was wrong. And rightfully so.
On the other hand, because many countries have seen lower numbers than the model predicted in its worst-case scenario, that is automatically taken as evidence that the model is 100% correct. All of it without excluding alternative explanations, which is a must when addressing causal hypotheses. This is the typical reasoning of what we call an "unfalsifiable" theory: in other words, an unscientific one. I don't believe this was Ferguson's intention, but it is how the model entered the political debate.
The model ignores something we need to know when estimating a contagion: how social and cultural factors shape the spread of disease in modern societies, which makes a huge difference.
Is the model wrong?
I am not saying it was “wrong.” A model is not wrong or right; it is consistent with its assumptions and the data used to fit it. I am simply saying that there is no way to tell if its predictions were valid. The model is designed, or at least presented, in a way that does not allow itself to be tested. Of course, I am not implying that social distancing is not essential and should not be carried out. It does matter, and we know it from the epidemiological literature. But there are different ways to apply it, and we should have also considered other models, and opinions, before committing ourselves to such an extreme strategy as an indiscriminate lockdown under any circumstances.
So why did it become the gold standard?
Because its outcomes, and the policies inferred from them, are simple to understand and, politically speaking, safe to implement. It was a convenient choice for politicians looking for a magic-bullet approach that minimized their responsibilities and the potential fallout.
The problem is that there is no magic bullet, and decisions should be tailored to each country's specific context. Even more so, areas within countries should have different strategies based, among other things, on their population and network structures. An increasing number of scientific publications, based on real data, are showing that some areas might need a full lockdown, whereas others might do better with targeted closures. Also, there are now better mathematical and computational models that account for these social, economic, and cultural differences. Still, they are not as easy to understand as Ferguson's, and not as simple to turn into policies to fight COVID-19 without taking a huge political risk. Unfortunately, adopting Ferguson's model without asking for a second opinion, or double-checking it, might have been the wrong choice.
The consequence of adopting an extreme, blanket strategy is that there is now mounting pressure to abandon any form of social distancing completely.
Are you referring to the accusation that the model was badly coded? What if the model is, as is now suggested, faulty and full of mistakes?
I was not directly referring to that. If those accusations are proven right, however, it will turn out to be a huge scandal, which will potentially cause massive lawsuits. More importantly, it will start an unprecedented wave of mistrust in, if not open hostility towards, science and politics, which could then be considered partially responsible for the dire social and economic consequences of the pandemic.
In what sense, then? And what conclusions can we draw?
The consequence of adopting an extreme, blanket strategy is that there is now mounting pressure, fueled by the long-term unsustainability of the current approach, to abandon any form of social distancing completely. Politicians will eventually succumb to that pressure, and this could lead to a dire scenario. Ironically, that might provide the data we need to validate, or disprove, Ferguson's model.