The response to Covid-19 in the UK, the US and other countries was shaped by dramatic mid-March headlines suggesting 550,000 deaths in the UK and 2.2 million in the US.
Professor Neil Ferguson, who led the COVID-19 modeling team at Imperial College in London, resigned from his government advisory role on May 5 after breaking the very lockdown rules he had helped shape.
Ferguson led the Imperial College team that designed the computer model that, among others, was used to justify the recent stay-at-home orders in England as well as in the United States. We now know the model was so flawed that it never should have been relied upon for policy decisions in the first place.
Epidemiology—the study of the incidence, prevalence, and impact of disease—frequently calls upon models to forecast potential outcomes of diseases. Not surprisingly, once COVID-19 became a pandemic, policy experts from all across the world began relying on such models.
The model also predicted the United States could incur up to 1 million deaths even with “enhanced social distancing” guidelines, including “shielding the elderly.” Imperial’s modeling results influenced British Prime Minister Boris Johnson to impose a nationwide lockdown and influenced the White House as well.
Kevin Dayaratna, Ph.D., asked Ferguson and his colleagues for their model on multiple occasions to see how they got their numbers, but he never received a reply to his emails.
According to Nature, they had been “working with Microsoft to tidy up the code and make it available.” Dayaratna also asked the U.S. Centers for Disease Control and Prevention for the code it used to develop its COVID-19 forecasts, but got no response.
So Dayaratna and his colleague Norbert Michel decided to take a publicly available COVID-19 epidemiological model and forecast the prevalence and mortality of the disease under a variety of plausible scenarios.
The results varied depending on the assumptions they made about mortality rates within hospital intensive care units, asymptomatic rates, and the specification of the R0 (pronounced R-naught) value, which measures how easily the virus spreads.
They found mortality rate predictions can be quite variable depending on the age and comorbidities of those contracting the virus.
Under varying assumptions about an intensive care unit mortality rate of between 5% and 30%, they found that predicted mortality from the disease could range from nearly 78,000 deaths to as many as 810,000 deaths in the U.S. by Aug. 1.
Recent testing data indicates that the asymptomatic rate for COVID-19 is likely not trivial, and data from Iceland suggests it can be as high as 50%. Assuming an asymptomatic rate ranging from 15% to 55%, one can project between 118,000 and 394,000 deaths in the U.S. by Aug. 1.
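To see how much a handful of assumed rates can move a projection, here is a minimal, purely hypothetical sketch (not the Heritage model and not the Imperial model) that sweeps an assumed infection count through different asymptomatic-rate and ICU-fatality assumptions. Every number in it is a placeholder chosen for illustration only:

```python
# Hypothetical sensitivity sweep: every constant below is an assumption for
# illustration, not a figure taken from the Heritage or Imperial models.

ASSUMED_INFECTIONS = 30_000_000   # assumed cumulative U.S. infections
ICU_ADMISSION_RATE = 0.02         # assumed share of symptomatic cases needing ICU care

def projected_deaths(asymptomatic_rate: float, icu_fatality_rate: float) -> float:
    """Crude projection: symptomatic cases -> ICU admissions -> deaths."""
    symptomatic = ASSUMED_INFECTIONS * (1.0 - asymptomatic_rate)
    icu_cases = symptomatic * ICU_ADMISSION_RATE
    return icu_cases * icu_fatality_rate

scenarios = []
for asym in (0.15, 0.35, 0.55):            # asymptomatic share, per the 15%-55% range
    for icu_cfr in (0.05, 0.175, 0.30):    # ICU fatality, per the 5%-30% range
        scenarios.append((asym, icu_cfr, projected_deaths(asym, icu_cfr)))

for asym, icu_cfr, deaths in scenarios:
    print(f"asymptomatic={asym:.0%}  ICU fatality={icu_cfr:.0%}  deaths ~ {deaths:,.0f}")

low = min(d for *_, d in scenarios)
high = max(d for *_, d in scenarios)
print(f"Range across assumptions: {low:,.0f} to {high:,.0f}")
```

Even this toy arithmetic spans roughly an order of magnitude across plausible inputs, which is the same qualitative point the researchers make about their far more detailed model.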
Lastly, they looked at the model’s assumption about the virus’s basic reproductive number, the aforementioned R0 value. Popularized in the 2011 movie “Contagion,” the R0 value quantifies the average number of people who will catch the virus from a single infected person.
Under R0 assumptions ranging from 1.5 to 3.5 (plausible estimates based on the medical research discussed in their paper), the model predicted anywhere from 44,000 to 1.1 million deaths in the U.S. by Aug. 1.
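To illustrate why projections swing so widely with R0, here is a textbook SIR sketch (again an illustration, not the publicly available model the researchers actually ran) showing how the eventual share of the population infected grows sharply as R0 moves from 1.5 to 3.5. The population, infectious period, and infection-fatality rate are assumed placeholders:

```python
# Minimal SIR sketch: how the assumed R0 drives the eventual attack rate.
# The population, seed infections, and fatality rate are illustrative assumptions.

POPULATION = 330_000_000     # assumed U.S. population
INITIAL_INFECTED = 10_000    # assumed seed infections
RECOVERY_RATE = 1.0 / 7.0    # gamma: assumed 7-day infectious period
ASSUMED_IFR = 0.005          # assumed infection-fatality rate (0.5%)

def run_sir(r0: float, days: int = 365) -> float:
    """Euler-stepped SIR model; returns the cumulative share ever infected."""
    beta = r0 * RECOVERY_RATE                  # transmission rate implied by R0
    s = (POPULATION - INITIAL_INFECTED) / POPULATION
    i = INITIAL_INFECTED / POPULATION
    for _ in range(days):
        new_infections = beta * s * i
        recoveries = RECOVERY_RATE * i
        s -= new_infections
        i += new_infections - recoveries
    return 1.0 - s                             # everyone who ever left the S compartment

for r0 in (1.5, 2.5, 3.5):
    attack_rate = run_sir(r0)
    deaths = attack_rate * POPULATION * ASSUMED_IFR
    print(f"R0={r0}: attack rate {attack_rate:.0%}, deaths under assumed IFR ~ {deaths:,.0f}")
```

The absolute death figures here depend entirely on the placeholder fatality rate; the point is only that a shift in R0 from 1.5 to 3.5 changes the projected epidemic size dramatically, just as it does in the published scenarios.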
According to the Johns Hopkins University coronavirus tracker, U.S. deaths currently exceed 83,000, which is above their lower-end estimates. But the point their research makes is that these types of models produce many plausible scenarios, depending on reasonable assumptions.
As we learn more about the new coronavirus, it is imperative to continue to update the assumptions used in these models.
After they published their work, news surfaced that Microsoft had made some headway in making the Imperial College team’s model available. But the code it released is a heavily modified version of what the Imperial team actually used.
And, it turns out, the model has serious flaws, which a former software engineer from Google discusses at length in his blog.
The Imperial College code produces different answers from the same inputs. In particular, the same assumptions can yield results that differ by 80,000 deaths over a span of 80 days. The software engineer notes there are apparently myriad other problems as well, including undocumented code and numerous bugs.
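For illustration only, and without claiming this is what happens inside the Imperial code, here is one generic way a simulation can return different answers on identical inputs: stochastic draws that are not tied to a fixed seed. Pinning the seed makes the runs reproducible:

```python
import random

def toy_outbreak(r0: float, generations: int = 8, seed=None) -> int:
    """Toy branching-process outbreak: each case infects a random number of
    new cases with mean r0. Returns total cases. Purely illustrative."""
    rng = random.Random(seed)          # seed=None -> a different random stream every run
    current = total = 10
    for _ in range(generations):
        new_cases = 0
        for _ in range(current):
            # Binomial(10, r0/10) offspring per case, mean r0.
            new_cases += sum(1 for _ in range(10) if rng.random() < r0 / 10)
        current = new_cases
        total += new_cases
    return total

# Same inputs, no fixed seed: two runs give different answers.
print(toy_outbreak(1.8), toy_outbreak(1.8))

# Same inputs, fixed seed: runs are bit-for-bit reproducible.
print(toy_outbreak(1.8, seed=42), toy_outbreak(1.8, seed=42))
```

A deliberately stochastic model is expected to vary across seeds; the reproducibility problem arises when a run cannot be repeated exactly even with the same seed and settings, which is the behavior the code review describes.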
This isn’t the first time bad models have made their way into policy. As Dayaratna and Michel discussed in their work, statistical models can be useful tools for guiding policy, but they are only as credible as the assumptions on which they are based.
It is fundamentally important for models used in policy to be made publicly available, to have their assumptions clearly stated, and to have their robustness to changes in those assumptions tested. Models also need to be updated as time goes on in line with the best available evidence.
Bottom line: The Imperial College model didn’t meet any of these criteria. And sadly, it was one of the inputs relied on as the basis for locking down two countries.
The code Dayaratna and Michel used at The Heritage Foundation is available here, and its assumptions are clearly stated in their paper here.
Faced with the widely publicised, alarming figures produced by Imperial College’s Professor Neil Ferguson, governments felt forced to react with unprecedented lockdowns to suppress Covid-19. No one looked back at his ten-year record of predictions that proved wrong.
His previous models produced wildly inaccurate results: in 2005 he predicted up to 200 million deaths worldwide from bird flu, when just 282 people died between 2003 and 2009, without economies being locked down.
The Covid-19 model, too, had serious flaws: Ferguson relied on undocumented, highly complex, 13-year-old computer code originally written for a feared influenza pandemic.