Table of contents
- About the Model
- How Our Model is Different
- Historical Performance (Updated Jun 1)
- Concerns with the IHME model (Updated Daily)
- Data and Output
- Historical US Projections (Updated Daily)
- Government/Media Coverage
- Who We Are
About the Model
Our COVID-19 prediction model adds the power of artificial intelligence on top of a classic infectious disease model. We developed a simulator based on the SEIR model (Wikipedia) to simulate the COVID-19 epidemic in each region. The parameters/inputs of this simulator are then learned using machine learning techniques that attempt to minimize the error between the projected outputs and the actual results. We utilize the daily deaths data reported by each region to forecast future reported deaths. After additional validation steps (to minimize a phenomenon called overfitting), we use the learned parameters to simulate the future and make projections.
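As a sketch of this approach, the simulation step can be written as a simple discrete-time SEIR loop. All parameter values below are illustrative placeholders, not the values our model learns:

```python
# Minimal discrete-time SEIR simulator: a sketch of the approach described
# above, not the actual model. All parameter values are illustrative.

def simulate_seir(population, beta, sigma=1/3, gamma=1/10, ifr=0.01,
                  initial_infected=100, days=120):
    """Simulate daily deaths for one region.

    beta  - daily transmission rate (R0 = beta / gamma)
    sigma - rate E -> I (1 / incubation period)
    gamma - rate I -> R (1 / infectious period)
    ifr   - infection fatality rate, applied to each day's removals
    """
    S = population - initial_infected
    E, I, R = 0.0, float(initial_infected), 0.0
    deaths = []
    for _ in range(days):
        new_exposed = beta * S * I / population
        new_infectious = sigma * E
        new_removed = gamma * I
        S -= new_exposed
        E += new_exposed - new_infectious
        I += new_infectious - new_removed
        R += new_removed
        deaths.append(ifr * new_removed)  # deaths as a share of removals
    return deaths

daily_deaths = simulate_seir(population=1_000_000, beta=0.25)  # R0 = 2.5
```

In the real pipeline, the free parameters (beta, the distancing inflection dates, the IFR, etc.) would be the quantities fit against reported deaths rather than hard-coded.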
The goal of this project is to showcase the strengths of artificial intelligence in tackling one of the world’s most difficult problems: predicting the trajectory of a pandemic. Here, we use a purely data-driven approach by letting the machine do the learning.
We are currently making projections for the United States, all 50 US states (plus DC, PR, VI, Guam), and 63 countries (including all 27 EU countries). Combined with the US, these 64 countries account for 99% of all global COVID-19 deaths.
See an analysis of our model by Dr. Carl T. Bergstrom, Professor of Biology at the University of Washington.
Click here to read a more in-depth description of how our model operates.
Back to Top
How Our Model is Different
No public funding: We are the only model used by the CDC that receives no public funding, making us a completely independent entity.
Accounts for reopenings: We are also one of the only models used by the CDC that factors in individual state-by-state or country-by-country reopenings, allowing us to make more realistic projections.
Daily updates: Because our model is purely data-driven, it is quick to run and easy to regenerate. Unlike other models that are only updated once every few days, our model is updated on a daily basis, leading to more accurate projections.
Realistic simulations: Unlike other models that try to create complex mathematical equations to “fit a curve” or estimate the growth rate, we try to simulate the disease exactly how it progresses in reality: we start off with the entire population of a region, then on each day a certain proportion becomes infected, and those individuals spread the infection to others, and so forth. This makes our model easy to interpret and understand.
Flexibility to create scenarios: Because of our model’s realistic and flexible properties, we are able to generate various hypotheticals, such as what would happen if everyone began social distancing one week earlier or one week later. We have also generated hypotheticals on what would happen in each region if there were no reopenings. A model that simply uses a curve-fitting function or tries to track the growth rate is not able to generate such hypotheticals.
Full disclosures of assumptions/limitations: We describe our assumptions and limitations in the sections below in order to be transparent about what our model can and cannot do. This is something we encourage all other models to provide in a clear manner.
Minimal assumptions: Because our model uses machine learning to learn the inputs and parameters, we minimize the number of assumptions we have to introduce. This allows us to avoid certain biases that can be present when incorporating various assumptions.
Region-agnostic: Our model is agnostic to the region, enabling us to make projections for all 50 US states (plus DC, PR, VI, Guam), 30+ US counties, and 70+ countries. To the best of our knowledge, this is the most comprehensive model in terms of coverage. Furthermore, unlike other models, we do not require accessory data such as mobile phone data, case data, or temperature data. The only input data we require is the daily death reports from Johns Hopkins. Due to our machine learning layer, we also do not require manual tuning for each region, allowing us to focus our time on improving our projections.
Estimating testing targets: Because our model keeps an estimate of the number of newly infected individuals each day, we can use this estimate to determine how many tests each region should ideally perform each day. We base our estimates on the Harvard Global Health Institute’s study that assumes 10 contacts per infected individual. You can download our estimates here.
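For illustration, the testing-target arithmetic might look like the following. The daily infection count is a made-up input, and treating the target as one test per traced contact plus one for the index case is a simplified reading of the 10-contacts assumption:

```python
# Illustrative testing-target calculation, assuming ~10 contacts traced
# (and tested) per infected individual, per the Harvard study cited above.
CONTACTS_PER_INFECTION = 10

def daily_testing_target(estimated_new_infections_per_day):
    # One test for the infected individual plus one per traced contact
    # (a simplification for illustration).
    return estimated_new_infections_per_day * (1 + CONTACTS_PER_INFECTION)

# Hypothetical region with an estimated 2,000 new infections per day:
print(daily_testing_target(2000))  # -> 22000
```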
Learning the reproduction number (R): One of the most important properties for any infectious disease is the basic reproduction number, known as R0. Rather than pre-setting this value based on assumptions, our model is able to learn the value that most closely matches the data. For Italy, the R0 is found to be around 2-2.2, while for New York state, the R0 is 3.4-3.8. This means that on average, an infected person in New York will infect 3.4 to 3.8 additional people. For most regions, the R0 is found to be around 2, which matches the WHO findings. We are able to generate a plot of how the R value changes over time for all of our projections. To see our estimates of R values for every state and country, see our Infections Tracker page.
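A toy version of this learning step, assuming a simple grid search over candidate R0 values against a synthetic death series (the real pipeline uses a more involved simulator and out-of-sample validation):

```python
# Sketch of how a reproduction number can be "learned" from death data:
# simulate candidate R0 values and keep the one minimizing squared error.
# The simulator and the "observed" data here are toy stand-ins.

def simulate_deaths(r0, days=60, gamma=0.1, population=1e6, i0=100, ifr=0.01):
    beta = r0 * gamma  # in a simple SIR-style model, R0 = beta / gamma
    s, i = population - i0, float(i0)
    deaths = []
    for _ in range(days):
        new_inf = beta * s * i / population
        removed = gamma * i
        s -= new_inf
        i += new_inf - removed
        deaths.append(ifr * removed)
    return deaths

observed = simulate_deaths(2.2)  # pretend these are reported deaths

best_r0 = min(
    (r0 / 100 for r0 in range(100, 401, 5)),  # candidates 1.00 .. 4.00
    key=lambda r0: sum((a - b) ** 2
                       for a, b in zip(simulate_deaths(r0), observed)),
)
print(best_r0)  # -> 2.2
```

Because the "observed" series was generated with R0 = 2.2, the search recovers that value exactly; with real, noisy data the fit would only be approximate.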
Learning the infection fatality rate (IFR): Rather than rely on various non-consensus studies on the infection fatality rate (IFR), our model can also learn the best value for the IFR in each region. For example, our model determined that the IFR in the United States is around 1%. This is largely consistent with what scientists have found, despite the fact that the case fatality rate is much higher (e.g. Italy is at 13-14%).
Learning when people started social distancing: It turns out that many people began social distancing before a region’s formal lockdown order was issued. Our model is able to learn the exact dates when people in a region started social distancing, which are often independent of the stay-at-home orders. For example, in New York, this inflection point is determined to be around March 14, which closely matches the NYC subway ridership data. For the US as a whole, we estimate that date to be around March 18. You can see what happens if everyone in the US reacted one week earlier (March 11) or one week later (March 25).
Open data: We upload all of our raw data/projections daily onto our GitHub page. All of the data used on this website can be downloaded.
Strong validation system: Many of the other models tend to overfit to the data. We have a strong validation system to make sure that all of our updates pass out-of-sample validation before they can be included in the model. This allows us to better differentiate the signal from the noise and be more resistant to outliers. Because all of our assumptions and projections are tested/verified on all 50 states as well as over 70 countries, we are able to create more robust projections.
Back to Top
Historical Performance
Last Updated: Jun 1
A model isn’t very useful if it’s not accurate. Below is our analysis of how various models considered by the CDC have performed over the past few weeks. Because the CDC receives weekly projections from models every Monday, we use projections from past Mondays to evaluate the models.
Click here to see performance evaluations for past dates.
May 30 evaluation of state-by-state projections
May 30 evaluation of US projections
- A baseline model that simply uses the previous week’s average deaths to make future projections outperforms many models for short-term forecasts.
- We are one of only two models that beat the baseline every week (the other is LANL).
- The IHME model, a model frequently cited by the White House and media, consistently performs in the bottom half of all models for both its US projections and state-by-state projections. The model also frequently fails to beat the baseline model.
- The COVIDhub ensemble model is created by taking a combination of all eligible models that submit projections to the CDC. Our projections are included in this ensemble.
- For state-by-state projections, we evaluate all models that have 4+ week projections for more than 40 states. For models with missing state projections, we use the mean projection for that state (among all the models).
- While past performance is not necessarily indicative of future performance, we believe it’s important to consider a model’s historical accuracy and not just a model’s future forecasts and/or the creator’s name recognition. It is also important to make sure that a model can perform better than the baseline.
- We welcome and encourage independent model evaluations. See here for an evaluation from a PhD data scientist at NASA Ames.
Projections taken from: https://github.com/reichlab/covid19-forecast-hub
Truth data from Johns Hopkins: https://github.com/CSSEGISandData/COVID-19
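For reference, the baseline model described above is trivial to implement (the death counts below are hypothetical):

```python
# The naive baseline used in the evaluations above: project each of the next
# days' deaths as the average of the previous week's daily deaths.

def baseline_forecast(daily_deaths, horizon_days=7):
    last_week_avg = sum(daily_deaths[-7:]) / 7
    return [last_week_avg] * horizon_days

recent = [1200, 1500, 1400, 1100, 900, 1300, 1600]  # hypothetical last 7 days
print(baseline_forecast(recent, horizon_days=3))
```

A model that cannot beat this one-liner on short-term forecasts is adding little predictive value.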
Back to Top
Concerns with the IHME model
In this section, we compare our projections with a popular model developed by the Institute for Health Metrics and Evaluation (IHME) that is frequently cited by the White House and the media. Below, we compare a sample of our past projections (C19Pro) with IHME’s for the US, New York, Michigan, New Jersey, California, and Italy, some of the most heavily impacted regions.
As you can see from the plots above, IHME’s projections failed to accurately capture the true trajectory for these regions. Our projections, meanwhile, have been significantly more accurate. Below, we go into further detail as to why IHME is a flawed model.
Articles from outlets such as Vox, STAT News, CNN, and Quartz share our concerns.
In the words of Ruth Etzioni, an epidemiologist at Seattle’s Fred Hutchinson Cancer Research Center, “that [the IHME model] is being used for policy decisions and its results interpreted wrongly is a travesty unfolding before our eyes.”
Back to Top
May 4 Revision
On May 4, IHME completely overhauled their previous model and increased their projections from 72k to 132k US deaths by August. Whereas they were previously underprojecting, they are now overprojecting the month of May. At the time of their new update on May 4, there were 68,919 deaths in the US. They projected that there would be 17,201 deaths in the week ending on May 11. In fact, there were only 11,757 deaths. IHME overshot their 1-week projections by 43%. Meanwhile, we projected 10,676 deaths from May 4 through May 11, an error of less than 10%.
IHME went from severely underprojecting to overprojecting, as you can see in the below comparison of May 4 projections. Furthermore, as recently as May 12, they were still projecting 0 daily deaths by August 4. Their model should not be relied on for accurate projections.
Back to Top
Sample Summary of IHME Inaccurate Predictions
In their April 15 projections, the death total that IHME projected would take four months to reach was in fact exceeded within six days:
[Table: total deaths as of April 21 vs. IHME’s and our August projections made on April 15]
As you can see above, their model made misguided projections for almost all of the worst-impacted regions in the world. Most alarmingly, they continued to make low projections. Below are their projections from April 21. All of them were exceeded by May 2, a mere 11 days later:
[Table: total deaths as of May 2 vs. IHME’s and our August projections made on April 21]
As scientists, we update our models as new data becomes available. Models are going to make wrong predictions, but it’s important that we correct them as soon as new data shows otherwise. The problem with IHME is that they refused to recognize and update their wrong assumptions for many weeks. Throughout April, millions of Americans were falsely led to believe that the epidemic would be over by June because of IHME’s projections.
On April 30, the director of the IHME, Dr. Chris Murray, appeared on CNN and continued to defend their model’s projection of 72,000 deaths by August. On that day, the US reported 63,000 deaths, with 13,000 deaths coming from the previous week alone. Four days later, IHME nearly doubled their projections to 135,000 deaths by August. One week after Dr. Murray’s CNN appearance, the US surpassed his estimate of 72,000 deaths by August. It seems ill-advised to go on national television and proclaim 72,000 deaths by August, only to double the projection a mere four days later.
Unfortunately, by the time IHME revised their projections in May, millions of Americans had heard their 60,000-70,000 estimate. It may take a while to undo that misconception and the policies that were put in place as a result of this misleading estimate.
Back to Top
As of April 11, IHME projected 225 (0 - 1,180) deaths in the US from June 1 to August 4. While we hope the US only has 225 total deaths from June to August (an average of 3 deaths per day), we believe this is an underestimate.
New data is extremely important when making projections such as these. That’s why we update our model daily based on the new data we receive. Projections using today’s data are much more valuable than projections from 2-3 days ago. However, due to certain constraints, IHME is only able to update their model 1-2 times a week: “Our ambition to produce daily updates has proven to be unrealistic given the relative size of our team and the effort required to fully process, review, and vet large amounts of data alongside implementing model updates.”
Back to Top
On April 17, IHME stated that they are incorporating new cell phone mobility data which indicate that people have been properly practicing social distancing: “These data suggest that mobility and presumably social contact have declined in certain states earlier than the organization’s modeling predicted, especially in the South.” As a result, IHME lowered their projections from 68k deaths to 60k deaths by August. Their critical flaw is that they assume a linear relationship between reduced mobility and reduced transmission - this is not the case.
Most transmissions do not happen with strangers, but rather with close contacts. Even if you reduce your mobility by 90%, you do not reduce your transmission by 90%. The data from Italy shows that transmission only falls by around 60%. That’s the difference between 20k and 40k+ deaths. IHME was likely making the wrong assumption that a 90% reduction in mobility will decrease transmission by 90%. Here is a compilation from infectious disease expert Dr. Muge Cevik showing that household contacts were the most likely to be infected.
We posted a Tweet on April 11 about MTA (NYC) and BART (Bay Area) subway ridership being down 90% in March. However, deaths have only dropped around 25% in NY, while CA has yet to see a sharp decrease in deaths in April, more than a month after the drop in ridership.
Interestingly, after IHME suddenly revised their projections from 72k to 130k on May 4, the director of IHME offered this explanation for why they raised their estimates: “…we’re seeing just explosive increases in mobility in a number of states that we expect will translate into more cases and deaths.” This directly contradicts their press release from just two weeks earlier stating that mobility had been lower than predicted. A two-week difference in mobility should not explain this sudden jump in projections - only a flawed methodology would.
Back to Top
State Reopening Timeline
In their April 17 press release, IHME released estimates of when they believe each state will have a prevalence of fewer than 1 case per 1 million. They noted that 35 states will reach under 1 prevalent infection per 1 million before June 8, and that “states such as Louisiana, Michigan, and Washington, may fall below the 1 prevalent infection per 1,000,000 threshold around mid-May.”
As of May 15, Louisiana, Michigan, and Washington are reporting 30-90 confirmed cases per million each day. Furthermore, prevalent infections are 5-15x higher than reported cases, since most cases are mild and thus not tested/reported. As a result, we estimate Louisiana and Michigan to have around 7,000 prevalent infections per million, which is 7,000 times higher than IHME’s April 17 estimates. An analysis of many of the remaining states shows a similarly high degree of error. Hence, IHME’s estimates have been off by more than three orders of magnitude.
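The prevalence arithmetic above can be reconstructed roughly as follows. Each input is a mid-range figure from the text, and the 15-day active-infection window is an assumption, so treat this as an order-of-magnitude sketch rather than our model’s method:

```python
# Order-of-magnitude reconstruction of the prevalence estimate above.
reported_cases_per_million_per_day = 60  # mid-range of the 30-90 quoted
infection_duration_days = 15             # assumed active-infection window
underreporting_factor = 8                # within the 5-15x range quoted

prevalent_per_million = (reported_cases_per_million_per_day
                         * infection_duration_days
                         * underreporting_factor)
print(prevalent_per_million)  # -> 7200, close to the ~7,000 quoted above
```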
Unfortunately, it is likely that many individuals and policy-makers used IHME’s misguided timelines to shape reopening decisions. Their reopening timelines were picked up and widely disseminated by many media outlets, both local and national. Any policies guided by these estimates can have repercussions weeks and months down the road.
Back to Top
May 4 Update: IHME completely overhauled their previous model to now use an SEIR model. Our model has been based on SEIR since we first began making projections on April 1.
On top of everything we mentioned above, their model is also inherently flawed from a mathematical perspective. They try to model COVID-19 infections using a Gaussian error function. The problem is that the Gaussian error function is symmetric by design, meaning that the curve comes down from the peak at the same rate as it goes up. Unfortunately, this has not been the case for COVID-19: we come down from the peak at a much slower pace. This leads to a significant under-projection in IHME’s model, which we have thoroughly highlighted. University of Washington biology professor Dr. Carl T. Bergstrom discussed this in more detail in this highly informative series of Tweets.
Click here to see how our projections have changed over time, compared with the IHME model. For a comparison of April projections for several heavily-impacted states and countries, click here.
To conclude, we believe that a successful model must be able to quickly determine what is realistic and what is not, and the above examples highlight our main concerns with the IHME model.
Back to Top
Data and Output
To make our projections, we use the daily death totals provided by Johns Hopkins CSSE, which is considered by experts to be the “gold standard” reference data. We do not use case-related data in our modeling for the reasons alluded to here.
Every day, raw daily projections for all 50 US states and select international countries are uploaded to our GitHub page. We are projecting future deaths as reported by Johns Hopkins CSSE. For the US, this includes both confirmed and probable deaths.
Back to Top
US states: We assume heavy social distancing until the reopening date and moderate social distancing afterwards. We use the reopening date as outlined by the New York Times. For states with a staggered reopening, we use the date for which restaurants are allowed to reopen. For states where there is no concrete reopening date (states highlighted in yellow on the NYT map), we assume a reopening date of June 1. Reopening will likely cause a second wave of infections in states where the outbreak has not yet been fully contained.
European countries: We assume heavy social distancing until mid-May and moderate social distancing afterwards.
Non-US and Non-European countries: We try our best to keep track of when each country plans to reopen. If there is no news, we assume social distancing through August.
Heavy vs moderate social distancing
Heavy social distancing is what many states and countries enacted in the initial stages of the epidemic: stay-at-home orders, closed non-essential businesses, etc. Infection rates typically decrease ~60%, going from an R0 of around 2-3 to an R of 0.6-1.0. As long as R, a measure of how many people an infected person infects on average, is less than 1, infections will decrease over time. If R is greater than 1, then the infection curve will rise. Hence, the ultimate goal is to keep R under 1.
Moderate social distancing is what we assume will happen once states and countries gradually begin relaxing their social distancing guidelines. Some establishments will reopen, but people will still be somewhat mindful of maintaining social distancing. Most states and countries will have guidelines that aim to maximize social distancing and minimize close contact, such as enforcing capacity limits and recommending mask-wearing. We assume that infection rates will increase 0-20%, resulting in an R of around 0.8-1.2. This is based on analysis of R values in regions where there were no lockdowns, such as Sweden and South Dakota. Note that this is still a lower infection rate than what it was prior to the outbreak for most regions.
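A quick illustration of why the R threshold above matters, using the heavy (R ≈ 0.8) and moderate (R ≈ 1.2) values quoted and a made-up starting count:

```python
# Per disease generation, new infections are multiplied by R. With R < 1 the
# outbreak shrinks; with R > 1 it grows. Starting count is hypothetical.

def infections_after(generations, r, start=1000):
    infections = start
    for _ in range(generations):
        infections *= r
    return infections

heavy = infections_after(10, r=0.8)     # heavy distancing, R < 1: shrinks
moderate = infections_after(10, r=1.2)  # moderate distancing, R > 1: grows
print(round(heavy), round(moderate))    # -> 107 6192
```

After ten generations, the same starting outbreak either dwindles to ~100 new infections or grows past 6,000, which is why keeping R under 1 is the ultimate goal.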
If regions impose stricter social distancing guidelines than our assumptions listed above, then we will likely see lower infection and death rates than the current projections. Conversely, if regions impose looser guidelines, then we will likely see higher infection and death rates. For example, if California reopens before June 1, there will be an increased chance of an earlier resurgence. And if a state required all residents to wear masks, the likelihood of a steep increase in infections would decrease, according to some recent studies.
In regions where the outbreak has not yet been fully contained, it is possible that reopening will cause a second wave of infections if states fail to maintain sufficient social distancing.
We assume that states with a second outbreak will take actions to reduce transmission, such as increased contact tracing, mandatory mask wearing, improved treatment, etc. In the case where the infections curve continues to rise exponentially after a reopening, it may become necessary for regions to impose additional mitigation measures, perhaps even a second lockdown. A second lockdown was seen in numerous Asian countries where a second wave occurred, including in Japan, Hong Kong, and Singapore. Our model incorporates the concept of a second lockdown, which we estimate will happen approximately 30 days after the reopening. Additional mitigation strategies are only necessary if the effective reproduction number (R) after reopening is significantly greater than 1.
The current and total infections estimates in our projections are at the core of our SEIR model. We use those estimates to make forecasts regarding future deaths according to the specifications of the SEIR model. The total infections estimate includes all individuals who have ever been infected by the virus, including asymptomatic individuals as well as those who were never tested. The current infections estimate is based on how many people are currently infected at that time point (total - recovered). To compute current infections, we assume that individuals are infected for an average of 15 days. We estimate that the true number of total infections is likely 5-15x higher than reported cases for most regions.
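Under the 15-day assumption above, current infections can be approximated as a trailing sum of new infections (the daily series here is synthetic):

```python
# Current infections as described above: total minus recovered, approximated
# here as the sum of new infections over the last 15 days (the assumed
# average duration of infection).

INFECTION_DURATION_DAYS = 15

def current_infections(daily_new_infections, day):
    start = max(0, day - INFECTION_DURATION_DAYS + 1)
    return sum(daily_new_infections[start:day + 1])

new_infections = [100] * 30  # hypothetical: 100 new infections per day
print(current_infections(new_infections, day=29))  # -> 1500
```

Total infections would simply be the cumulative sum of the same series; current infections lag behind it by the recovered population.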
Infection Fatality Rate (IFR)
Jun 1 Update: Given new data, we have changed our model to use a variable IFR that decreases over time to reflect improving treatments and the lower proportion of care home deaths. We decrease the IFR linearly over the span of 3 months until it is 75% of the original IFR.
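A minimal sketch of this variable IFR, assuming a 90-day window and a 1% starting IFR (both stand-ins, not the model’s learned values):

```python
# Variable IFR per the Jun 1 update: decrease linearly over ~3 months
# (here, 90 days) until reaching 75% of the original IFR, then hold.

def ifr_on_day(day, base_ifr=0.01, decay_days=90, floor_fraction=0.75):
    fraction = max(floor_fraction, 1 - (1 - floor_fraction) * day / decay_days)
    return base_ifr * fraction

print(ifr_on_day(0))    # -> 0.01 (original IFR)
print(ifr_on_day(45))   # halfway: 87.5% of the original
print(ifr_on_day(200))  # floored at 75% of the original
```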
We estimate that the true mortality rate (IFR) for COVID-19 in the US is between 0.9-1.2%. This matches a May 7 study that estimates the IFR to be slightly less than 1.3% after accounting for asymptomatic cases. We also found that most countries in Europe (with the exceptions of the United Kingdom, Spain, and Eastern Europe) have an IFR closer to 0.75%, which matches this May 6 study. Hence, in our projections, we use 0.75% for those European countries and 1% for all US states and other countries.
Recent global and US studies point to a 1% IFR as a reasonable estimate.
Back to Top
We want to be as clear as possible regarding what our model can and cannot do. While we try our best to make accurate projections, no model is perfect. The future is not set in stone: a single policy change or a small change in assumptions can have a large impact on how the epidemic progresses.
That’s why in addition to our most likely estimate, we also provide a 95% confidence interval to reflect this uncertainty. For example, if we predict 150,760 deaths with a range of 88-294k, it means that there is roughly a 95% chance that the true deaths will be between 88-294k. Note that these confidence intervals are generated given that our above assumptions hold true. There are many real-world variables that can cause our assumptions to be inaccurate and affect the true outcome. We will try our best to address any inaccurate assumptions as time goes on.
We want to caution against focusing on one particular number as the outcome of this model. We are in fact projecting a range which includes a most likely outcome. If the true results fall within the range, that is within the expected outcome of this model. We highly recommend that you include our range when referencing our projections (e.g. 21,342 (15-34k) deaths).
Data accuracy: A model is only as good as the data we feed it. If the data is not accurate, then it is difficult to make accurate projections downstream. Recently, there have been various reports regarding the accuracy and integrity of the data that some US states have been reporting (e.g. see The Atlantic and Associated Press). We share these concerns, and hope that states will do their best to report accurate data.
Day-of-week factors: We currently do not account for day-of-week factors in death reporting. According to our analysis, deaths reported on Sunday/Monday are about 60% of those reported Tuesday-Thursday. So we expect, on average, our projections to be higher than Sunday/Monday reports and lower than Tuesday-Thursday reports.
Confidence intervals: Due to the aforementioned day-of-week factors and various sources of reporting noise (e.g. states sometimes report 0 on one day and make up for it the next), we recommend smoothing daily reported deaths before comparing them to our daily confidence intervals.
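A simple trailing 7-day average is one way to apply the smoothing recommended above (the death counts below are made up):

```python
# Trailing 7-day average: washes out day-of-week reporting effects before
# comparing reported deaths against daily confidence intervals.

def smooth_7day(daily_deaths):
    out = []
    for i in range(len(daily_deaths)):
        window = daily_deaths[max(0, i - 6):i + 1]
        out.append(sum(window) / len(window))
    return out

# Hypothetical week with a low Sunday/Monday and a catch-up Tuesday:
raw = [500, 550, 1400, 1000, 950, 900, 700]
print(round(smooth_7day(raw)[-1], 1))  # -> 857.1
```

A centered average (3 days on each side) works equally well for retrospective comparisons; the trailing version is usable in real time.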
Data frequency: Because our model uses only the daily death totals from each region to make projections, it is more effective for regions with more available death data (such as New York) than for regions with only a few reported deaths (such as Wyoming).
Seasonality: We currently do not explicitly factor in seasonality changes. A May 8 study of 144 geopolitical areas finds no significant correlation between temperature and transmission. However, if seasonality effects are reflected in the data, we will implicitly factor it in. It is possible that the effects of warmer temperatures may be partially offset by lockdown fatigue.
Lockdown fatigue / holidays: As shown in various mobility data and our analysis of the NYC subway data, an increasing number of people have been moving around in the weeks following a lockdown. This may contribute to an increase in infections in the weeks following the lockdown/mitigation. Similarly, holidays may be a source of “superspreader” events, which we currently do not explicitly incorporate.
Reporting differences: Different countries follow different guidelines on how they report COVID-19 deaths. For example, Belgium is one of the most comprehensive countries when it comes to death reporting: they report all probable deaths as well as nursing home deaths. In contrast, the United Kingdom only began including care home deaths on April 29, having previously reported only hospital deaths. Because we are projecting future reported deaths, our model assumes that reporting guidelines remain constant for each country.
Excess deaths: While we attempt to predict the official death total, the true death total will be higher due to underreporting at various levels. The New York Times and Financial Times are currently tracking these excess deaths.
End date: We are only making projections for 10-18 weeks ahead, but this does not mean that the epidemic will stop afterwards. Deaths will continue to rise even after we stop making projections. We are also currently not factoring in a fall wave, which was the most deadly wave in the 1918 Flu Pandemic.
International projections: Our model was created and optimized for the United States (and to a lesser extent, Europe). We include our projections for over 60 countries, but we want to caution that the model was not optimized for international countries. So if you plan on citing our model’s international projections, please be sure to also consult each country’s health experts first.
Affecting the future: Our projections are not set in stone and do not exist in a vacuum. If everyone saw our projections and heeded the advice of experts to continuously practice social distancing, infections and deaths would decrease over time, leading to a final tally that is lower than our projection. That does not mean that our projections were “wrong”. In fact, our greatest hope is the scenario described above, where we help prevent future infections and deaths, causing our projections to be an overestimate. For example, an early March Imperial College study estimated that 2.2 million people would die in the US if mitigations were not implemented. This helped lead to a wave of lockdowns and stay-at-home orders, thereby significantly reducing deaths. That does not mean that the Imperial College study was “wrong” - their study helped shape the outcome of the future.
While we attempt our best to ensure accuracy and precision, no model is perfect, so we urge everyone to use caution when interpreting these projections. This is just one particular model, so we encourage everyone to evaluate and be open to multiple sources. At the end of the day, the decision-making rests in the hands of people, not machines.
Back to Top
Historical US Projections
Below, we show how our (C19Pro) August 4 projections for the US have changed over time, compared to IHME’s. We also show a comparison of the latest projections.
Note that for the entire month of April, IHME projected between 60,000 and 73,000 deaths by August, all while deaths increased by an average of 2,000 per day. All of their August projections from April were surpassed by May 6.
Also note that while we update our projections daily, IHME only updates their projections once or twice a week.
Back to Top
Government/Media Coverage
- CNN - May 5
- StatNews - Apr 30
- MarketWatch - May 6
- New York Times - May 12
- The Economist - May 21
- CNN - Apr 28
- TheHill - Apr 30
- The Mercury News - May 26
- The Mercury News - May 4
- Star Tribune - May 13
- New York Times - May 5
- NPR - May 7
- USA Today - May 1
- New York Post - Apr 29
- Reason - May 28
- Reason - May 1
- The Bulwark - May 21
- Bustle - May 22
- Courthouse News - May 6
- KUOW-FM - May 18
- ABC15 Arizona - Apr 21
- Alto Nivel - May 21
Back to Top
Who We Are
covid19-projections.com is made by Youyang Gu, an independent data scientist. Youyang completed his Bachelor’s degree at the Massachusetts Institute of Technology (MIT), double majoring in Electrical Engineering & Computer Science and Mathematics. He also received his Master’s degree at MIT, completing his thesis as part of the Natural Language Processing group at the MIT Computer Science & Artificial Intelligence Laboratory. His expertise is in using machine learning to understand data and make accurate predictions. You can contact him on Twitter or by using the Contact page.
Back to Top
- Add 7 new countries (Australia, Belarus, Bolivia, Cuba, Honduras, Kuwait, UAE), 2 Canadian provinces (Alberta, British Columbia), and 20 US counties
- Increase projected end date from August 4 to September 1
- Add 2 Canadian provinces (Ontario, Quebec) and 14 US counties
- Add plots for the effective reproduction value (R_t) over time. Raw data also available on GitHub.
- Add hypothetical of how the US would fare if everyone began social distancing one week earlier or one week later.
- Add projections that assume no reopening by appending -noreopen to the projections URL (e.g. covid19-projections.com/us-noreopen)
- Add projections for 23 additional countries: Algeria, Argentina, Bangladesh, Chile, Colombia, Dominican Republic, Ecuador, Egypt, Iceland, Israel, Japan, Malaysia, Moldova, Morocco, Nigeria, Pakistan, Panama, Peru, Saudi Arabia, Serbia, South Africa, South Korea, Ukraine
- Add R0 and post-mitigation R estimates to GitHub
- Update US states reopening timelines according to the New York Times
- Add daily combined projections to GitHub
- Add plots for R-value estimates for every state and country to Infections Tracker page
- Forecasts added to the CDC website
- Incorporate probable deaths into projections, following updated CDC guidelines
- First projections submitted to the Centers for Disease Control and Prevention (CDC).
- Incorporate the relaxing of social distancing in June (see our Assumptions page)
- Add Norway and Russia to projections
- Add Infections Tracker page that estimates the number of infections in each US state
- Increase projected end date from June 30 to August 4
- Add plots for the number of infected individuals
- Add projections for all European Union countries and 7 additional countries: Brazil, Canada, India, Indonesia, Mexico, Philippines, Turkey
- Launch covid19-projections.com
- Add graphs for each state
- Separate global data from US data
- Add 9 international countries for projections: Belgium, France, Germany, Iran, Italy, Netherlands, Spain, Switzerland, United Kingdom
- Add lower and upper bounds to projections; also project date of peak deaths
- Incorporate international data and add projections for Italy
- Add first projections for the US and individual states
- Begin project