
Becoming part of the solution: the XPRIZE Pandemic Response Challenge

Updated: Mar 25, 2021

The XPRIZE Pandemic Response Challenge is a $500K, four-month challenge that focuses on the development of data-driven AI systems to predict COVID-19 infection rates and prescribe Intervention Plans (IPs) that regional governments, communities, and organizations can implement to minimize harm when reopening their economies. Learn more.

Introduction

Our team found innovative ways to extend the baseline LSTM model, using Keras's simple APIs to build on top of it for better performance and better generalization across countries.

The model we submitted to Phase One of the Pandemic Response Challenge uses the example LSTM model provided on GitHub as a baseline. We determined that LSTM models perform well on time-series prediction because of their ability to learn long-term dependencies across sequences of observations. We also wanted to reuse as much of the provided data preprocessing and model setup as possible, leaving us more time to experiment with different LSTM variants and configurations to find the optimal model.

Our Predictor Model

To build on top of the baseline model, we looked for creative ways to select the countries used for training. We tried several sorting methods to decide which countries to include: countries with a high count of new cases, countries with the most total cases, countries with the most deaths, and a ratio analogous to the prediction ratio described in the Summary and Conclusion section, computed for deaths rather than new cases. We eventually settled on the top 30 countries ranked by this death ratio for training. We also introduced a dropout layer of 20% on both the context and action LSTMs to reduce overfitting. Adding dropout reduced the sudden drops in cases we were seeing and allowed the number of new cases to decline at rates similar to the actual rates in our testing data.
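
As a rough illustration, here is a minimal Keras sketch of where the 20% dropout sits relative to the two LSTM branches; the layer sizes, names, and input shapes are illustrative assumptions rather than the exact baseline configuration.

```python
from tensorflow.keras.layers import Input, LSTM, Dropout

# Illustrative shapes: a 21-day lookback window, one case-ratio feature,
# and (for example) 12 NPI action columns.
LOOKBACK, N_NPIS = 21, 12

# Context branch: learns the underlying growth rate from the case-ratio history.
context_input = Input(shape=(LOOKBACK, 1), name="context_input")
context = LSTM(32, name="context_lstm")(context_input)
context = Dropout(0.2, name="context_dropout")(context)  # 20% dropout against overfitting

# Action branch: learns how the NPIs dampen that growth rate.
action_input = Input(shape=(LOOKBACK, N_NPIS), name="action_input")
action = LSTM(32, name="action_lstm")(action_input)
action = Dropout(0.2, name="action_dropout")(action)  # same 20% dropout on the action branch
```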

To further address the sudden drop in cases after a week of predictions, we inspected the NPI action columns. We discovered that the ordinal scale representing the strictness of each policy was not uniformly distributed, so some of the ordinal values appeared to be outliers in the data. To handle these outliers, we computed the Z-score on all NPI action columns, which tamed the action outliers and compressed the ordinal values to roughly the range of -2 to 2. Training on the normalized actions brought the model closer to the ground-truth values.
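
As a hedged sketch, the normalization amounts to standardizing each ordinal NPI column; the column names below come from the Oxford dataset, but the subset shown is only illustrative.

```python
import pandas as pd

# Illustrative subset of the Oxford NPI columns; the full list is longer.
NPI_COLUMNS = [
    "C1_School closing",
    "C2_Workplace closing",
    "C6_Stay at home requirements",
]

def zscore_npis(df: pd.DataFrame, npi_columns=NPI_COLUMNS) -> pd.DataFrame:
    """Standardize each ordinal NPI column to zero mean and unit variance.

    Because the strictness levels are not uniformly distributed, the
    standardized values land roughly in the -2 to 2 range, taming the
    outlier levels before training.
    """
    out = df.copy()
    for col in npi_columns:
        std = out[col].std()
        out[col] = (out[col] - out[col].mean()) / std if std > 0 else 0.0
    return out
```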

Another small change was training on data from March 2020 onward rather than from the beginning of the dataset in January. We noticed that most countries were not reporting many COVID-19 cases before March, as few cases had been detected by then; March was also when most countries began lockdowns and started implementing NPIs.
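
For reference, the date filter itself is trivial; a minimal sketch, assuming a pandas DataFrame with a Date column:

```python
import pandas as pd

def from_march(df: pd.DataFrame) -> pd.DataFrame:
    """Drop rows before March 2020, when few cases were reported and most NPIs began."""
    dates = pd.to_datetime(df["Date"])
    return df[dates >= "2020-03-01"]
```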

We tried using a variety of data sources, but we found it difficult to branch out from the Oxford dataset because we could not generalize other datasets to the regions required for prediction. The datasets we tried included the Google Mobility dataset and the Microsoft Azure holidays dataset. The motivation behind the Google Mobility dataset was to correlate movement trends over time across place categories (such as parks, transit stations, workplaces, and residential areas) with confirmed-case trends in the corresponding regions. The main issue we encountered was that the mobility data is geographically much more granular than Oxford's, and we ran into problems merging the mobility trends with the confirmed cases based solely on country name and region. Merging the two datasets resulted in a loss of mobility data because of the different region granularities in the Oxford and Google Mobility datasets. We also attempted to incorporate holidays, using the Microsoft Azure dataset mentioned above, to see whether case spikes coincided with holidays in any of the provided countries.
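 
A sketch of the kind of join we attempted is below. The column names follow the public Oxford and Google Mobility files, but the renaming and join keys are our illustrative assumption about how one might line the two datasets up; unmatched rows end up with NaN mobility values, which is exactly the granularity mismatch described above.

```python
import pandas as pd

def merge_mobility(oxford_df: pd.DataFrame, mobility_df: pd.DataFrame) -> pd.DataFrame:
    """Left-join Google Mobility trends onto the Oxford rows.

    Oxford rows are keyed on CountryName/RegionName/Date, while the mobility
    report uses country_region/sub_region_1/date (plus finer sub_region_2
    levels that have no Oxford counterpart), so many rows fail to match.
    """
    mobility = mobility_df.rename(columns={
        "country_region": "CountryName",
        "sub_region_1": "RegionName",
        "date": "Date",
    })
    return oxford_df.merge(
        mobility,
        on=["CountryName", "RegionName", "Date"],
        how="left",  # keep every Oxford row; unmatched rows get NaN mobility values
    )
```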

We also noticed an error in our dataset as we were submitting our predictor. The populations for some of the regions required for prediction were missing from the dataset we were using, so we were unable to generate predictions for those regions. Had we caught the missing populations earlier and filled them in, we believe our model's performance would have improved considerably.
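
In hindsight the fix would have been straightforward; here is a hedged sketch, assuming a hypothetical supplementary lookup table keyed on CountryName and RegionName:

```python
import pandas as pd

def fill_missing_populations(df: pd.DataFrame, population_lookup: pd.DataFrame) -> pd.DataFrame:
    """Fill missing Population values from a supplementary lookup table.

    `population_lookup` is hypothetical here: a table with CountryName,
    RegionName, and Population columns assembled from an external source.
    """
    merged = df.merge(
        population_lookup,
        on=["CountryName", "RegionName"],
        how="left",
        suffixes=("", "_lookup"),
    )
    merged["Population"] = merged["Population"].fillna(merged["Population_lookup"])
    return merged.drop(columns=["Population_lookup"])
```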

Other methods we tried that are not mentioned above included adding a convolutional layer to the LSTM model, trying different linear algorithms, using Keras's stateful LSTM option to carry hidden state across batches, and training on a variety of other country selections. However, none of these improved the model's performance, so we stuck with the methods described above and the training countries that performed best for us.

Generality & Consistency

When testing our models, we used November and December data as a test set for accuracy, but we also let the models predict further into the future to understand their long-term behavior. Our model performs well across a variety of countries and generalizes well because of its inputs: the number of cases over the last 21 days and the NPIs over the last 21 days.
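
The evaluation split itself is simple; a minimal sketch, assuming a Date column and treating November-December 2020 as the holdout window:

```python
import pandas as pd

def temporal_split(df: pd.DataFrame, holdout_start: str = "2020-11-01"):
    """Return (train, holdout): rows before the holdout start vs. the Nov-Dec 2020 window."""
    dates = pd.to_datetime(df["Date"])
    holdout = df[(dates >= holdout_start) & (dates <= "2020-12-31")]
    train = df[dates < holdout_start]
    return train, holdout
```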

We believe our model does a good job predicting short-term and long-term trends for most regions. It predicts another spike in cases in Summer 2021, which we attribute to the data issue described above; however, it is entirely possible that some countries will see Summer 2021 spikes if their NPIs stay the same. Overall, our model does not simply predict a continued upward spike; it attempts to capture shifting trends, which is promising.

Speed and Resource Use

Our model is extremely lightweight because of its limited number of layers. On a 2018 MacBook Pro, it trains in 15 minutes on 30 countries and 8 months of data, and it ran predictions on the sandbox in 20-30 minutes, well below the one-hour limit.

Specialty Regions

We did not add the isSpecialty column to our predictions because we overlooked it in the guidelines, but our specialty region is North America. We were able to obtain the populations of each of the countries in North America, which increased our accuracy in this region. We believe we predict new cases accurately here because these countries were used for training and testing, so our models are well tuned to them. The short-term and long-term trends appear to match what experts are saying about future cases in this region given current NPIs. Past January 2021, our model shows a downward trend in North America.

Summary and Conclusion

For its predictions, the LSTM model uses a ratio described in the Cognizant paper, determined from the number of new cases and the population.

We predict the ratio of new cases relative to the portion of a country's population that has not yet been infected. We decided to use the same prediction ratio as the one described in the Cognizant paper because of how cleverly it uses the population data: by predicting the number of new cases over the number of people not yet infected, we are predicting the true infection rate in each country. The model consists of two LSTM layers: one for predicting the growth rate of the disease (context) and one for predicting the effects of NPIs (action); their inputs are the case ratio described above and the NPIs, respectively. The modifications we added to the model are described above as part of our innovative methods.
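
A short sketch of that ratio and its inverse transform back to case counts (variable names are illustrative; the idea follows the Cognizant paper as described above):

```python
import numpy as np

def case_ratio(new_cases: np.ndarray, cumulative_cases: np.ndarray, population: float) -> np.ndarray:
    """Ratio of new cases to the portion of the population not yet infected."""
    susceptible = np.maximum(population - cumulative_cases, 1.0)  # guard against division by zero
    return new_cases / susceptible

def ratio_to_new_cases(predicted_ratio: float, cumulative_cases: float, population: float) -> float:
    """Convert a predicted ratio back into an estimated count of new cases."""
    susceptible = max(population - cumulative_cases, 0.0)
    return predicted_ratio * susceptible
```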

The scope of our research, and of the challenge, is to build a model that predicts new cases as closely to ground truth as possible based on current NPIs and other dependencies. Extending these models to also prescribe NPIs appropriate for each country and territory will be very helpful to healthcare professionals around the world as vaccination rollout plans are implemented.

We are pleased to participate in this important challenge and look forward to continuing our work in the next phase: building the prescriptor model. We are excited to be a part of solving one of modern history's most difficult challenges.
