This repo is for exploring forecasting methods and tools for both COVID and flu. It is structured as a `targets` project, which makes it easy to run things in parallel and to cache results, and as an R package, which makes it easy to share code between different targets.
The pipeline runs on a schedule, and by ~10:45 AM PST you should find the new reports on https://delphi-forecasting-reports.netlify.app/. If not, see the instructions below for running it manually.
Define run parameters in your `.Renviron` file:
```sh
EPIDATR_USE_CACHE=true
# Choose a cache timeout for yourself. We want a long cache time, since we work with historical data.
EPIDATR_CACHE_MAX_AGE_DAYS=42
DEBUG_MODE=false
DUMMY_MODE=false
USE_SHINY=false
TAR_PROJECT=covid_hosp_explore
FLU_SUBMISSION_DIRECTORY=cache
COVID_SUBMISSION_DIRECTORY=cache
EXTERNAL_SCORES_PATH=legacy-exploration-scorecards.qs
AWS_S3_PREFIX=exploration
AUX_DATA_PATH=aux_data
```
- `EPIDATR_USE_CACHE` controls whether `epidatr` functions use the cache.
- `DEBUG_MODE` controls whether `targets::tar_make` is run with `callr_function = NULL`, which allows for debugging. This only works if parallelization has been turned off in `scripts/targets-common.R` by setting the default controller to serial on line 51.
- `DUMMY_MODE` controls whether all forecasters are replaced with a dummy. This is useful for testing a new pipeline.
- `USE_SHINY` controls whether we start a Shiny server after producing the targets.
- `TAR_PROJECT` controls which `targets` project is run by `run.R`. Likely either `covid_hosp_explore` or `flu_hosp_explore`.
- `FLU_SUBMISSION_DIRECTORY` and `COVID_SUBMISSION_DIRECTORY` give the paths where production forecasts are saved to be submitted to the respective hubs.
- `EXTERNAL_SCORES_PATH` controls where external scores are loaded from. If not set, external scores are not used.
- `AWS_S3_PREFIX` controls the prefix to use in the AWS S3 bucket (a prefix is a pseudo-directory in a bucket).
- `AUX_DATA_PATH` gives the path to the directory containing auxiliary data.
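For illustration, a minimal sketch of how `run.R` might consume these variables (the exact logic in `run.R` may differ; the `DEBUG_MODE` handling here is an assumption based on the descriptions above):

```r
# Hypothetical sketch; see run.R for the real entrypoint logic.
debug_mode <- tolower(Sys.getenv("DEBUG_MODE", "false")) == "true"
Sys.setenv(TAR_PROJECT = Sys.getenv("TAR_PROJECT", "covid_hosp_explore"))
if (debug_mode) {
  # callr_function = NULL runs the pipeline in the current R session,
  # so browser() calls and errors can be inspected directly.
  targets::tar_make(callr_function = NULL)
} else {
  targets::tar_make()
}
```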
Run the pipeline using:
```sh
# Install renv and R dependencies
make install

# Pull pre-scored forecasts from the AWS bucket
make pull

# Make forecasts
make prod-flu
make prod-covid
# The job output can be found in nohup.out
```

If there are errors, you can view them in an R session with (replace with the appropriate project):

```r
Sys.setenv(TAR_PROJECT = "covid_hosp_prod")
targets::tar_meta(fields = error, complete_only = FALSE)
```

```sh
# Automatically append the new reports to the site index and host the site on netlify
# (this requires the netlify CLI to be installed and configured, talk to Dmitry about this)
make update_site && make netlify

# Update weights until satisfied using *_geo_exclusions.csv, then rerun the make command above

# Submit (makes a commit, pushes to our fork, and makes a PR; this requires a GitHub token
# and the gh CLI to be installed and configured, talk to Dmitry about this)
make submit-flu
make submit-covid
```
- `run.R` and `Makefile`: the main entrypoint for all pipelines
- `R/`: R package code to be reused
- `scripts/`: plotting, code, and misc.
- `tests/`: package tests
- `covid_hosp_explore/` and `scripts/covid_hosp_explore.R`: a `targets` project for exploring covid hospitalization forecasters
- `flu_hosp_explore/` and `scripts/flu_hosp_explore.R`: a `targets` project for exploring flu hospitalization forecasters
- `covid_hosp_prod/` and `scripts/covid_hosp_prod.R`: a `targets` project for predicting covid hospitalizations
- `flu_hosp_prod/` and `scripts/flu_hosp_prod.R`: a `targets` project for predicting flu hospitalizations
- `forecaster_testing/` and `scripts/forecaster_testing.R`: a `targets` project for testing forecasters
Insert a `browser()` statement into the code you want to debug and then run:

```r
# use_crew = FALSE disables parallelization
tar_make(target_name, callr_function = NULL, use_crew = FALSE)
```
See this diagram. Double diamond objects represent "plates" (to evoke plate notation, but don't take the comparison too literally), which are used to represent multiple objects of the same type (e.g. different forecasters).
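As an illustration of the plate idea, a plate can be expressed with `tarchetypes::tar_map`, which stamps out one target per value; the forecaster names below are hypothetical, and the real pipeline definitions live in `scripts/`:

```r
library(targets)
library(tarchetypes)

# One `forecast_*` target is created per forecaster in `values`,
# forming a "plate" of structurally identical targets.
forecaster_plate <- tar_map(
  values = list(forecaster = rlang::syms(c("forecaster_a", "forecaster_b"))),
  tar_target(forecast, forecaster(joined_archive_data))
)
```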
The basic forecaster takes in an `epi_df`, does some pre-processing, runs an `epipredict` workflow, and then does some post-processing.
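A minimal sketch of that shape, using `epipredict::arx_forecaster` as a stand-in workflow (the pre- and post-processing steps here are hypothetical simplifications of what the forecasters in `R/` actually do):

```r
library(dplyr)
library(epipredict)

basic_forecaster <- function(epi_data, outcome, extra_sources = character(0), ahead = 7) {
  # Pre-processing: drop rows with a missing outcome (hypothetical step)
  cleaned <- epi_data %>% filter(!is.na(.data[[outcome]]))

  # epipredict workflow: a simple autoregressive forecaster
  result <- arx_forecaster(
    cleaned,
    outcome = outcome,
    predictors = c(outcome, extra_sources),
    args_list = arx_args_list(ahead = ahead)
  )

  # Post-processing: return just the predictions in a tidy format
  result$predictions
}
```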
An ensemble forecaster has two components: a list of existing forecasters it depends on, and a function that aggregates the forecasts those produce.
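A minimal sketch of that structure (the column names and the default aggregation are hypothetical, not this repo's actual ensembling code):

```r
library(dplyr)

ensemble_forecaster <- function(epi_data, forecasters, aggregate = median) {
  # Run every component forecaster on the same input data
  component_preds <- lapply(forecasters, function(f) f(epi_data))

  # Aggregate the forecasts quantile-by-quantile across components
  bind_rows(component_preds, .id = "forecaster") %>%
    group_by(geo_value, forecast_date, target_date, quantile) %>%
    summarize(value = aggregate(value), .groups = "drop")
}
```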
Some forecasters require a pre-trained component; an example is a forecaster with a sophisticated imputation method. Evaluating these has some thorns around training/testing splitting, though this variety may be foldable into the basic one.
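One hypothetical way such a forecaster could be shaped, with the training cutoff made explicit to avoid leakage (`fit_imputer()` and `impute()` are placeholder names, not functions in this repo):

```r
library(dplyr)

pretrained_forecaster <- function(epi_data, train_end_date, ...) {
  # Fit the imputation model only on data up to the cutoff ...
  train <- epi_data %>% filter(time_value <= train_end_date)
  imputer <- fit_imputer(train) # placeholder for the pre-trained component

  # ... then forecast on the imputed dataset, so the pre-trained component
  # never learns from data past the training cutoff.
  basic_forecaster(impute(imputer, epi_data), ...)
}
```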