This repo contains automatically updated evaluations for all models participating in the FluSight forecast hub, from the 2021-2022 season through the present. Evaluations always use the most recent versions/revisions of the surveillance and forecast data.
Up-to-date evaluations live in `/evaluations`. This directory is load-bearing: it backs the visualizations in the epistorm-dashboard hosted at https://fluforecast.epistorm.org/.
DO NOT MAKE CHANGES TO THE `/evaluations` DIRECTORY THAT YOU DO NOT WANT REFLECTED DOWNSTREAM.
Predictions for the 2021-2022 and 2022-2023 seasons come from a separate archive, which is scored by the `archive_evaluations` workflow. That workflow saves scores to `/evaluations/archive-2021-2023` and inserts them into the main evaluations files in `/evaluations`.
The `scratch_evaluations` workflow lets you manually trigger evaluations with the following inputs:
- `Models`: either `all`, or any number of model names, space-separated in a single string without quotes (default: `MOBS-GLEAM_FLUH`).
- `Dates`: either `all`, or any number of dates in `YYYY-MM-DD` format, space-separated in a single string without quotes (default: `all`).
This workflow writes evaluations for the specified models and dates to the `/scratch` directory, overwriting existing files. Run the workflow via the GUI in the GitHub Actions tab (or from the command line, as sketched below).
This workflow does not score the archived seasons (2021-2023). It uses the most up-to-date data from the FluSight forecast hub at the time it is triggered.
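As a minimal sketch, the workflow could also be triggered with the GitHub CLI. The input names `models` and `dates` are assumptions inferred from the descriptions above (check `.github/workflows/scratch_evaluations.yml` for the exact keys), and the model name and dates are example values:

```bash
# Hypothetical invocation via the GitHub CLI; input names are assumed,
# not confirmed against the workflow file.
gh workflow run scratch_evaluations.yml \
  -f models="MOBS-GLEAM_FLUH" \
  -f dates="2023-11-04 2023-11-11"
```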
The `update_evaluations` workflow runs on a schedule to keep the evaluations in `/evaluations` current. You can manually trigger an out-of-schedule update via the GitHub Actions tab.
The update schedule is:
- Every Thursday, every 3 hours from 6:00 AM to 9:00 PM UTC
- Every Wednesday, Friday, and Saturday at 9:00 PM UTC
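Translated into GitHub Actions cron syntax, that schedule would look roughly like the sketch below. This is an assumed reconstruction of the `schedule` trigger in `update_evaluations.yml`, not a copy of it:

```yaml
# Assumed sketch of the schedule trigger in update_evaluations.yml.
on:
  schedule:
    - cron: '0 6-21/3 * * 4'  # Thursdays: every 3 hours, 06:00-21:00 UTC
    - cron: '0 21 * * 3,5,6'  # Wednesday, Friday, Saturday at 21:00 UTC
  workflow_dispatch:          # permits manual, out-of-schedule runs
```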
Install the conda environment from the `.yml` file:

```
conda env create -f conda_requirements.yml
```

and activate it:

```
conda activate epistorm-evaluations
```
Open `epistorm_evaluations.ipynb` with your preferred editor, e.g. `jupyter lab`. Alternatively, run

```
python epistorm_evaluations.py --mode scratch --models ... --dates ...
```

replacing the ellipses with the desired inputs.
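For example, a scratch run over one model and two dates might look like the following. The model name and dates are illustrative, and passing multiple values as a single quoted, space-separated string is an assumption carried over from the workflow input format described above:

```bash
# Example values only; flag semantics assumed from the description above.
python epistorm_evaluations.py \
  --mode scratch \
  --models "MOBS-GLEAM_FLUH" \
  --dates "2023-11-04 2023-11-11"
```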
- `epistorm_evaluations.ipynb` is a working evaluations notebook.
- `epistorm_evaluations.py` is the automated evaluations script.
- `.github/workflows/update_evaluations.yml` runs `epistorm_evaluations.py` in update mode on a schedule or on manual initiation, using `data_retrieval.sh` to track updates, and uploads the new evaluations to `/evaluations`. This workflow is responsible for maintaining up-to-date evaluations.
- `.github/workflows/scratch_evaluations.yml` runs `epistorm_evaluations.py` in scratch mode for the specified models and dates. Results are uploaded to `/scratch` and overwrite existing files in this directory.
- `.github/workflows/archive-evaluations.yml` runs `epistorm_evaluations.py` on 2021-2023 archive data. Results are uploaded to `/evaluations/archive-2021-2023` as well as inserted into the main evaluations files in `/evaluations`.
- `/evaluations` contains up-to-date evaluations of all models.
- `/evaluations/archive-2021-2023` contains evaluations for archive seasons.
- `/scratch` contains scratch evaluations output.
- `/Flusight-forecast-hub` is the submodule repo containing all data.
- `/data` contains copied data for use in automated evaluations with update tracking.
- `/deprecated` contains old versions of files.
- `data_retrieval.sh` copies and tracks updated data for use in the update workflow.
- `updated_forecasts.csv` contains paths to forecast files which have been updated since the last evaluations run, as recorded by `data_retrieval.sh`.
- `conda_requirements.yml` is for running locally with a conda environment.
- `pip_requirements.txt` is for running on GitHub Actions (or locally) with pip.
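Since `Flusight-forecast-hub` is a git submodule, a fresh local clone likely needs it initialized before any data is present. The standard git commands for that are:

```bash
# Clone with the Flusight-forecast-hub submodule populated
git clone --recurse-submodules <repository-url>

# Or, in an already-cloned repo
git submodule update --init --recursive
```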