This repository contains the code for the paper Patient-Adaptive Echocardiography using Cognitive Ultrasound. For more information, please refer to the project page.
Find the weights of our model on Huggingface.
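To fetch the weights programmatically, something like the following should work; the repo id below is a placeholder, substitute the one from the Huggingface link above:

```python
# Sketch: download the model weights with huggingface_hub.
# NOTE: "<org>/<model-name>" is a placeholder, not the actual repo id.
from huggingface_hub import snapshot_download

weights_dir = snapshot_download(repo_id="<org>/<model-name>")
print(weights_dir)  # local directory containing the downloaded weights
```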
Copy the example configuration files and edit them:

```bash
cp .env.example .env  # edit!
cp users.yaml.example users.yaml  # edit!
```

Install this repository in editable mode:

```bash
pip install -e .
```

Install zea, the cognitive ultrasound toolbox, preferably through the submodule in this repo:

```bash
git submodule update --init --recursive
pip install -e zea
```

Install the other dependencies for this repo. The Keras version is captured first and reinstalled afterwards, so that these extra packages cannot silently change it:

```bash
KERAS_VER=$(python3 -c "import keras; print(keras.__version__)")
pip install tf2jax==0.3.6 pandas jaxwt jax
pip install keras==${KERAS_VER}
```

Alternatively, we have provided a Dockerfile to build a Docker image with all dependencies installed.
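As a quick post-install sanity check (a minimal sketch, just imports and version prints):

```python
# Verify the environment: the imports should succeed and the Keras version
# should match the ${KERAS_VER} captured above.
import jax
import keras
import zea  # noqa: F401  (checks that the editable install is importable)

print("keras  :", keras.__version__)
print("jax    :", jax.__version__)
print("backend:", keras.backend.backend())  # Keras 3 backend, set via KERAS_BACKEND
```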
- Download the EchoNet-Dynamic dataset.
- [Optionally] Download the train / validation / test split we used for the EchoNet-Dynamic dataset.
- Convert the dataset to the polar format (the sketch after this list illustrates the idea behind the conversion):

```bash
python -m zea.data.convert echonet /path/to/echonet-dynamic /path/to/echonet-dynamic-polar --split_path /path/to/split.yaml
```
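The conversion resamples the scan-converted (cartesian) EchoNet frames onto an angle-by-depth polar grid. Purely to illustrate the idea, and not the zea implementation, here is a sketch under assumed geometry (apex at the top-center of the frame, 90° field of view):

```python
# Illustration only -- inverse scan conversion of a cartesian ultrasound
# frame onto a polar (depth x angle) grid. The geometry is assumed here,
# not taken from zea.
import numpy as np
from scipy.ndimage import map_coordinates

def cartesian_to_polar(img, n_angles=128, n_depths=256, fov_deg=90.0):
    h, w = img.shape
    apex_row, apex_col = 0.0, w / 2.0  # assumption: apex at top-center
    thetas = np.deg2rad(np.linspace(-fov_deg / 2, fov_deg / 2, n_angles))
    radii = np.linspace(0.0, h - 1.0, n_depths)
    # cartesian (row, col) coordinates of every (depth, angle) sample point
    rows = apex_row + radii[:, None] * np.cos(thetas)[None, :]
    cols = apex_col + radii[:, None] * np.sin(thetas)[None, :]
    return map_coordinates(img, [rows, cols], order=1, mode="constant")

polar = cartesian_to_polar(np.zeros((112, 112)))  # EchoNet frames are 112x112
print(polar.shape)  # (256, 128): depth x angle
```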
To train the video diffusion model, use the models/train_diffusion.py script. The time-conditional U-Net architecture implemented by zea is used for the denoiser (the sketch below illustrates what time-conditional means here). You can modify architectural and training hyperparameters in the config file configs/training/echonet_diffusion_3_frames.yaml.

```bash
python models/train_diffusion.py
```
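To give a feel for the time conditioning, and emphatically not zea's actual U-Net, here is a toy Keras denoiser in which a sinusoidal embedding of the diffusion time t is projected and broadcast-added to the convolutional features:

```python
# Toy illustration of time conditioning, NOT zea's U-Net: embed the
# diffusion time t and add it (broadcast) to the convolutional features.
import numpy as np
import keras
from keras import layers, ops

def sinusoidal_embedding(t, dim=64):
    # t: (batch, 1) diffusion times; returns (batch, dim) embeddings
    freqs = np.exp(np.linspace(0.0, 6.0, dim // 2)).astype("float32")
    angles = t * freqs[None, :]
    return ops.concatenate([ops.sin(angles), ops.cos(angles)], axis=-1)

noisy = keras.Input(shape=(112, 112, 1))
t = keras.Input(shape=(1,))
emb = layers.Dense(32, activation="swish")(sinusoidal_embedding(t))
x = layers.Conv2D(32, 3, padding="same")(noisy)
x = x + ops.reshape(emb, (-1, 1, 1, 32))  # inject time into the features
x = layers.Conv2D(32, 3, padding="same", activation="swish")(x)
noise_pred = layers.Conv2D(1, 3, padding="same")(x)
denoiser = keras.Model([noisy, t], noise_pred)
```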
The main file to use for inference is active_sampling_temporal.py, in combination with a config file:

```bash
python active_sampling_temporal.py --config "configs/echonet_3_frames.yaml"
```

For the 3D model, use active_sampling_temporal_3d.py:
```bash
python active_sampling_temporal_3d.py --config "configs/elevation_3d.yaml"
```

For educational purposes, we have also created a simplified version of our algorithm in this notebook.
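At a very high level, the idea is to condition the diffusion model on the scan lines acquired so far, draw a few posterior samples, and acquire next the lines where those samples disagree most. A heavily simplified sketch, in which `sample_posterior` is a hypothetical stand-in for the diffusion model's posterior sampler (see the notebook for the real algorithm):

```python
# Heavily simplified variance-based active line selection -- a conceptual
# sketch only. `sample_posterior` is hypothetical: it should return one
# posterior image reconstruction of shape (n_lines, depth) given the mask.
import numpy as np

def select_next_lines(sample_posterior, mask, n_select=4, n_samples=8):
    """Pick the unmeasured scan lines where posterior samples disagree most.

    mask: boolean array of shape (n_lines,), True where a line was acquired.
    """
    samples = np.stack([sample_posterior(mask) for _ in range(n_samples)])
    line_var = samples.var(axis=0).mean(axis=-1)  # per-line uncertainty
    line_var[mask] = -np.inf                      # never re-acquire a line
    return np.argsort(line_var)[-n_select:]       # most uncertain lines
```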
In the benchmarking_scripts/ folder, we have provided scripts to reproduce the results from the paper. These scripts save their output data to a folder, which can then be visualized using the scripts in the plotting/ folder.
