Package to compute contrast information from a CT image, part of BOA (Body and Organ Analysis). The package uses the open-source software TotalSegmentator to segment important anatomical landmarks, which are then used to create features for a machine learning model that predicts the contrast information.
If you use this package, please cite the following paper:
Baldini G, Hosch R, Schmidt CS, et al. Addressing the Contrast Media Recognition
Challenge: A Fully Automated Machine Learning Approach for Predicting Contrast
Phases in CT Imaging. Invest Radiol. Published online March 4, 2024. doi:10.1097/RLI.0000000000001071
With pip:

```shell
pip install boa-contrast
```

With uv:

```shell
uv add boa-contrast
```

These commands install only the basic package without TotalSegmentator. If you also want to install TotalSegmentator as an optional dependency, use:

```shell
pip install "boa-contrast[totalsegmentator]"
```

```shell
uv add "boa-contrast[totalsegmentator]"
```

Note
The current optional dependency targets totalsegmentator==1.5.7. Support for
TotalSegmentator 2.12.0 will be available soon.
However, TotalSegmentator can also be run through Docker, in which case it does not need to be installed.
Warning
The TotalSegmentator Docker image for version 1.5.7 is no longer available.
As a result, the Docker-based TotalSegmentator functionality currently does
not work with that version.
```shell
contrast-recognition --help
```

Given a CT image and a folder where the TotalSegmentator segmentations should be stored, you can run the tool with the following command:

```shell
contrast-recognition [-h] \
    --ct-path CT_PATH \
    --segmentation-folder SEGMENTATION_FOLDER \
    [--docker] \
    [--user-id USER_ID] \
    [--device-id DEVICE_ID] \
    [-v]
```

You can run it with Docker by using the --docker flag. If you are using
Docker, you need to specify your user ID with the --user-id flag; otherwise
you will have to change the ownership of the segmentation files afterwards.
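On Linux and macOS, you can look up the numeric user ID to pass to --user-id with `id -u`:

```shell
# Prints your numeric user ID, suitable for the --user-id flag.
id -u
```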
Warning
The Docker-based TotalSegmentator workflow is currently affected by the
missing 1.5.7 Docker image, so this option will not work until the package
is updated to the newer TotalSegmentator release.
If you are using a GPU, you can specify the device ID using the --device-id
flag.
You can enable verbosity with the -v flag.
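Putting these flags together, a full Docker-based invocation might look as follows. The paths here are placeholders; replace them with your own CT file and output folder:

```shell
# Hypothetical example paths; adjust to your data.
contrast-recognition \
    --ct-path /data/ct/scan_001.nii.gz \
    --segmentation-folder /data/segmentations/scan_001 \
    --docker \
    --user-id "$(id -u)" \
    --device-id 0 \
    -v
```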
To avoid downloading the TotalSegmentator weights every time, you can specify their
location using the TOTALSEG_WEIGHTS_PATH environment variable.
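For example (the path below is a placeholder; point it at the folder where you keep the weights):

```shell
# Hypothetical weights location; adjust to wherever your weights are stored.
export TOTALSEG_WEIGHTS_PATH="$HOME/.totalsegmentator"
```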
A sample output looks as follows:

```
IV Phase: NON_CONTRAST
Contrast in GIT: NO_CONTRAST_IN_GI_TRACT
```
If you want to compute the segmentations first, call compute_segmentation:

```python
from boa_contrast import compute_segmentation

compute_segmentation(
    ct_path=...,  # The path to the CT
    segmentation_folder=...,  # The root folder where the segmentation should be stored
    device_id=...,  # The ID of the GPU device, or -1 for CPU
    user_id=...,  # Your user ID, so that Docker runs in user mode
    compute_with_docker=False,  # Whether to use Docker or not
)
```

Once the segmentation is available, call predict:
```python
from boa_contrast import predict

result_dict = predict(
    ct_path=...,  # The path to the CT
    segmentation_folder=...,  # The path to this CT's segmentation
)
```

Output:
```python
{
    "phase_ensemble_prediction": 0,
    "phase_ensemble_predicted_class": "NON_CONTRAST",
    "phase_ensemble_probas": array(
        [
            9.89733540e-01,
            3.60637282e-04,
            4.79974664e-04,
            5.55973168e-04,
            8.86987492e-03,
        ]
    ),
    "git_ensemble_prediction": 0,
    "git_ensemble_predicted_class": "NO_CONTRAST_IN_GI_TRACT",
    "git_ensemble_probas": array(
        [
            9.99951577e-01,
            4.84187825e-05,
        ]
    ),
}
```

To serialize the result to JSON, use boa_contrast.default:
```python
import json

from boa_contrast import default

with open("prediction.json", "w", encoding="utf-8") as f:
    json.dump(result_dict, f, indent=2, default=default)
```
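For reference, the `*_prediction` fields in the sample output appear to line up with the argmax of the corresponding probability arrays; a quick check with the values copied from the output above:

```python
import numpy as np

# Probability arrays copied from the sample result above.
phase_probas = np.array(
    [9.89733540e-01, 3.60637282e-04, 4.79974664e-04, 5.55973168e-04, 8.86987492e-03]
)
git_probas = np.array([9.99951577e-01, 4.84187825e-05])

# Both argmax indices are 0, matching the sample's
# phase_ensemble_prediction and git_ensemble_prediction values.
print(int(np.argmax(phase_probas)))  # 0
print(int(np.argmax(git_probas)))  # 0
```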