Updated on August 3, 2025. This pipeline was further enhanced in a paper published in NeuroImage.
This is the pipeline proposed in the paper published in Fluids and Barriers of the CNS. This repository offers two ways to execute the ChP segmentation pipeline: running the Python code directly, or using Docker.

- The following input forms are accepted:
  - Single NIfTI file (`.nii` or `.nii.gz`)
  - Folder containing a `.dcm` series
  - Folder containing multiple NIfTI files
  - `.txt` file where each row contains a NIfTI file path
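For batch processing, the `.txt` input list (one NIfTI path per row, as above) can be generated with a short script. This is only an illustrative sketch; `write_input_list` is a hypothetical helper, not part of the repository:

```python
from pathlib import Path

def write_input_list(data_dir, out_txt="inputs.txt"):
    """Collect all NIfTI files under data_dir and write one path per row,
    matching the .txt input format accepted by pipeline.py."""
    paths = sorted(
        str(p) for p in Path(data_dir).rglob("*")
        if p.name.endswith((".nii", ".nii.gz"))
    )
    Path(out_txt).write_text("\n".join(paths) + "\n")
    return paths
```

The resulting file can then be passed directly via `--input`.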
Default path: results/
ChP segmentation is saved in results/cp/3_orig_T1_space/.
You can specify a custom output path.
For detailed intermediate results, jump to Output files structure.
docker pull batjoker1/chp-seg:v2
- CPU Version:
docker run -v $OUTPUT_FOLDER_ON_HOST:/app/results -v $DATA_PATH_ON_HOST:$DATA_PATH_IN_CONTAINER -it --rm batjoker1/chp-seg:v2 bash
- GPU Version (requires NVIDIA driver support):
docker run --gpus all -v $OUTPUT_FOLDER_ON_HOST:/app/results -v $DATA_PATH_ON_HOST:$DATA_PATH_IN_CONTAINER -it --rm batjoker1/chp-seg:v2 bash
Once inside the container, run:
python pipeline.py --input File/Directory
Ensure that TensorFlow >2.4 is installed. Then install the required packages listed in requirements.txt (no strict version requirements) and DeepBrain (pip install deepbrain --no-deps).
Clone the repository and navigate to the project directory:
git clone https://github.com/princeleeee/ChP-Seg.git
cd ChP-Seg
Modify the DeepBrain package to enable TensorFlow 1.x compatibility; this is necessary because DeepBrain was originally implemented with TensorFlow 1.x. See the related issue for more details.
For example, the following command enables TensorFlow 1.x compatibility:
sed -i '1s/.*/import tensorflow.compat.v1 as tf/' /usr/local/lib/python3.8/site-packages/deepbrain/extractor.py # Necessary since DeepBrain is implemented with TensorFlow 1.x
Adapt the path in the same way for your own environment.
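On systems without `sed` (e.g. Windows), the same one-line patch can be applied in Python. `patch_deepbrain` is an illustrative helper, not part of the repository; point it at the `deepbrain/extractor.py` location in your own site-packages (e.g. as reported by `pip show deepbrain`):

```python
from pathlib import Path

def patch_deepbrain(extractor_path):
    """Replace the first line of deepbrain's extractor.py with the
    TF1-compatibility import, mirroring the sed command above."""
    path = Path(extractor_path)
    lines = path.read_text().splitlines()
    lines[0] = "import tensorflow.compat.v1 as tf"
    path.write_text("\n".join(lines) + "\n")
```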
Download the pre-trained deep learning model weights from Google Drive and place them in the weights folder:
| Version | Filename | Description |
|---|---|---|
| v1 | All_data_trainweights.184-0.96731.h5 | ChP model weights |
| v1 | All_data_trainweights.200-0.05769.h5 | LVEN model weights |
| v2 | 20241210-220442_all_data_trainbest_weights.h5 | ChP model weights (updated) |
| v2 | All_data_trainweights.200-0.05769.h5 | LVEN model weights (unchanged) |
mkdir weights
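After downloading, a quick check that the expected files actually landed in weights/ can save a failed run. The filenames below are the v2 set from the table; `check_weights` is an illustrative helper, not part of the pipeline:

```python
from pathlib import Path

def check_weights(weights_dir="weights", required=(
        "20241210-220442_all_data_trainbest_weights.h5",  # ChP weights (v2)
        "All_data_trainweights.200-0.05769.h5",           # LVEN weights
)):
    """Return the list of expected weight files missing from weights_dir."""
    d = Path(weights_dir)
    return [name for name in required if not (d / name).exists()]
```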
Once everything is set up, you can run the pipeline on a sample input:
python pipeline.py --input demo/I812923.nii.gz # Example run
Or, with the updated model weights saved in the weights/ folder:
python pipeline.py --input demo/I812923.nii.gz --ven_weights weights/All_data_trainweights.200-0.05769.h5 --cp_weights weights/20241210-220442_all_data_trainbest_weights.h5
results/cp/3_orig_T1_space is the path of the ChP segmentation results for the input files.
results/
├── file_collections.txt # all files input to the pipeline.
│
├── brain/ # results of the preprocessing and skull stripping stage.
│ │
│ ├── 0_resample/ # images reoriented to RAS+ and resampled to 1mm^3.
│ │ ├── crop_range.txt
│ │ └── xxx.nii.gz
│ │
│ ├── 1_check/ # 3-plane .png snapshots to check the crop operation on 0_resample
│ ├── 1_err/ # 3-plane .png snapshots to check the crop operation on 0_resample
│ ├── 1_img/ # cropped skull-stripped images from 0_resample, size: 160*200*160; crop range records are in brain/0_resample/crop_range.txt
│ ├── 1_mask/ # brain masks on 1_img
│ │
│ └── 2_resample_inverse/ # restore images in 1_img to original image space.
│
├── ventricle/ # ventricle segmentation results folder
│ │
│ ├── 0_mask/ # segmentation results of lateral ventricles, size: 160*200*160
│ │
│ ├── 1_img_crop/ # crop brain/1_img to size 96*96*80
│ ├── 1_mask_crop/ # crop ventricle/0_mask to size 96*96*80
│ │
│ ├── 2_resampledT1_space/ # ventricle segmentation masks restored to match brain/0_resample
│ │
│ ├── 3_orig_T1_space/ # ventricle segmentation masks restored to match brain/2_resample_inverse
│ │
│ └── crop_range.txt # crop range records used to generate ventricle/1_img_crop and ventricle/1_mask_crop
│
└── cp/ # choroid plexus segmentation results folder
    │
    ├── 0_mask/ # segmentation results of choroid plexus, size 96*96*80
    │
    ├── 1_mask_refine/ # refined segmentation results of cp/0_mask
    │
    ├── 2_resampledT1_space/ # ChP segmentations matching images in brain/0_resample
    │
    └── 3_orig_T1_space/ # ChP segmentations matching images in brain/2_resample_inverse
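Since the masks in results/cp/3_orig_T1_space/ are binary segmentations in the original T1 space, a typical downstream step is computing ChP volume. A minimal sketch, assuming the mask and its voxel dimensions (in mm) have already been loaded into NumPy arrays (e.g. via nibabel, not shown here); `mask_volume_mm3` is an illustrative helper, not part of the pipeline:

```python
import numpy as np

def mask_volume_mm3(mask, voxel_dims):
    """Volume of a binary segmentation mask: voxel count times voxel volume.

    mask       -- 3D array, nonzero where the structure (e.g. ChP) is labeled
    voxel_dims -- (dx, dy, dz) voxel edge lengths in millimetres
    """
    voxel_vol = float(np.prod(voxel_dims))
    return float(np.count_nonzero(mask)) * voxel_vol
```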
If you find our work helpful, please consider citing:
@article{li2024associations,
title={Associations between the choroid plexus and tau in Alzheimer’s disease using an active learning segmentation pipeline},
author={Li, Jiaxin and Hu, Yueqin and Xu, Yunzhi and Feng, Xue and Meyer, Craig H and Dai, Weiying and Zhao, Li and Alzheimer’s Disease Neuroimaging Initiative},
journal={Fluids and Barriers of the CNS},
volume={21},
number={1},
pages={56},
year={2024},
publisher={Springer}
}
@article{LI2025121392,
title = {Morphological changes of the choroid plexus in the lateral ventricle across the lifespan: 5551 subjects from fetus to elderly},
journal = {NeuroImage},
volume = {318},
pages = {121392},
year = {2025},
issn = {1053-8119},
doi = {https://doi.org/10.1016/j.neuroimage.2025.121392},
url = {https://www.sciencedirect.com/science/article/pii/S1053811925003957},
author = {Jiaxin Li and Yuxuan Gao and Yunzhi Xu and Weiying Dai and Yueqin Hu and Xue Feng and Dan Wu and Li Zhao}
}