A segmentation pipeline for the GlaS MICCAI 2015 Gland Segmentation dataset, featuring a UNet baseline and an extendable interface for transformer-based architectures such as Swin-UNet. The project includes training, evaluation, visualization utilities, and TensorBoard logging.
Hardware Note: Experiments were run on a single NVIDIA H800 80 GB GPU, but the code should run on any CUDA-capable GPU with appropriate batch size adjustments.
Setup:
cd path-seg-model
conda create -n pathseg python=3.10
conda activate pathseg
pip install -r requirements.txt

Project structure:
path-seg-model/
├── main.py # Entry point for training / evaluation
├── train.py # Training loop logic
├── eval.py # Evaluation / inference script
├── requirements.txt # Python dependencies
├── data/
│ ├── __init__.py
│ └── dataset.py # Custom Dataset class for GlaS
├── glas/
│ └── Warwick_QU_Dataset/ # Root directory for GlaS data
├── models/
│ ├── __init__.py
│ ├── unet.py # UNet implementation
│ └── swinunet.py # Swin-UNet implementation
├── checkpoints/
│ ├── unet_checkpoint.pth
│ └── swinunet_checkpoint.pth
├── utils/
│ ├── __init__.py
│ ├── metrics.py # Dice, IoU, etc.
│ └── viz.py # Visualization utilities
└── logs/ # TensorBoard logs (event files)

GlaS MICCAI 2015 Dataset:
https://www.kaggle.com/datasets/sani84/glasmiccai2015-gland-segmentation
Place the extracted dataset in:
glas/Warwick_QU_Dataset/
Expected structure:
glas/
└── Warwick_QU_Dataset/
├── Train/
│ ├── images/*.bmp
│ └── masks/*_anno.bmp
└── Test/
├── images/*.bmp
└── masks/*_anno.bmp

Training:
python main.py

Optional example:
python main.py --model unet --epochs 100 --batch-size 4

Evaluation:
python eval.py --model unet --checkpoint checkpoints/unet_checkpoint.pth

TensorBoard:
tensorboard --logdir logs

Open your browser at:
http://localhost:6006
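The expected dataset layout pairs each image in images/ with a *_anno.bmp mask in masks/. As a minimal sketch of how data/dataset.py might enumerate those pairs (list_pairs is a hypothetical helper, not necessarily the project's actual API):

```python
import glob
import os

def list_pairs(split_dir):
    """Return (image_path, mask_path) pairs for a Train/ or Test/ split.

    Assumes the GlaS layout shown above: images/<name>.bmp is annotated
    by masks/<name>_anno.bmp.
    """
    pairs = []
    for img in sorted(glob.glob(os.path.join(split_dir, "images", "*.bmp"))):
        stem = os.path.splitext(os.path.basename(img))[0]
        mask = os.path.join(split_dir, "masks", stem + "_anno.bmp")
        if os.path.exists(mask):  # skip images with no annotation file
            pairs.append((img, mask))
    return pairs
```

A Dataset class would then load each pair (e.g. with PIL) and apply transforms in __getitem__.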
- UNet: models/unet.py
- Swin-UNet: models/swinunet.py
Architectures can be extended by adding new files under models/ and
integrating them in main.py.
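One common way to wire new architectures into main.py is a small registry, for example in models/__init__.py. This is only a sketch of the idea, not the project's actual mechanism; register_model and build_model are hypothetical names:

```python
# Hypothetical model registry; the project may integrate models differently.
MODEL_REGISTRY = {}

def register_model(name):
    """Class decorator mapping a CLI name (e.g. --model unet) to a class."""
    def decorator(cls):
        MODEL_REGISTRY[name] = cls
        return cls
    return decorator

@register_model("unet")
class UNet:  # stand-in; the real implementation lives in models/unet.py
    def __init__(self, in_channels=3, num_classes=2):
        self.in_channels = in_channels
        self.num_classes = num_classes

def build_model(name, **kwargs):
    """Instantiate a registered model by name, failing loudly otherwise."""
    if name not in MODEL_REGISTRY:
        raise KeyError(f"unknown model '{name}'; available: {sorted(MODEL_REGISTRY)}")
    return MODEL_REGISTRY[name](**kwargs)
```

With this pattern, adding a file under models/ plus one decorator line makes the new architecture selectable from the command line.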
Currently implemented or planned:
- Dice Coefficient
- Intersection over Union (IoU)
See utils/metrics.py for implementation details.
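For reference, on binary masks Dice and IoU reduce to a few NumPy lines. This is a sketch assuming binary (0/1) arrays; the exact utils/metrics.py implementation may differ (e.g. per-batch averaging, soft probabilities):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2*|A & B| / (|A| + |B|); eps guards against empty masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou_score(pred, target, eps=1e-7):
    """IoU = |A & B| / |A | B| on binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)
```

Note that Dice and IoU are monotonically related (Dice = 2*IoU / (1 + IoU)), so they rank predictions identically but Dice is numerically higher for imperfect overlaps.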
If you encounter any issues, please open a GitHub issue or email: