Official PyTorch implementation of the paper
"Cascaded Diffusion Model and Segment Anything Model for Medical Image Synthesis via Uncertainty-Guided Prompt Generation and Multi-Level Prompt Interaction"
(IPMI 2025)
DM-SAM is a novel framework that integrates the Diffusion Model (DM) and the Segment Anything Model (SAM) for medical image synthesis.
The key idea is to leverage the uncertainty of diffusion model outputs as prompts to guide SAM-based image synthesis refinement.
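To illustrate the uncertainty-as-prompt idea, the sketch below derives a voxel-wise uncertainty map from multiple stochastic diffusion samples and picks the most uncertain locations as point prompts. This is a minimal, hypothetical sketch: the function names, the use of standard deviation as the uncertainty measure, and the top-k point-prompt selection are illustrative assumptions, not the released DM-SAM code.

```python
# Hypothetical sketch: deriving SAM-style prompts from diffusion-model
# uncertainty. Assumes N stochastic reverse-diffusion samples of the same
# target image; names and choices here are illustrative, not from the repo.
import numpy as np

def uncertainty_map(dm_samples: np.ndarray) -> np.ndarray:
    """Voxel-wise uncertainty as the std-dev across N diffusion samples.

    dm_samples: array of shape (N, H, W).
    """
    return dm_samples.std(axis=0)

def point_prompts(unc: np.ndarray, top_k: int = 5) -> np.ndarray:
    """Pick the top-k most uncertain pixel coordinates as point prompts."""
    flat = np.argsort(unc, axis=None)[-top_k:]
    ys, xs = np.unravel_index(flat, unc.shape)
    return np.stack([xs, ys], axis=1)  # (top_k, 2) in (x, y) order

# Toy usage: 8 synthetic "samples" of a 64x64 image with one
# deliberately high-variance region.
rng = np.random.default_rng(0)
samples = rng.normal(size=(8, 64, 64))
samples[:, 30:34, 30:34] += rng.normal(scale=3.0, size=(8, 4, 4))
unc = uncertainty_map(samples)
prompts = point_prompts(unc, top_k=5)
print(prompts.shape)  # (5, 2)
```

In this framing, the prompts would then be passed to a SAM-based refinement stage to focus correction on the regions where the diffusion model is least confident.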

Figure: Overview of the proposed DM-SAM framework.
```shell
conda create -n dmsam python=3.10 -y
conda activate dmsam
pip install -r requirements.txt
```

Experiments in this paper were conducted on three publicly available datasets:
- BraSyn 2023 — 1,470 MRI scans of brain tumor patients for multi-contrast MRI synthesis.
- SynthRAD 2023 — 120 paired T1CE–CT brain scans for MRI-to-CT synthesis.
- SynthRAD 2025 — 258 paired CBCT–CT scans for thoracic cancer radiotherapy planning.
```shell
python preprocess/BraSyn.py
python DM_train.py
python DM_inference.py
python SAM_train.py
python SAM_test.py
```

If you find this repository useful, please cite:
```bibtex
@inproceedings{pang2025cascaded,
  title={Cascaded Diffusion Model and Segment Anything Model for Medical Image Synthesis via Uncertainty-Guided Prompt Generation},
  author={Pang, Haowen and Hong, Xiaoming and Zhang, Peng and Ye, Chuyang},
  booktitle={International Conference on Information Processing in Medical Imaging},
  pages={203--217},
  year={2025},
  organization={Springer}
}
```