This repository contains the implementation of WEG-MoE from the paper "Explainable Classifier for Malignant Lymphoma Subtyping via Cell Graph and Image Fusion" published in MICCAI 2025.
WEG-MoE is a novel approach for malignant lymphoma subtyping that combines cell graph information and histopathological images through a mixture of experts architecture with weak supervision. The model leverages both cellular spatial relationships and visual features to provide explainable classifications of lymphoma subtypes (in this implementation, DLBCL, FL, and Reactive).
WEG-MoE consists of three main components:
- Cell Graph Expert (GIN): Processes cell graph data to capture spatial relationships between cells
- Image Expert (UNI2): Processes histopathological images using universal image features
- Gating Network: Learns to combine outputs from both experts based on graph features
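Conceptually, the gate produces per-expert weights from graph features and blends the two experts' class logits. The following is a minimal, self-contained sketch of that idea only; the class name, dimensions, and softmax gate are illustrative assumptions, not the repo's actual `network.py` API:

```python
import torch
import torch.nn as nn

class TwoExpertGate(nn.Module):
    """Illustrative two-expert mixture: a small gate over graph features
    produces convex weights that blend the experts' per-class logits."""

    def __init__(self, graph_feat_dim: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(graph_feat_dim, 2),  # one weight per expert
            nn.Softmax(dim=-1),
        )

    def forward(self, graph_logits, image_logits, graph_feats):
        w = self.gate(graph_feats)          # (batch, 2), rows sum to 1
        return w[:, :1] * graph_logits + w[:, 1:] * image_logits

moe = TwoExpertGate(graph_feat_dim=64)
graph_logits = torch.randn(4, 3)   # Cell Graph Expert logits (DLBCL/FL/Reactive)
image_logits = torch.randn(4, 3)   # Image Expert logits
graph_feats = torch.randn(4, 64)   # pooled graph features fed to the gate
fused = moe(graph_logits, image_logits, graph_feats)
print(fused.shape)  # torch.Size([4, 3])
```

Because the gate weights are convex, the fused logits always lie between the two experts' logits, which is what makes the per-expert contribution inspectable.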
- `weg_moe/`: Main WEG-MoE model implementation
  - `network.py`: Core WEG-MoE architecture with gating mechanism
  - `train_ddp.py`: Training script with distributed data parallel support
  - `inference_and_evaluation.py`: Inference and evaluation pipeline
- `uni_modal/GIN/`: Graph Isomorphism Network (Cell Graph Expert)
  - `network.py`: GIN implementation for processing cell graphs
  - `train_ddp.py`: Training script for the graph expert
- `uni_modal/UNI2/`: Universal Image Encoder (Image Expert)
  - `network.py`: UNI2 implementation for histopathological image processing
  - `train_ddp.py`: Training script for the image expert
  - `_pre_extract_uni2_embedding.py`: Utility for extracting UNI features
- `modules/`: Shared utilities and dataset handling
  - `TransAdditiveModel.py`: Transformer-based additive MIL model
  - `graph_dataset.py`: Graph dataset processing utilities
  - `image_dataset.py`: Image dataset utilities
  - `image_graph_dataset.py`: Combined image and graph dataset handling
- `trained_weight/`: Pre-trained model weight files
  - `WEG_MoE/model.pth`: Trained WEG-MoE model
  - `GIN/model.pth`: Pre-trained GIN expert weights
  - `UNI2/model.pth`: Pre-trained UNI2 expert weights
Note: All models are trained on our proprietary malignant lymphoma H&E stained whole slide image (WSI) dataset, which is not publicly available.
This implementation requires cell graph data to be prepared for each patch extracted from WSIs. The cell graphs must be represented as PyTorch Geometric (PyG) labeled graphs with the following specifications:

- **Node features (`x`)**: One-hot encoded cell types corresponding to each cell in the patch. In our setting, the cell types are (in order):
  - `others` (index 0): Other cell types
  - `lbc` (index 1): Large B-cells
  - `cc` (index 2): Centroblasts/Centrocytes
  - `rm` (index 3): Reactive/Microenvironment cells
- **Edge structure**: Edges represent spatial relationships between cells within each patch using a radius graph with r = 60 pixels. Edge weights are not used in this implementation, but edge attributes (`edge_attr`) contain the Euclidean distances between connected cells.
- **Graph format**: Each patch should correspond to one PyG `Data` object containing the cell graph for that spatial region of the WSI. The expected data structure includes:
  - `data.x`: Node features (cell types as one-hot vectors)
  - `data.edge_index`: Edge connectivity
  - `data.edge_attr`: Edge distances (optional, used for edge weight transformation)
  - `data.pos`: Cell positions within the patch
  - `data.instance_coord`: Patch coordinates within the WSI
  - Additional metadata: `data.y`, `data.cell_id`, `data.bbox`
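The node and edge tensors above can be built with plain PyTorch. A hedged sketch with hypothetical cell positions and types (in the repo, the resulting tensors would be wrapped in a `torch_geometric.data.Data` object rather than used directly):

```python
import torch

# Hypothetical example: 5 cells, types indexed into [others, lbc, cc, rm],
# with (x, y) pixel positions inside one patch.
cell_types = torch.tensor([0, 1, 1, 2, 3])
pos = torch.tensor([[10., 20.], [15., 25.], [100., 40.], [16., 22.], [200., 210.]])

# Node features: one-hot encoded cell types (4 classes).
x = torch.nn.functional.one_hot(cell_types, num_classes=4).float()

# Radius graph with r = 60 px: connect every pair of distinct cells
# whose Euclidean distance is below the radius.
dist = torch.cdist(pos, pos)                                  # pairwise distances
mask = (dist < 60.0) & ~torch.eye(len(pos), dtype=torch.bool)  # drop self-loops
edge_index = mask.nonzero().t()                                # shape (2, num_edges)
edge_attr = dist[mask].unsqueeze(-1)                           # distances as edge_attr

# In the repo these would become e.g.:
# data = torch_geometric.data.Data(x=x, edge_index=edge_index,
#                                  edge_attr=edge_attr, pos=pos)
print(x.shape, edge_index.shape)
```

Note that a radius graph built this way is symmetric: each undirected neighbor pair appears as two directed edges in `edge_index`.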
The cell graphs should be pre-processed and saved as `.pt` files that can be loaded by the dataset classes in the `modules/` directory. The directory structure should organize files by subtype (DLBCL, FL, Reactive), with filenames following the pattern `{slide_id}_{coordinates}.pt`.
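A sketch of writing graphs to this layout. The exact encoding of `{coordinates}` is an assumption here (two underscore-separated patch offsets), as is the placeholder dictionary standing in for a real PyG `Data` object:

```python
from pathlib import Path
import torch

# Create one directory per subtype, mirroring the expected layout.
root = Path("cell_graphs")
for subtype in ["DLBCL", "FL", "Reactive"]:
    (root / subtype).mkdir(parents=True, exist_ok=True)

# Hypothetical patch from slide "slide_001" at WSI offset (4096, 8192).
slide_id, xy = "slide_001", (4096, 8192)
graph = {"x": torch.zeros(5, 4)}  # placeholder; a real PyG Data object in practice

# {slide_id}_{coordinates}.pt, assuming coordinates are "{x}_{y}".
path = root / "DLBCL" / f"{slide_id}_{xy[0]}_{xy[1]}.pt"
torch.save(graph, path)
print(path.exists())  # True
```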
```bash
# Train WEG-MoE
cd weg_moe
python train_ddp.py --seed 0 --use_num_gpus 8

# Train GIN (Cell Graph Expert)
cd uni_modal/GIN
python train_ddp.py --seed 0 --use_num_gpus 8

# Train UNI2 (Image Expert)
cd uni_modal/UNI2
python train_ddp.py --seed 0 --use_num_gpus 8
```

```bash
cd weg_moe

# Run both inference and evaluation
python inference_and_evaluation.py --mode both --seeds 0 1 2 3 4

# Run inference only
python inference_and_evaluation.py --mode inference --seeds 0 1 2 3 4

# Run evaluation only (requires existing inference results)
python inference_and_evaluation.py --mode evaluate --result_dir instance_wise_logits
```

- Python 3.8+
- PyTorch 1.9+
- PyTorch Geometric
- scikit-learn
- OpenSlide
- NumPy
If you use this code in your research, please cite:
```bibtex
@InProceedings{NisDai_Explainable_MICCAI2025,
    author    = {Nishiyama, Daiki and Miyoshi, Hiroaki and Hashimoto, Noriaki and Ohshima, Koichi and Hontani, Hidekata and Takeuchi, Ichiro and Sakuma, Jun},
    title     = {{Explainable Classifier for Malignant Lymphoma Subtyping via Cell Graph and Image Fusion}},
    booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2025},
    year      = {2025},
    publisher = {Springer Nature Switzerland},
    volume    = {LNCS 15971},
    month     = {September},
    pages     = {320--330}
}
```

This project is licensed under the MIT License - see the LICENSE file for details.