**Tile-level encoders**

| Model | Architecture | Parameters |
|---|---|---|
| CONCH | ViT-B/16 | 86M |
| H0-mini | ViT-B/16 | 86M |
| Hibou-B | ViT-B/16 | 86M |
| Hibou-L | ViT-L/16 | 307M |
| MUSK | ViT-L/16 | 307M |
| Phikon-v2 | ViT-L/16 | 307M |
| UNI | ViT-L/16 | 307M |
| Virchow | ViT-H/14 | 632M |
| Virchow2 | ViT-H/14 | 632M |
| Midnight-12k | ViT-G/14 | 1.1B |
| UNI2 | ViT-G/14 | 1.1B |
| Prov-GigaPath | ViT-G/14 | 1.1B |
| H-optimus-0 | ViT-G/14 | 1.1B |
| H-optimus-1 | ViT-G/14 | 1.1B |
| Kaiko | Various | 86M - 307M |

**Slide-level encoders**

| Model | Architecture | Parameters |
|---|---|---|
| TITAN | Transformer | 49M |
| Prov-GigaPath | Transformer (LongNet) | 87M |
| PRISM | Perceiver Resampler | 99M |
System requirements: Linux-based OS (e.g., Ubuntu 22.04) with Python 3.10+ and Docker installed.
We recommend running the script inside a container using the latest slide2vec image from Docker Hub:

```bash
docker pull waticlems/slide2vec:latest
docker run --rm -it \
  -v /path/to/your/data:/data \
  -e HF_TOKEN=<your-huggingface-api-token> \
  waticlems/slide2vec:latest
```

Replace `/path/to/your/data` with your local data directory.
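If you intend to run feature extraction on GPU inside the container, expose the host GPUs with Docker's standard `--gpus` flag (this assumes NVIDIA drivers and the NVIDIA Container Toolkit are installed on the host):

```bash
docker run --rm -it \
  --gpus all \
  -v /path/to/your/data:/data \
  -e HF_TOKEN=<your-huggingface-api-token> \
  waticlems/slide2vec:latest
```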
Alternatively, you can install slide2vec via pip:

```bash
pip install slide2vec
```
- Create a `.csv` file with slide paths. Optionally, you can provide paths to pre-computed tissue masks (a sketch for generating this file programmatically is shown after this list):

  ```csv
  wsi_path,mask_path
  /path/to/slide1.tif,/path/to/mask1.tif
  /path/to/slide2.tif,/path/to/mask2.tif
  ...
  ```
- Create a configuration file. Good starting points are the default configuration files, where parameters are documented:

  - for preprocessing options: `slide2vec/configs/default_tiling.yaml`
  - for model options: `slide2vec/configs/default_model.yaml`

  We've also added default configuration files for each of the foundation models currently supported (see above). A minimal config sketch is shown after this list.
- Kick off distributed feature extraction:

  ```bash
  python3 -m slide2vec.main --config-file </path/to/config.yaml>
  ```
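As mentioned in the first step, the slide manifest can be generated programmatically. Below is a minimal Python sketch; the directory layout (`/data/slides`, `/data/masks`), the `.tif` extension, and the filename-matching convention for masks are assumptions for illustration:

```python
from pathlib import Path
import csv

SLIDE_DIR = Path("/data/slides")  # assumption: slides live here
MASK_DIR = Path("/data/masks")    # assumption: optional pre-computed masks live here

with open("slides.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["wsi_path", "mask_path"])
    for wsi in sorted(SLIDE_DIR.glob("*.tif")):
        # assumption: a mask, if present, shares the slide's filename
        mask = MASK_DIR / wsi.name
        writer.writerow([str(wsi), str(mask) if mask.exists() else ""])
```

If you have no masks, you can likely omit the `mask_path` column altogether, since masks are optional per the step above.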
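For the configuration step, here is a sketch of what a combined config might look like. All keys below are hypothetical and for illustration only; the authoritative names, defaults, and documentation live in `slide2vec/configs/default_tiling.yaml` and `slide2vec/configs/default_model.yaml`:

```yaml
# Hypothetical sketch; consult the default configs for the actual schema.
csv: /data/slides.csv        # manifest from the first step (hypothetical key)
output_dir: /data/features   # where embeddings are written (hypothetical key)
tiling:
  tile_size: 256             # tile edge length in pixels (hypothetical key)
  spacing: 0.5               # target microns per pixel (hypothetical key)
model:
  name: uni                  # one of the supported encoders above (hypothetical key)
```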
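Several of the encoders above are gated on Hugging Face, so the `HF_TOKEN` environment variable (already passed into the Docker container earlier) must also be set when running outside Docker. A minimal sketch, assuming your config lives at `/data/config.yaml`:

```bash
# Assumption: HF_TOKEN is needed to download gated Hugging Face checkpoints,
# mirroring the -e HF_TOKEN flag in the Docker command above.
export HF_TOKEN=<your-huggingface-api-token>
python3 -m slide2vec.main --config-file /data/config.yaml
```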