# HarDNet-DFUS: Enhancing Backbone and Decoder of HarDNet-MSEG for Diabetic Foot Ulcer Image Segmentation

Official PyTorch implementation of HarDNet-DFUS, containing the prediction code for our submission to the Diabetic Foot Ulcer Segmentation Challenge 2022 (DFUC2022) at MICCAI 2022.
Performance of the HarDNet family (inference on V100):

- Image Classification: HarDNet, 78.0 top-1 acc. / 1029.76 throughput on ImageNet-1K @224x224
- Object Detection: CenterNet-HarDNet, 44.3 mAP / 60 FPS on COCO val @512x512
- Semantic Segmentation: FC-HarDNet, 75.9% mIoU / 90.24 FPS on Cityscapes test @1024x2048
- Polyp Segmentation: HarDNet-MSEG, 90.4% mDice / 119 FPS on Kvasir-SEG @352x352
We improve HarDNet-MSEG, enhancing its backbone and decoder for DFUC.
| Method | DFUC Val. Stage mDice | DFUC Val. Stage mIoU | DFUC Testing Stage mDice | DFUC Testing Stage mIoU |
|---|---|---|---|---|
| HarDNet-MSEG | 65.53 | 55.22 | n/a | n/a |
| HarDNet-DFUS | 70.63 | 60.49 | 72.87 | 62.52 |
Sample Inference and Visualized Results of FUSeg Challenge Dataset
(Because of the non-disclosure agreement covering the DFUC2022 dataset, we visualize the results on a different dataset.)
```shell
conda create -n dfuc python=3.6
conda activate dfuc
pip install -r requirements.txt
```
- Download the weights and place them in the `/weights` folder.
- Run:

```shell
python train.py --rect --augmentation --data_path /path/to/training/data
```

Optional Args:
- `--rect` Pad the image to a square before resizing, to keep its aspect ratio
- `--augmentation` Activate data augmentation during training
- `--kfold` Specify the number of folds for K-fold cross-validation
- `--k` Train the specified fold of the K-fold cross-validation
- `--dataratio` Specify the ratio of data used for training
- `--seed` Reproduce the data split in the dataloader
- `--data_path` Path to the training data
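The `--kfold`, `--k`, `--seed`, and `--dataratio` flags imply a deterministic, seeded K-fold split of the training files. A minimal sketch of how such a split could work (our illustration with a hypothetical `kfold_split` helper, not the repository's actual dataloader code):

```python
import random

def kfold_split(filenames, kfold=5, k=0, seed=42, dataratio=1.0):
    """Return (train, val) file lists for fold k of a K-fold split.

    Hypothetical helper illustrating the --kfold/--k/--seed/--dataratio
    flags; the real dataloader in train.py may differ.
    """
    files = sorted(filenames)              # fixed order before shuffling
    random.Random(seed).shuffle(files)     # same seed -> same split
    files = files[: int(len(files) * dataratio)]  # optionally use a subset
    fold_size = len(files) // kfold
    val = files[k * fold_size : (k + 1) * fold_size]   # fold k as validation
    train = [f for f in files if f not in val]         # the rest as training
    return train, val
```

Because the shuffle is driven by a fixed seed, running the same fold twice reproduces the identical split, which is what `--seed` is for.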
Run:

```shell
python test.py --rect --weight path/to/weight/or/folder --data_path path/to/testing/data
```

Optional Args:
- `--rect` Pad the image to a square before resizing, to keep its aspect ratio
- `--tta` Test-time augmentation; 'v'/'h'/'vh' for vertical / horizontal / vertical-and-horizontal flips
- `--weight` Path to a weight file or a folder; if a folder, the result is the mean of the results from each weight
- `--data_path` Path to the testing data
- `--save_path` Path for saving the prediction masks
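The `--tta` flag averages predictions over flipped copies of each input. A sketch of what vh-flip test-time augmentation typically looks like (our illustration; `model` is any callable mapping an HxW array to an HxW probability mask, and this is not the repository's actual implementation):

```python
import numpy as np

def tta_predict(model, image, tta="vh"):
    """Average model predictions over flipped views of the input.

    Illustrative sketch of the --tta v/h/vh behaviour; each flipped
    prediction is flipped back before averaging.
    """
    preds = [model(image)]
    if "v" in tta:  # vertical flip (up-down)
        preds.append(model(image[::-1, :])[::-1, :])
    if "h" in tta:  # horizontal flip (left-right)
        preds.append(model(image[:, ::-1])[:, ::-1])
    if tta == "vh":  # both flips combined
        preds.append(model(image[::-1, ::-1])[::-1, ::-1])
    return np.mean(preds, axis=0)  # mean over all augmented views
```

The same averaging idea extends to the folder form of `--weight`: run each checkpoint's (optionally TTA-averaged) prediction and take the mean mask across checkpoints.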
Run:

```shell
python evaluate.py --image_root path/to/image/folder --gt_root path/to/ground_truth/folder
```

Optional Args:
- `--image_root` Path to the prediction results
- `--gt_root` Path to the ground-truth data
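The mDice and mIoU figures in the table above are per-image Dice and IoU scores averaged over the dataset. A minimal sketch of the two metrics for a single pair of binary masks (our illustration; `evaluate.py`'s exact thresholding and empty-mask handling may differ):

```python
import numpy as np

def dice_iou(pred, gt):
    """Compute Dice and IoU for a pair of binary masks.

    Dice = 2|P∩G| / (|P|+|G|),  IoU = |P∩G| / |P∪G|.
    A small epsilon guards against division by zero on empty masks.
    """
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)
    iou = inter / (union + 1e-8)
    return dice, iou
```

Averaging these two scores over every image in `--image_root` against its mask in `--gt_root` yields the mDice/mIoU values reported above.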
- Download the weights for HarDNet-DFUS and place them in the same folder, specifying that folder with `--weight` when testing. (Please ensure there are no other weights in the folder, to obtain the same result.)
- Run HarDNet-DFUS with 5-fold cross-validation and vh-flip TTA:

```shell
python test.py --rect --modelname lawinloss4 --weight /path/to/HarDNet-DFUS_weight/folder --data_path /path/to/testing/data --tta vh
```
- This research is supported in part by a grant from the Ministry of Science and Technology (MOST) of Taiwan.
- We thank the National Center for High-performance Computing (NCHC) for providing computational and storage resources.

