This is the official PyTorch implementation of our LCAFNet. The paper can be downloaded at LCAFNet.
Create a conda virtual environment, activate it, and install the dependencies:
- conda create --name MOD python=3.9
- conda activate MOD
- pip install -r requirements.txt
Download the datasets (FLIR, LLVIP, M3FD, MFAD) and place them in a dataset folder.
Download our LCAFNet weights and place them in a weights folder:
- FLIR dataset: LCAFNet_FLIR.pt
- LLVIP dataset: LCAFNet_LLVIP.pt
- M3FD dataset: LCAFNet_M3FD.pt
- MFAD dataset: LCAFNet_MFAD.pt
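The folder layout implied above can be sketched as follows. This is a minimal illustration, not code from the repository: the folder name `weights` and the checkpoint filenames come from the list above, and the `torch.load` remark is the standard PyTorch way to read such files.

```python
from pathlib import Path

# Hypothetical sketch of the expected weights layout; the folder name
# "weights" and these filenames follow the checkpoint list above.
weights_dir = Path('weights')
checkpoints = {
    'FLIR': weights_dir / 'LCAFNet_FLIR.pt',
    'LLVIP': weights_dir / 'LCAFNet_LLVIP.pt',
    'M3FD': weights_dir / 'LCAFNet_M3FD.pt',
    'MFAD': weights_dir / 'LCAFNet_MFAD.pt',
}
# A checkpoint would typically be read with torch.load(checkpoints['FLIR'],
# map_location='cpu') before being passed to the model.
```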
The dataset path, GPU id, batch size, etc., need to be adjusted to match your setup.
python train.py
python test.py
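The settings mentioned above are the kind usually exposed as command-line flags. Below is a purely hypothetical sketch of such an interface; the actual flag names accepted by `train.py` may differ, so check the script before relying on any of these.

```python
import argparse

# Hypothetical flag sketch (the real names in train.py may differ):
# these are the kinds of settings noted above as needing adjustment.
parser = argparse.ArgumentParser(description='LCAFNet training (sketch)')
parser.add_argument('--data', default='dataset/FLIR', help='dataset root path')
parser.add_argument('--device', default='0', help='GPU id, or "cpu"')
parser.add_argument('--batch-size', type=int, default=8, help='images per batch')
args = parser.parse_args([])  # empty list -> defaults, for illustration
```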
If you find LCAFNet helpful for your research, please consider citing our work.
@article{Wu2026,
  author  = {Wencong Wu and Hongxi Zhang and Xiuwei Zhang and Hanlin Yin and Yanning Zhang},
  title   = {Lightweight modal-guided cross-attention fusion network for visible-infrared object detection},
  journal = {Pattern Recognition},
  volume  = {177},
  pages   = {113350},
  year    = {2026}
}