
This repo is a work in progress. I'm trying to release all items as soon as possible.


RSCoVLM: Co-Training Vision Language Models for Remote Sensing Multi-task Learning

Qingyun Li*, Shuran Ma*, Junwei Luo*, Yi Yu*, Yue Zhou, Fengxiang Wang, Xudong Lu, Xiaoxing Wang, Xin He, Yushi Chen, Xue Yang, Junchi Yan

If you find our work helpful, please consider giving us a ⭐!

This repo is a technical practice of using Remote Sensing data to Collaboratively train Large Vision Language Models, and it hosts the official implementation of the paper: Co-Training Vision Language Models for Remote Sensing Multi-task Learning.

Abstract

[Figure: overview of the RSCoVLM method]

With Transformers achieving outstanding performance on individual remote sensing (RS) tasks, we are now approaching the realization of a unified model that excels across multiple tasks through multi-task learning (MTL). Compared to single-task approaches, MTL methods offer improved generalization, enhanced scalability, and greater practical applicability. Recently, vision language models (VLMs) have achieved promising results in RS image understanding, grounding, and ultra-high-resolution (UHR) image reasoning. Moreover, the unified text-based interface demonstrates significant potential for MTL. Hence, in this work, we present RSCoVLM, a simple yet flexible VLM baseline for RS MTL. Firstly, we create the data curation engine, including data acquisition, offline processing and integrating, as well as online loading and weighting. This data engine effectively addresses the complex RS data environment and generates flexible vision-language conversations. Furthermore, we propose a unified dynamic-resolution strategy to address the diverse image scales inherent in RS imagery. For UHR images, we introduce the Zoom-in Chain mechanism together with its corresponding dataset, LRS-VQA-Zoom. These strategies are flexible and effectively mitigate the computational burden. Additionally, we significantly enhance the model’s object detection capability and propose a novel evaluation protocol that ensures fair comparison between VLMs and conventional detection models. Extensive experiments demonstrate that RSCoVLM achieves state-of-the-art performance across diverse tasks, outperforming existing RS VLMs and even rivaling specialized expert models. All the training and evaluating tools, model weights, and datasets have been fully open-sourced to support reproducibility. We expect that this baseline will promote further progress toward general-purpose RS models.

Get Started

First, refer to Enviroment.md to prepare an environment.

To train RSCoVLM, first refer to Data.md to prepare/download the data.

NOTE: We support multi-node distributed training based on torchrun. If your resource platform is different and requires multi-node distributed training, you may need to adapt the shell scripts to your platform; alternatively, you can multiply the gradient_accumulation_steps option by the node count and train on a single node. Contact us in an issue for more support. A minimal launch sketch is shown below.
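
For reference only, here is a minimal sketch of a two-node torchrun launch and its single-node equivalent. The entry point train.py, the GPU count, and the environment variables are placeholders, not this repo's actual scripts or arguments; adapt them to the shell scripts shipped with the codebase.

# Multi-node launch: run this on each of the 2 nodes.
# NODE_RANK is 0 on the master node and 1 on the other; MASTER_ADDR points to the master node.
torchrun \
  --nnodes=2 \
  --nproc_per_node=8 \
  --node_rank=$NODE_RANK \
  --master_addr=$MASTER_ADDR \
  --master_port=29500 \
  train.py --gradient_accumulation_steps 4

# Single-node alternative with the same effective batch size:
# multiply gradient_accumulation_steps by the node count (2 x 4 = 8).
torchrun --nproc_per_node=8 train.py --gradient_accumulation_steps 8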

TODO: add instructions for practices and interfaces

Update to the latest version

We may update the codebase; see the commit log for the latest changes.

If you have installed the previous version and would like to update to the latest version, you can:

cd RSCoVLM
git pull origin main  # if you have local modifications, commit your changes and merge the branches
pip install -e .

Contact and Acknowledge

Feel free to contact me via email (21b905003@stu.hit.edu.cn) or a GitHub issue. I'll continue to maintain this repo.

The code is based on Transformers and MMRotate. Many modules refer to InternVL and LLaVA. The model architecture benefits from the open-source general-purpose vision-language model Qwen-VL series. Thanks for their brilliant work.

Citation

If you find our paper or benchmark helpful for your research, please consider citing our paper and giving this repo a star ⭐. Thank you very much!

The manuscript was submitted on 2026.11.14 and is undergoing review.

@article{li2025rscovlm,
  title={Co-Training Vision Language Models for Remote Sensing Multi-task Learning}, 
  author={Qingyun Li and Shuran Ma and Junwei Luo and Yi Yu and Yue Zhou and Fengxiang Wang and Xudong Lu and Xiaoxing Wang and Xin He and Yushi Chen and Xue Yang and Junchi Yan},
  journal={arXiv preprint arXiv:2511.21272},
  year={2025},
}

@article{li2025lmmrotate,
  title={A Simple Aerial Detection Baseline of Multimodal Language Models},
  author={Li, Qingyun and Chen, Yushi and Shu, Xinya and Chen, Dong and He, Xin and Yu, Yi and Yang, Xue},
  journal={arXiv preprint arXiv:2501.09720},
  year={2025}
}
