---
license: mit
datasets:
- Qingyun/lmmrotate-sft-data
language:
- en
base_model:
- microsoft/Florence-2-large
pipeline_tag: image-text-to-text
tags:
- aerial
- geoscience
- remotesensing
---
<p align="center">
<h1 align="center">LMMRotate 🎮: A Simple Aerial Detection Baseline of Multimodal Language Models</h1>
<p align="center">
<a href='https://scholar.google.com/citations?hl=en&user=TvsTun4AAAAJ' style='text-decoration: none' >Qingyun Li</a><sup></sup> 
<a href='https://scholar.google.com/citations?user=A39S7JgAAAAJ&hl=en' style='text-decoration: none' >Yushi Chen</a><sup></sup> 
<a href='https://www.researchgate.net/profile/Shu-Xinya' style='text-decoration: none' >Xinya Shu</a><sup></sup> 
<a href='https://scholar.google.com/citations?hl=en&user=UzPtYnQAAAAJ' style='text-decoration: none' >Dong Chen</a><sup></sup> 
<a href='https://scholar.google.com/citations?hl=en&user=WQgE8l8AAAAJ' style='text-decoration: none' >Xin He</a><sup></sup> 
<a href='https://scholar.google.com/citations?user=OYtSc4AAAAAJ&hl=en' style='text-decoration: none' >Yi Yu</a><sup></sup> 
<a href='https://yangxue0827.github.io/' style='text-decoration: none' >Xue Yang</a><sup></sup> 
<p align='center'>
If you find our work helpful, please consider giving us a ⭐!
</p>
</p>
</p>
- ArXiv Paper: https://arxiv.org/abs/2501.09720
- GitHub Repo: https://github.com/Li-Qingyun/mllm-mmrotate
- HuggingFace Page: https://huggingface.co/collections/Qingyun/lmmrotate-6780cabaf49c4e705023b8df
This repo hosts all the available checkpoints of Florence-2 trained for aerial detection with LMMRotate in [our paper](https://arxiv.org/abs/2501.09720).
LMMRotate is a technical practice for fine-tuning large multimodal language models for oriented object detection in the style of MMRotate, and it hosts the official implementation of the paper *A Simple Aerial Detection Baseline of Multimodal Language Models*.
<img src="https://github.com/user-attachments/assets/d34e4c0c-9e04-446e-a511-2e7005e32074" alt="framework" width="100%" />
See the list of available checkpoints [here](https://huggingface.co/Qingyun/Florence-2-models-lmmrotate/tree/main).
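For reference, below is a minimal inference sketch. It assumes the downloaded checkpoint folder keeps the standard Florence-2 layout and loads through `transformers` with `trust_remote_code=True`; the detection prompt and the decoding of oriented-box outputs are specific to LMMRotate, so please refer to the GitHub repo for the exact task prompt and parsing utilities. The checkpoint path and image file are placeholders.

```python
# Minimal sketch (assumptions: the checkpoint keeps the Florence-2 format and
# loads via transformers remote code; "<OD>" is a placeholder task prompt --
# see the LMMRotate GitHub repo for the actual prompt and output parsing).
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

ckpt = "checkpoint/florence-2-b_vis1024-lang2048_dota1-train-v2_b2x16-100e-slurm-zero2"  # placeholder local path
model = AutoModelForCausalLM.from_pretrained(ckpt, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(ckpt, trust_remote_code=True)

image = Image.open("demo_aerial.png").convert("RGB")  # placeholder image
inputs = processor(text="<OD>", images=image, return_tensors="pt")
generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=2048,
)
print(processor.batch_decode(generated_ids, skip_special_tokens=False)[0])
```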
Each checkpoint folder is named `{base_model}_vis{vision_input_size}-lang{max_language_input_length}_{dataset_name}-{annotation_version}_b{samples_per_gpu}x{num_gpus}-{num_epoch}e-{note}` (a small parsing sketch follows the example below).
For example:
> `florence-2-b_vis1024-lang2048_dota1-train-v2_b2x16-100e-slurm-zero2`:
> - **base_model**: Microsoft/Florence-2-base
> - **vision input size**: 1024 × 1024
> - **max language input length**: 2048
> - **aerial detection source dataset name**: dota-train (`train` split of `split_ss_dota`)
> - **annotation version**: v2 (users can ignore this)
> - **batch size and resources**: 2 samples per GPU × 16 GPUs (total batch size 32)
> - **schedule**: 100 epochs
> - **note**: the model is trained on a slurm cluster and accelerated with DeepSpeed ZeRO2
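To make the naming template concrete, here is a small hypothetical helper (not part of the LMMRotate codebase) that unpacks a folder name into its components. The dataset name and annotation version are kept together, since both may contain dashes.

```python
def parse_checkpoint_name(name: str) -> dict:
    """Split a checkpoint folder name following the template above.

    Example: "florence-2-b_vis1024-lang2048_dota1-train-v2_b2x16-100e-slurm-zero2"
    """
    base_model, io_spec, data_spec, run_spec = name.split("_", 3)
    vis, lang = io_spec.split("-")                 # "vis1024", "lang2048"
    batch, epochs, *note = run_spec.split("-")     # "b2x16", "100e", optional note parts
    return {
        "base_model": base_model,
        "vision_input_size": int(vis.removeprefix("vis")),
        "max_language_input_length": int(lang.removeprefix("lang")),
        "dataset_and_annotation_version": data_spec,   # e.g. "dota1-train-v2"
        "samples_per_gpu_x_num_gpus": batch.removeprefix("b"),
        "epochs": int(epochs.rstrip("e")),
        "note": "-".join(note),
    }
```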
## Downloading Guide
You can download the files with your web browser from [the repo's file page](https://huggingface.co/Qingyun/Florence-2-models-lmmrotate/tree/main).
We recommend downloading in the terminal using `huggingface-cli` (`pip install --upgrade "huggingface_hub[cli]"`). You can refer to [the documentation](https://huggingface.co/docs/huggingface_hub/guides/download) for more usage options.
```
# Set a Hugging Face mirror for users in mainland China (if required):
export HF_ENDPOINT=https://hf-mirror.com
# Download a certain checkpoint:
huggingface-cli download Qingyun/Florence-2-models-lmmrotate <checkpoint_folder_name> --repo-type model --local-dir checkpoint/
# If an error (such as a network error) interrupts the download, simply re-run the same command;
# recent versions of huggingface_hub will resume the download.
```
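If you prefer scripting the download from Python, the same can be done with `huggingface_hub.snapshot_download`. The folder name below is the example checkpoint from above; replace it with the checkpoint you need.

```python
from huggingface_hub import snapshot_download

# Download a single checkpoint folder from the model repo into ./checkpoint/
snapshot_download(
    repo_id="Qingyun/Florence-2-models-lmmrotate",
    repo_type="model",
    allow_patterns="florence-2-b_vis1024-lang2048_dota1-train-v2_b2x16-100e-slurm-zero2/*",
    local_dir="checkpoint/",
)
```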
## Detection Performance

## Cite
LMMRotate paper:
```
@article{li2025lmmrotate,
  title={A Simple Aerial Detection Baseline of Multimodal Language Models},
  author={Li, Qingyun and Chen, Yushi and Shu, Xinya and Chen, Dong and He, Xin and Yu, Yi and Yang, Xue},
  journal={arXiv preprint arXiv:2501.09720},
  year={2025}
}
```