|
--- |
|
license: apache-2.0 |
|
datasets: |
|
- Lin-Chen/ShareGPT4V |
|
- liuhaotian/LLaVA-Pretrain |
|
- liuhaotian/LLaVA-Instruct-150K |
|
language: |
|
- en |
|
- zh |
|
tags: |
|
- llava |
|
- vision-language |
|
- llm |
|
- lmm |
|
--- |
|
<h2 align="center"> <a href="https://arxiv.org/abs/2402.14289">TinyLLaVA: A Framework of Small-scale Large Multimodal Models</a> </h2>
|
|
|
<h5 align="center"> |
|
|
|
[![github](https://img.shields.io/badge/GitHub-TinyLLaVA-blue)](https://github.com/DLCV-BUAA/TinyLLaVABench) [![arXiv](https://img.shields.io/badge/Arxiv-2402.14289-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2402.14289) [![License](https://img.shields.io/badge/License-Apache%202.0-yellow)](https://github.com/DLCV-BUAA/TinyLLaVABench/blob/main/LICENSE)

</h5>
|
|
|
|
|
## 🎉 News |
|
* **[2024.02.25]** Updated evaluation scripts and docs!

* **[2024.02.25]** Released data descriptions, along with TinyLLaVA-1.5B and TinyLLaVA-2.0B!
|
* **[2024.02.24]** Example code on inference and model loading added! |
|
* **[2024.02.23]** Evaluation code and scripts released! |
|
* **[2024.02.21]** Created the [TinyLLaVABench](https://github.com/DLCV-BUAA/TinyLLaVABench) repository on GitHub!
|
* **[2024.02.21]** Our paper: [TinyLLaVA: A Framework of Small-scale Large Multimodal Models](https://arxiv.org/abs/2402.14289) is out! |
|
* **[2024.01.11]** Our first model [TinyLLaVA-1.4B](https://huggingface.co/bczhou/tiny-llava-v1-hf) is out!
|
|
|
## ⌛ TODO |
|
- [ ] Add support for Ollama and llama.cpp. |
|
- [ ] Developers' guide / how to build the demo locally.
|
- [x] Model Zoo descriptions. |
|
- [x] Examples and inference. |
|
- [x] Release code for training. |
|
- [x] Add descriptions for evaluation. |
|
- [x] Add descriptions for data preparation. |
|
- [x] Release TinyLLaVA-1.5B and TinyLLaVA-2.0B. |
|
- [x] Release TinyLLaVA-3.1B. |
|
- [x] Release the evaluation code and weights (2024.02.23).
|
### 🔥 High performance, but with fewer parameters |
|
|
|
- Our best model, TinyLLaVA-3.1B, achieves better overall performance than existing 7B models such as LLaVA-1.5 and Qwen-VL.
|
|
|
## 🐳 Model Zoo |
|
### Legacy Model |
|
- [tiny-llava-v1-hf](https://huggingface.co/bczhou/tiny-llava-v1-hf)
|
|
|
### Pretrained Models |
|
- [TinyLLaVA-3.1B](https://huggingface.co/bczhou/TinyLLaVA-3.1B) |
|
- [TinyLLaVA-2.0B](https://huggingface.co/bczhou/TinyLLaVA-2.0B) |
|
- [TinyLLaVA-1.5B](https://huggingface.co/bczhou/TinyLLaVA-1.5B) |
|
|
|
### Model Details |
|
| Name | LLM | Checkpoint | LLaVA-Bench-Wild | MME | MMBench | MM-Vet | SQA-image | VQA-v2 | GQA | TextVQA | |
|
|---------------|-------------------|------------------------------------------------|------------------|----------|---------|--------|-----------|--------|-------|---------| |
|
| TinyLLaVA-3.1B | Phi-2 | [TinyLLaVA-3.1B](https://huggingface.co/bczhou/TinyLLaVA-3.1B) | 75.8 | 1464.9 | 66.9 | 32.0 | 69.1 | 79.9 | 62.0 | 59.1 | |
|
| TinyLLaVA-2.0B | StableLM-2-1.6B | [TinyLLaVA-2.0B](https://huggingface.co/bczhou/TinyLLaVA-2.0B) | 66.4 | 1433.8 | 63.3 | 32.6 | 64.7 | 78.9 | 61.9 | 56.4 | |
|
| TinyLLaVA-1.5B | TinyLlama | [TinyLLaVA-1.5B](https://huggingface.co/bczhou/TinyLLaVA-1.5B) | 60.8 | 1276.5 | 55.2 | 25.8 | 60.3 | 76.9 | 60.3 | 51.7 | |
|
|
|
|
|
|
|
## 🔧 Requirements and Installation |
|
|
|
We recommend the following setup.
|
|
|
1. Clone this repository and navigate to the TinyLLaVABench folder
|
```Shell
git clone https://github.com/DLCV-BUAA/TinyLLaVABench.git
cd TinyLLaVABench
```
|
|
|
2. Install the package
|
```Shell
conda create -n tinyllava python=3.10 -y
conda activate tinyllava
pip install --upgrade pip  # enable PEP 660 support
pip install -e .
```
|
|
|
3. Install additional packages for training
|
```Shell
pip install -e ".[train]"
pip install flash-attn --no-build-isolation
```
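
You can confirm that the training extras built correctly with a short check (our suggestion, not from the repository docs; it assumes a CUDA build of PyTorch):

```Python
# Sanity check (our suggestion): PyTorch sees a GPU and flash-attn imports cleanly.
import torch
import flash_attn  # raises ImportError if the build in step 3 failed

print(torch.__version__, torch.cuda.is_available(), flash_attn.__version__)
```
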
|
### Upgrade to the latest code base
|
|
|
```Shell
git pull
pip install -e .

# if you see import errors after upgrading, try running the command below (without the #)
# pip install flash-attn --no-build-isolation --no-cache-dir
```
|
|
|
|
|
## 🔧 Quick Start |
|
|
|
<details> |
|
<summary>Load model</summary> |
|
|
|
```Python
from tinyllava.model.builder import load_pretrained_model
from tinyllava.mm_utils import get_model_name_from_path

model_path = "bczhou/TinyLLaVA-3.1B"

tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,
    model_name=get_model_name_from_path(model_path)
)
```
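
The loader returns the tokenizer, the model, the image processor, and the model's context length. As a quick smoke test of the loaded components, here is a minimal preprocessing sketch (our illustration; it assumes `image_processor` follows the standard Hugging Face image-processor interface):

```Python
# Minimal sketch (assumption: image_processor implements the standard
# Hugging Face image-processor call interface).
import requests
from PIL import Image

url = "https://llava-vl.github.io/static/images/view.jpg"
image = Image.open(requests.get(url, stream=True).raw)

pixel_values = image_processor(image, return_tensors="pt")["pixel_values"]
print(pixel_values.shape)  # a batch of one preprocessed image tensor
```
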
|
</details> |
|
|
|
## 🔧 Run Inference |
|
Here is an example of running inference with [TinyLLaVA-3.1B](https://huggingface.co/bczhou/TinyLLaVA-3.1B).
|
<details> |
|
<summary>Run Inference</summary> |
|
|
|
```Python
from tinyllava.mm_utils import get_model_name_from_path
from tinyllava.eval.run_tiny_llava import eval_model

model_path = "bczhou/TinyLLaVA-3.1B"
prompt = "What are the things I should be cautious about when I visit here?"
image_file = "https://llava-vl.github.io/static/images/view.jpg"

args = type('Args', (), {
    "model_path": model_path,
    "model_base": None,
    "model_name": get_model_name_from_path(model_path),
    "query": prompt,
    "conv_mode": "phi",  # must match the model; see the table below
    "image_file": image_file,
    "sep": ",",
    "temperature": 0,  # 0 disables sampling, i.e. greedy decoding
    "top_p": None,
    "num_beams": 1,
    "max_new_tokens": 512
})()

eval_model(args)
```
|
</details> |
|
|
|
### Important |
|
We use a different `conv_mode` for each model family. Set `conv_mode` in `args` according to this table:
|
| model | conv_mode | |
|
|-------------------|---------------| |
|
| TinyLLaVA-3.1B | phi | |
|
| TinyLLaVA-2.0B | phi | |
|
| TinyLLaVA-1.5B | v1 | |
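
If you script over several checkpoints, a small helper (hypothetical, not part of the TinyLLaVA API) keeps this mapping in one place:

```Python
# Hypothetical helper (ours, not from the TinyLLaVA codebase): map each
# checkpoint to the conv_mode listed in the table above.
CONV_MODES = {
    "bczhou/TinyLLaVA-3.1B": "phi",
    "bczhou/TinyLLaVA-2.0B": "phi",
    "bczhou/TinyLLaVA-1.5B": "v1",
}

def conv_mode_for(model_path: str) -> str:
    try:
        return CONV_MODES[model_path]
    except KeyError:
        raise ValueError(f"No conv_mode known for {model_path!r}; see the table above.")
```
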
|
|
|
## 📊 Evaluation
|
To ensure reproducibility, we evaluate all models with greedy decoding.
|
|
|
See [Evaluation.md](https://github.com/DLCV-BUAA/TinyLLaVABench/blob/main/docs/Evaluation.md) for details.
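
In practice, greedy decoding corresponds to the settings already used in the inference example above: sampling disabled and a single beam. A minimal sketch of those settings (our illustration; Evaluation.md contains the actual scripts):

```Python
# Decoding settings that make generation deterministic (greedy), mirroring
# the Run Inference example. Sketch only; see Evaluation.md for the real scripts.
GREEDY_DECODING = {
    "temperature": 0,  # disable sampling
    "top_p": None,
    "num_beams": 1,    # a single beam reduces beam search to greedy search
}
```
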
|
|
|
|
|
## ✏ Citation |
|
|
|
If you find our paper and code useful in your research, please consider giving us a star :star: and a citation :pencil:.
|
|
|
```BibTeX
@misc{zhou2024tinyllava,
      title={TinyLLaVA: A Framework of Small-scale Large Multimodal Models},
      author={Baichuan Zhou and Ying Hu and Xi Weng and Junlong Jia and Jie Luo and Xien Liu and Ji Wu and Lei Huang},
      year={2024},
      eprint={2402.14289},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```
|
|