---
license: apache-2.0
datasets:
- 5CD-AI/LLaVA-CoT-o1-Instruct
base_model:
- google/vit-base-patch16-224
- deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
library_name: transformers
pipeline_tag: image-text-to-text
---

<p align="center">
  <img src="https://github.com/mkturkcan/deepseek-vlm/blob/main/assets/logo.png?raw=true" width="180" />
</p>

<h3 align="center">
  <p>DeepSeer: Vision Language Models with Reasoning</p>
</h3>

Vision language models with chain-of-thought reasoning are only just starting to emerge. This model is a proof of concept: a vision language model trained with thinking-enabled chat templates, built on the DeepSeek-R1 distilled models.
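DeepSeek-R1-style chat templates wrap the model's reasoning in `<think>...</think>` tags before the final answer. As an illustrative sketch (the helper below is not part of this repository), one way to separate the reasoning trace from the answer in a completion:

```python
def split_reasoning(output: str) -> tuple[str, str]:
    """Split a DeepSeek-R1-style completion into (reasoning, answer).

    Reasoning is the text inside <think>...</think>; the answer is
    everything after the closing tag. If no tags are present, the
    whole output is treated as the answer.
    """
    if "</think>" in output:
        reasoning, answer = output.split("</think>", 1)
        return reasoning.replace("<think>", "").strip(), answer.strip()
    return "", output.strip()


sample = "<think>The image shows a red apple on a table.</think>A red apple."
reasoning, answer = split_reasoning(sample)
print(reasoning)  # The image shows a red apple on a table.
print(answer)     # A red apple.
```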

## Setup
```bash
pip install git+https://github.com/facebookresearch/schedule_free.git
pip install peft
git clone https://github.com/mkturkcan/seers.git
cd seers/seers/
git clone https://huggingface.co/mehmetkeremturkcan/DeepSeer-R1-Vision-Distill-Qwen-1.5B_google_vit-base-patch16-224
```
## Test
Run the included prediction script:
```bash
python predict.py
```
## Training Details
This model pairs a [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) vision encoder with a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B), trained on the [5CD-AI/LLaVA-CoT-o1-Instruct](https://huggingface.co/datasets/5CD-AI/LLaVA-CoT-o1-Instruct) dataset.
It has been trained using [seers](https://github.com/mkturkcan/seers).