base_model:
- google/vit-base-patch16-224
- deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
library_name: transformers
---

<p align="center">
  <img src="https://github.com/mkturkcan/deepseek-vlm/blob/main/assets/logo.png?raw=true" width="180" />
</p>

<h3 align="center">
  <p>Deepseek-VLM: Vision Language Models with Reasoning</p>
</h3>

Vision language models with chain-of-thought reasoning are just starting to emerge. Over the last few weeks, we have been building an easy-to-run training platform for vision models. Deepseek-VLM is a proof of concept showing that such a model can be trained in a short amount of time.

Note that this is only the first checkpoint of a model that is still under training.