Update README.md

README.md
* SeaLMMM-7B is one of the strongest 7B vision-language models at **text-only tasks**, with performance similar to [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2). It is a text-first-vision-second model.
* SeaLMMM-7B **is** able to handle most SEA languages, making it more multilingual than the English-only LLaVA, the bilingual (En+Zh) Qwen-VL, or Yi-VL.
* Unlike LLaVA or other specialized VLMs, which expect a single image at the beginning of the conversation, SeaLMMM-7B can seamlessly handle text-only exchanges at the beginning and visual instructions in the middle of a conversation, and supports topic and language switching.
* SeaLMMM-7B can perform multi-image generation and in-context visual learning; in this case, the [Better llava next](https://github.com/huggingface/transformers/pull/29850) PR should be applied to enable these features (see the sketch after this list).
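To illustrate, here is a minimal multi-image inference sketch using the `LlavaNextProcessor` and `LlavaNextForConditionalGeneration` classes from `transformers`, assuming a build that includes the PR above. The checkpoint id and the `[INST] ... [/INST]` prompt format are assumptions for illustration, not confirmed details of the SeaLMMM release.

```python
# Minimal sketch: multi-image inference with the Llava-NEXT classes in
# transformers. Requires a transformers build that includes the
# "Better llava next" PR linked above.
import requests
import torch
from PIL import Image
from transformers import LlavaNextForConditionalGeneration, LlavaNextProcessor

model_id = "SeaLLMs/SeaLMMM-7B-v0.1"  # hypothetical checkpoint id
processor = LlavaNextProcessor.from_pretrained(model_id)
model = LlavaNextForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Two images in one turn: one <image> placeholder per image in the prompt.
# The [INST] ... [/INST] format follows the Mistral-based Llava-1.6 models.
urls = [
    "https://llava-vl.github.io/static/images/view.jpg",
    "https://www.ilankelman.org/stopsigns/australia.jpg",
]
images = [Image.open(requests.get(u, stream=True).raw) for u in urls]
prompt = "[INST] <image>\n<image>\nWhat do these two images have in common? [/INST]"

inputs = processor(text=prompt, images=images, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))
```

Because the model is text-first, earlier text-only turns can simply be prepended to the prompt before the first `<image>` placeholder.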
### Release and DEMO
## Overview
SeaLMMM-7B-v0.1 is a multimodal extension of [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2).
It adopts the [LLaVA-1.6](https://huggingface.co/llava-hf/llava-v1.6-mistral-7b-hf) (LLaVA-NeXT) architecture.
It is trained by jointly training on SeaLLM's multilingual text-only datasets and LLaVA-1.5's English-only vision data, together with in-house synthetically generated multilingual multimodal vision data and open-source data such as [ThaiIDCardSynt](https://huggingface.co/datasets/matichon/ThaiIDCardSynt).
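As a rough illustration of such a data mix, the sketch below interleaves text-only and image-text examples at a fixed ratio; the class, the field names, and the ratio are hypothetical and do not reflect the actual SeaLLMs training recipe.

```python
# Illustrative sketch of a joint text-plus-vision data mix; all names and the
# sampling ratio are hypothetical, not the published SeaLLMs recipe.
import random
from torch.utils.data import Dataset


class JointTextVisionDataset(Dataset):
    """Samples text-only and image-text examples at a fixed ratio."""

    def __init__(self, text_examples, vision_examples, text_ratio=0.5):
        self.text = text_examples      # e.g. [{"messages": [...]}, ...]
        self.vision = vision_examples  # e.g. [{"messages": [...], "image": ...}, ...]
        self.text_ratio = text_ratio

    def __len__(self):
        return len(self.text) + len(self.vision)

    def __getitem__(self, idx):
        # Draw the source by ratio so multilingual text skills are not
        # washed out by the (mostly English) vision data.
        if random.random() < self.text_ratio:
            ex = random.choice(self.text)
            return {"messages": ex["messages"], "image": None}
        ex = random.choice(self.vision)
        return {"messages": ex["messages"], "image": ex["image"]}
```

A collator would then route `image is None` examples through the text-only path and the rest through the vision tower, so both objectives are optimized in the same run.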
### English Vision QA Tasks