---
library_name: transformers
tags: []
license: llama3.2
---


<a href="https://github.com/MLP-Lab/Bllossom">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/64a90711c05da19ca834f690/a0VE5UCY1HCEhaHtp3mGa.png" alt="image" width="30%" height="30%">
</a>

# Update!
* [2024.12.12] Additional note: We did not use any training, test, or similar data related to benchmarks such as KMMLU, KoBEST, or LogicKor. If you augment benchmark data and mix it into training, you can get close to SOTA performance, so feel free to try that on top of this model!
* [2024.12.06] Initial release of the Bllossom-AICA-5B model!


# Bllossom [Inference Code Example](https://drive.google.com/file/d/1AoxfoV0TSN-pGdc9fa3dRv3-NLZknHlJ/view?usp=sharing) | [Training Code Example](https://drive.google.com/file/d/1E_fYV-tUhl1LExm2piaIhvXfuOcNaZmP/view?usp=sharing) | [Tutorial Video](https://youtu.be/4lAUVwTN608)

```bash
We, the Bllossom team, are releasing Bllossom-AICA-5B, a Korean-English language model based on llama3.2-3B.
This Bllossom-AICA has the following features:
 - It is the first llama-based 3B-scale extended model that can be used both as a general language model and as a vision-language model. (It is the only Korean vision-language model that runs on Colab's free GPU.)
 - It works as a vision-language model when an image is provided and as a plain language model when it is not; both training and inference are possible in either mode.
 - Grounding on visual information substantially improved the language model's performance (more than 20% over the Bllossom-3.2-3B model in qualitative evaluation).
 - It is optimized for Korean OCR and for interpreting tables and charts.
 - It was trained for selective reasoning over external knowledge: when used with RAG, the model does not rely on retrieved information that is irrelevant to the question or contains errors. (A minimal prompt sketch follows right after this block.)

The data used for this model is as follows:
 - We full-tuned the model on nearly all Korean LLM pre-training data published on Huggingface.
 - We pre-trained the vision-language model on nearly all Korean vision-language training data published by AI-Hub, KISTI AI data, and Huggingface. (Too many to list them all...)
 - We used Korean vision-language instruction-tuning data built in-house by our lab.

As always, this model may be used commercially.

1. The external-knowledge reasoning feature of Bllossom-AICA will be presented at COLING 2025.
2. If a 3B-based model performs this well, aren't you curious what an 8B-based model can do? We will keep releasing better language models!
```
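
The selective external-knowledge (RAG) reasoning mentioned above can be exercised simply by placing retrieved passages in the user turn. This is a minimal sketch, assuming the same chat-message format as the examples further below; the passage list and the question are hypothetical placeholders, not an official prompt format from the Bllossom team:

```python
# Hypothetical RAG-style prompt sketch. The exact prompt format for external
# knowledge is not documented here, so this simply places retrieved passages
# in the user turn; `retrieved_docs` and the question are made-up placeholders.
retrieved_docs = [
    "Passage 1: ...",  # a passage relevant to the question
    "Passage 2: ...",  # an irrelevant or noisy passage the model should ignore
]
question = "Answer using only the passages that are actually relevant."

messages = [
    {"role": "user", "content": [
        {"type": "text",
         "text": "Refer to the following documents.\n"
                 + "\n".join(retrieved_docs)
                 + "\n\nQuestion: " + question}
    ]},
]
# From here, apply the chat template and generate exactly as in the
# language-model example below.
```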

```bash
We, the Bllossom team, are pleased to announce the release of Bllossom-Vision, a Korean-English vision-language model based on llama3.2. This Bllossom-Vision is a preview version and features the following:
 - It can be utilized both as a general language model and as a vision-language model.
 - It operates as a vision-language model when an image is provided, and as a language model when no image is provided. It is capable of both training and inference in both directions, whether as a vision-language or just a language model.
 - We have put significant effort into ensuring it remains faithful to the role of a vision-language model while maintaining the performance of a traditional language model as much as possible.
 - It is a fully bilingual model that does not compromise English performance at all.
```
**Bllossom is developed by [MLPLab at Seoultech](http://mlp.seoultech.ac.kr), [Teddysum](http://teddysum.ai/) and [Yonsei Univ](https://sites.google.com/view/hansaemkim/hansaem-kim)**


## Demo Video

<div style="display: flex; justify-content: space-between;">
  <!-- second column -->
  <div style="width: 100%;">
    <a>
      <img src="https://cdn-uploads.huggingface.co/production/uploads/64a90711c05da19ca834f690/BJu5VT_llvYkWk_mkF4x6.gif" style="width: 100%; height: auto;">
    </a>
    <p style="text-align: center;">Bllossom-AICA Demo</p>
  </div>
</div>


## LogicKor Score 
| Category | Single turn | Multi turn |
|---|---|---|
| 추론(Reasoning) | 6.57 | 5.29 |
| 수학(Math) | 6.43 | 6.29 |
| 글쓰기(Writing) | 9.14 | 8.71 |
| 코딩(Coding) | 8.00 | 9.14 |
| 이해(Understanding) | 8.14 | 9.29 |
| 문법(Grammar) | 6.71 | 4.86 |

| Category | Score |
|---|---|
| Single turn | 7.50 |
| Multi turn | 7.26 |
| Overall | 7.38 |

## Example code

### Python code (Use Vision-language Model)
```python
from transformers import MllamaForConditionalGeneration,MllamaProcessor
import torch
from PIL import Image
import requests

model = MllamaForConditionalGeneration.from_pretrained(
  'Bllossom/llama-3.2-Korean-Bllossom-AICA-5B',
  torch_dtype=torch.bfloat16,
  device_map='auto'
)
processor = MllamaProcessor.from_pretrained('Bllossom/llama-3.2-Korean-Bllossom-AICA-5B')

url = "https://t1.daumcdn.net/cfile/tistory/21527E4A543DCABE1D"
image = Image.open(requests.get(url, stream=True).raw)

messages = [
  {'role': 'user','content': [
    {'type':'image'},
    {'type': 'text','text': '이 문서를 마크다운으로 바꿔줘'}  # "Convert this document to markdown"
    ]},
  ]

input_text = processor.apply_chat_template(messages,tokenize=False,add_generation_prompt=True)

inputs = processor(
    images=image,
    text=input_text,
    add_special_tokens=False,
    return_tensors="pt",
).to(model.device)

output = model.generate(**inputs, max_new_tokens=256,temperature=0.1,eos_token_id=processor.tokenizer.convert_tokens_to_ids('<|eot_id|>'),use_cache=False)
print(processor.decode(output[0]))
```
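
If you only want the model's reply rather than the full prompt echoed back, one option (not part of the original example, but standard `transformers` usage) is to slice off the prompt tokens before decoding:

```python
# Decode only the newly generated tokens and skip special tokens.
# Assumes `inputs` and `output` from the example above.
prompt_len = inputs["input_ids"].shape[-1]
print(processor.decode(output[0][prompt_len:], skip_special_tokens=True))
```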

### Python code (Use Language Model)
```python
from transformers import MllamaForConditionalGeneration,MllamaProcessor
import torch

model = MllamaForConditionalGeneration.from_pretrained(
  'Bllossom/llama-3.2-Korean-Bllossom-AICA-5B',
  torch_dtype=torch.bfloat16,
  device_map='auto'
)
processor = MllamaProcessor.from_pretrained('Bllossom/llama-3.2-Korean-Bllossom-AICA-5B')

messages = [
  {'role': 'user','content': [
    {'type': 'text','text': '자연어처리 15주치 커리큘럼을 짜줘'}  # "Design a 15-week NLP curriculum"
    ]},
  ]

input_text = processor.apply_chat_template(messages,tokenize=False,add_generation_prompt=True)

inputs = processor(
    images=None,
    text=input_text,
    add_special_tokens=False,
    return_tensors="pt",
).to(model.device)

output = model.generate(**inputs,max_new_tokens=256,temperature=0.1,eos_token_id=processor.tokenizer.convert_tokens_to_ids('<|eot_id|>'),use_cache=False)
print(processor.decode(output[0]))
```
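
The feature list above mentions that the model fits on Colab's free GPU. If memory is tight, a common option is to load the weights in 4-bit. The following is a sketch under the assumption that the `bitsandbytes` package is installed on the GPU runtime; it is not an official recipe from the Bllossom team:

```python
from transformers import MllamaForConditionalGeneration, MllamaProcessor, BitsAndBytesConfig
import torch

# 4-bit quantized loading (assumption: bitsandbytes is available on the GPU runtime).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = MllamaForConditionalGeneration.from_pretrained(
    'Bllossom/llama-3.2-Korean-Bllossom-AICA-5B',
    quantization_config=bnb_config,
    device_map='auto',
)
processor = MllamaProcessor.from_pretrained('Bllossom/llama-3.2-Korean-Bllossom-AICA-5B')
# Generation then works exactly as in the examples above.
```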


## Supported by

 - AICA  <img src="https://aica-gj.kr/images/logo.png" width="20%" height="20%">

## Citation

**Vision-Language Model**
```text
@misc{VLR-Bench,
  author = {Hyeonseok Lim and Dongjae Shin and Seohyun Song and Inho Won and Minjun Kim and Junghun Yuk and Hangyeol Yoo and Haneol Jang and Kyungtae Lim},
  title = {VLR-Bench: Multilingual Benchmark Dataset for Vision-Language Retrieval Augmented Generation},
  year = {2025},
  publisher = {GitHub},
  journal = {COLING 2025},
  paperLink = {\url{https://arxiv.org/abs/2412.10151}}
}
```

```text
@misc{bllossom-V,
  author = {Dongjae Shin and Hyeonseok Lim and Inho Won and Changsu Choi and Minjun Kim and Seungwoo Song and Hangyeol Yoo and Sangmin Kim and Kyungtae Lim},
  title = {X-LLaVA: Optimizing Bilingual Large Vision-Language Alignment},
  year = {2024},
  publisher = {GitHub},
  journal = {NAACL 2024 findings},
  paperLink = {\url{https://arxiv.org/pdf/2403.11399}}
}
```
**Language Model**
```text
@misc{bllossom,
  author = {ChangSu Choi and Yongbin Jeong and Seoyoon Park and InHo Won and HyeonSeok Lim and SangMin Kim and Yejee Kang and Chanhyuk Yoon and Jaewan Park and Yiseul Lee and HyeJin Lee and Younggyun Hahm and Hansaem Kim and KyungTae Lim},
  title = {Optimizing Language Augmentation for Multilingual Large Language Models: A Case Study on Korean},
  year = {2024},
  journal = {LREC-COLING 2024},
  paperLink = {\url{https://arxiv.org/pdf/2403.10882}}
}
```

## Contact
 - 임경태(KyungTae Lim), Professor at Seoultech. `[email protected]`
 - 함영균(Younggyun Hahm), CEO of Teddysum. `[email protected]`
 - 김한샘(Hansaem Kim), Professor at Yonsei. `[email protected]`

## Contributor
 - **신동재(Dongjae Shin)**, [email protected]
 - **유한결(Hangyeol Yoo)**, [email protected]
 - **임현석(Hyeonseok Lim)**, [email protected]