---
language:
- en
- zh
- vi
- id
- th
- ms
- km
- lo
- my
- tl
license: other
tags:
- multilingual
- sea
license_name: seallms
license_link: https://huggingface.co/SeaLLMs/SeaLLM-13B-Chat/blob/main/LICENSE
---
<p align="center">
<img src="seal_logo.png" width="200" />
</p>
# *SeaLLM-7B-v2.5* - Large Language Models for Southeast Asia
<p align="center">
<a href="https://damo-nlp-sg.github.io/SeaLLMs/" target="_blank" rel="noopener">Website</a>
<a href="https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5" target="_blank" rel="noopener"> 🤗 Tech Memo</a>
<a href="https://huggingface.co/spaces/SeaLLMs/SeaLLM-7B-v2.5" target="_blank" rel="noopener"> 🤗 DEMO</a>
<a href="https://github.com/DAMO-NLP-SG/SeaLLMs" target="_blank" rel="noopener">Github</a>
<a href="https://arxiv.org/pdf/2312.00738.pdf" target="_blank" rel="noopener">Technical Report</a>
</p>
🔥<span style="color: #ff3860">[HOT]</span> SeaLLMs project now has a dedicated website - [damo-nlp-sg.github.io/SeaLLMs](https://damo-nlp-sg.github.io/SeaLLMs/)
We introduce [SeaLLM-7B-v2.5](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5), the state-of-the-art multilingual LLM for Southeast Asian (SEA) languages 🇬🇧 🇨🇳 🇻🇳 🇮🇩 🇹🇭 🇲🇾 🇰🇭 🇱🇦 🇲🇲 🇵🇭. It is the most significant upgrade since [SeaLLM-13B](https://huggingface.co/SeaLLMs/SeaLLM-13B-Chat): at half the size, it delivers stronger performance across diverse multilingual tasks, including world knowledge, math reasoning, and instruction following.
### Highlights
* [SeaLLM-7B-v2.5](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5) outperforms GPT-3.5 and achieves 7B SOTA on most multilingual knowledge benchmarks for SEA languages (MMLU, M3Exam & VMLU).
* It achieves 79.0 on GSM8K and 34.9 on MATH, surpassing GPT-3.5 on MATH.
### Release and DEMO
- DEMO:
- [SeaLLMs/SeaLLM-7B-v2.5](https://huggingface.co/spaces/SeaLLMs/SeaLLM-7B-v2.5).
- [SeaLLMs/SeaLLM-7B | SeaLMMM-7B](https://huggingface.co/spaces/SeaLLMs/SeaLLM-7B) - Experimental multimodal SeaLLM.
- Technical report: [Arxiv: SeaLLMs - Large Language Models for Southeast Asia](https://arxiv.org/pdf/2312.00738.pdf).
- Model weights:
- [SeaLLM-7B-v2.5](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5).
- [SeaLLM-7B-v2.5-GGUF](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5-GGUF).
- Run locally:
- [LM-studio](https://lmstudio.ai/):
- [SeaLLM-7B-v2.5-q4_0-chatml](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5-GGUF/blob/main/seallm-7b-v2.5-chatml.Q4_K_M.gguf) with ChatML template (`<eos>` token changed to `<|im_end|>`)
- [SeaLLM-7B-v2.5-q4_0](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5-GGUF/blob/main/seallm-7b-v2.5.Q4_K_M.gguf) - must use the SeaLLM-7B-v2.5 chat format (a minimal local-run sketch is shown after this list).
- [MLX for Apple Silicon](https://github.com/ml-explore/mlx): [SeaLLMs/SeaLLM-7B-v2.5-mlx-quantized](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5-mlx-quantized)
- Previous models:
- [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2)
- [SeaLLM-7B-v1](https://huggingface.co/SeaLLMs/SeaLLM-7B-v1)
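For running the GGUF weights locally, here is a minimal sketch using the third-party `llama-cpp-python` bindings; this is not an official SeaLLMs example, and the file name, context size, and generation settings are illustrative assumptions. The prompt must follow the SeaLLM-7B-v2.5 chat format described in the Usage section below.

```python
# Minimal local-run sketch using the third-party llama-cpp-python bindings (an assumption,
# not an official SeaLLMs example). File name and generation settings are illustrative.
from llama_cpp import Llama

llm = Llama(model_path="seallm-7b-v2.5.Q4_K_M.gguf", n_ctx=4096)

# SeaLLM-7B-v2.5 chat format (see the Usage section below).
# NOTE: ensure exactly one <bos> is prepended (see the IMPORTANT NOTICE under Usage).
prompt = "<|im_start|>user\nHello world<eos>\n<|im_start|>assistant\n"

out = llm(
    prompt,
    max_tokens=512,
    repeat_penalty=1.0,              # per the notice below: repetition penalty must be 1
    stop=["<eos>", "<|im_start|>"],  # stop at end-of-turn markers
)
print(out["choices"][0]["text"])
```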
<blockquote style="color:red">
<p><strong style="color: red">Terms of Use and License</strong>:
By using our released weights, codes, and demos, you agree to and comply with the terms and conditions specified in our <a href="https://huggingface.co/SeaLLMs/SeaLLM-13B-Chat/blob/main/LICENSE" target="_blank" rel="noopener">SeaLLMs Terms Of Use</a>.</p>
</blockquote>
> **Disclaimer**:
> We must note that even though the weights, codes, and demos are released in an open manner, similar to other pre-trained language models, and despite our best efforts in red teaming and safety fine-tuning and enforcement, our models come with potential risks, including but not limited to inaccurate, misleading or potentially harmful generation.
> Developers and stakeholders should perform their own red teaming and provide related security measures before deployment, and they must abide by and comply with local governance and regulations.
> In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights, codes, or demos.
> The logo was generated by DALL-E 3.
### What's new since SeaLLM-7B-v2?
* SeaLLM-7B-v2.5 is built on top of Gemma-7B and underwent large-scale SFT and carefully designed alignment.
## Evaluation
### Multilingual World Knowledge
We evaluate models on 3 benchmarks following the recommended default setups: 5-shot MMLU for En, 3-shot [M3Exam](https://arxiv.org/pdf/2306.05179.pdf) (M3e) for En, Zh, Vi, Id, Th, and zero-shot [VMLU](https://vmlu.ai/) for Vi.
| Model | Langs | En<br>MMLU | En<br>M3e | Zh<br>M3e | Vi<br>M3e | Vi<br>VMLU | Id<br>M3e | Th<br>M3e
|-----| ----- | --- | -- | ----- | ---- | --- | --- | --- |
| GPT-3.5 | Multi | 68.90 | 75.46 | 60.20 | 58.64 | 46.32 | 49.27 | 37.41
| Vistral-7B-chat | Mono | 56.86 | 67.00 | 44.56 | 54.33 | 50.03 | 36.49 | 25.27
| Qwen1.5-7B-chat | Multi | 61.00 | 52.07 | 81.96 | 43.38 | 45.02 | 24.29 | 20.25
| SailorLM | Multi | 52.72 | 59.76 | 67.74 | 50.14 | --- | 39.53 | 37.73
| SeaLLM-7B-v2 | Multi | 61.89 | 70.91 | 55.43 | 51.15 | 45.74 | 42.25 | 35.52
| SeaLLM-7B-v2.5 | Multi | 64.05 | 76.87 | 62.54 | 63.11 | 53.30 | 48.64 | 46.86
### Zero-shot CoT Multilingual Math Reasoning
<!--
[SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) achieves with **78.5** score on the GSM8K with zero-shot CoT reasoning, making it the **state of the art** in the realm of 7B models. It also outperforms GPT-3.5 in the same GSM8K benchmark as translated into SEA languages (๐จ๐ณ ๐ป๐ณ ๐ฎ๐ฉ ๐น๐ญ). [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) also surpasses GPT-3.5 on the Thai-translated MATH benchmark, with **28.4** vs 18.1 scores.
![fig_sea_math_side_by_side.png](fig_sea_math_side_by_side.png)
-->
| Model | GSM8K<br>en | MATH<br>en | GSM8K<br>zh | MATH<br>zh | GSM8K<br>vi | MATH<br>vi | GSM8K<br>id | MATH<br>id | GSM8K<br>th | MATH<br>th
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPT-3.5 | 80.8 | 34.1 | 48.2 | 21.5 | 55 | 26.5 | 64.3 | 26.4 | 35.8 | 18.1
| Qwen-14B-chat | 61.4 | 18.4 | 41.6 | 11.8 | 33.6 | 3.6 | 44.7 | 8.6 | 22 | 6.0
| Vistral-7b-chat | 48.2 | 12.5 | | | 48.7 | 3.1 | | | |
| Qwen1.5-7B-chat | 56.8 | 15.3 | 40.0 | 2.7 | 37.7 | 9 | 36.9 | 7.7 | 21.9 | 4.7
| SeaLLM-7B-v2 | 78.2 | 27.5 | 53.7 | 17.6 | 69.9 | 23.8 | 71.5 | 24.4 | 59.6 | 22.4
| SeaLLM-7B-v2.5 | 78.5 | 34.9 | 51.3 | 22.1 | 72.3 | 30.2 | 71.5 | 30.1 | 62.0 | 28.4
Baselines were evaluated using their respective chat-template and system prompts ([Qwen1.5-7B-chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat/blob/main/tokenizer_config.json), [Vistral](https://huggingface.co/Viet-Mistral/Vistral-7B-Chat)).
#### Zero-shot MGSM
[SeaLLM-7B-v2.5](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5) also outperforms GPT-3.5 and Qwen-14B on the multilingual MGSM for Thai.
| Model | MGSM-Zh | MGSM-Th
|-----| ----- | ---
| ChatGPT (reported) | 61.2 | 47.2
| Qwen-14B-chat | 59.6 | 28
| SeaLLM-7B-v2 | **64.8** | 62.4
| SeaLLM-7B-v2.5 | 58.0 | **64.8**
### Sea-Bench
![fig_sea_bench_side_by_side.png](fig_sea_bench_side_by_side.png)
### Usage
**IMPORTANT NOTICE for using the model**
* `<bos>` must be at the start of the prompt. If your code's tokenizer does not prepend `<bos>` by default, you MUST prepend `<bos>` to the prompt yourself, otherwise it will not work!
* Repetition penalty (e.g. in llama.cpp, ollama, LM-studio) must be set to **1**, otherwise generation will degenerate! A quick check for both points is sketched below.
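A minimal sanity-check sketch for both points, assuming the Hugging Face tokenizer for this model; the prompt string here is illustrative.

```python
# Minimal sanity check: ensure exactly one <bos> at the start of the encoded prompt.
# Assumption: Hugging Face tokenizer for this model; the prompt string is illustrative.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("SeaLLMs/SeaLLM-7B-v2.5")
prompt = "<|im_start|>user\nHello world<eos>\n<|im_start|>assistant\n"

ids = tokenizer.encode(prompt)
if ids[0] != tokenizer.bos_token_id:
    # Your tokenizer did not prepend <bos>; add it exactly once yourself.
    ids = [tokenizer.bos_token_id] + ids
assert ids.count(tokenizer.bos_token_id) == 1
print(tokenizer.convert_ids_to_tokens(ids))

# When generating with transformers, keep the repetition penalty at its neutral value,
# e.g. model.generate(..., repetition_penalty=1.0).
```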
#### Instruction format
```python
# ! WARNING: if your code's tokenizer does not prepend <bos> by default,
# you MUST prepend <bos> to the prompt yourself, otherwise it will not work!
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("SeaLLMs/SeaLLM-7B-v2.5")

prompt = """<|im_start|>system
You are a helpful assistant.<eos>
<|im_start|>user
Hello world<eos>
<|im_start|>assistant
Hi there, how can I help?<eos>"""

# <|im_start|> is not a special token.
# The transformers chat_template should be consistent with the vLLM format below.
# ! ENSURE one and only one `<bos>` at the beginning of the sequence.
print(tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt)))
# The printed token list should start with exactly one '<bos>'.
```
#### Using transformers's chat_template
Install the latest transformers (>= 4.40), e.g. `pip install "transformers>=4.40"`.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

# use bfloat16 to ensure the best performance.
model = AutoModelForCausalLM.from_pretrained("SeaLLMs/SeaLLM-7B-v2.5", torch_dtype=torch.bfloat16, device_map=device)
tokenizer = AutoTokenizer.from_pretrained("SeaLLMs/SeaLLM-7B-v2.5")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello world"},
    {"role": "assistant", "content": "Hi there, how can I help you today?"},
    {"role": "user", "content": "Explain general relativity in details."}
]

encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True)
print(tokenizer.convert_ids_to_tokens(encodeds[0]))

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True, pad_token_id=tokenizer.pad_token_id)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
#### Using vLLM
```python
from vllm import LLM, SamplingParams

TURN_TEMPLATE = "<|im_start|>{role}\n{content}<eos>\n"
TURN_PREFIX = "<|im_start|>{role}\n"

def seallm_chat_convo_format(conversations, add_assistant_prefix: bool, system_prompt=None):
    # conversations: list of dicts with keys `role` and `content` (OpenAI format)
    if conversations[0]['role'] != 'system' and system_prompt is not None:
        conversations = [{"role": "system", "content": system_prompt}] + conversations
    text = ''
    for turn_id, turn in enumerate(conversations):
        prompt = TURN_TEMPLATE.format(role=turn['role'], content=turn['content'])
        text += prompt
    if add_assistant_prefix:
        prompt = TURN_PREFIX.format(role='assistant')
        text += prompt
    return text

sparams = SamplingParams(temperature=0.1, max_tokens=1024, stop=['<eos>', '<|im_start|>'])
llm = LLM("SeaLLMs/SeaLLM-7B-v2.5", dtype="bfloat16")

conversations = [{"role": "user", "content": "Explain general relativity in details."}]
prompt = seallm_chat_convo_format(conversations, add_assistant_prefix=True)
gen = llm.generate(prompt, sparams)
print(gen[0].outputs[0].text)
```
#### Fine-tuning SeaLLM-7B-v2.5
Fine-tuning should follow the chat format above and accurately mask out the source (non-assistant) tokens. Here is an example.
```python
conversations = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello world."},
    {"role": "assistant", "content": "Hi there, how can I help?"},
    {"role": "user", "content": "Tell me a joke."},
    {"role": "assistant", "content": "Why don't scientists trust atoms? Because they make up everything."},
]

def seallm_7b_v25_tokenize_multi_turns(tokenizer, conversations, add_assistant_prefix=False):
    """
    Inputs:
        conversations: list of dicts following the OpenAI format, e.g.
            conversations = [
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": "Hello world."},
                {"role": "assistant", "content": "Hi there, how can I help?"},
                {"role": "user", "content": "Tell me a joke."},
                {"role": "assistant", "content": "Why don't scientists trust atoms? Because they make up everything."},
            ]
        add_assistant_prefix: whether to add the assistant prefix, only for inference decoding
    Outputs:
        tokenize_output_sample, {
            "input_ids": ...
            "token_type_ids": 1 if trained on and 0 if masked out (not trained on)
        }
    During training, you need to create labels with masked-out tokens set to -100 to avoid loss computation on them:
        labels = sample['input_ids'].clone()
        labels[sample['token_type_ids'] == 0] = -100
    """
    TURN_TEMPLATE = "<|im_start|>{role}\n{content}<eos>\n"
    TURN_PREFIX = "<|im_start|>{role}\n"
    TURN_SUFFIX = "<eos>\n"
    TURN_SUFFIX_TAKE = "<eos>"
    sample = None
    assistant_prefix_len = None
    assistant_suffix_len = None
    for turn_id, turn in enumerate(conversations):
        prompt = TURN_TEMPLATE.format(role=turn['role'], content=turn['content'])
        turn_sample = tokenizer(
            prompt, padding=False, truncation=False, verbose=False, add_special_tokens=False,
            return_token_type_ids=True,
        )
        if turn['role'] == 'assistant':
            if assistant_prefix_len is None:
                assistant_prefix_len = len(tokenizer.encode(TURN_PREFIX.format(role=turn['role']), add_special_tokens=False))
            if assistant_suffix_len is None:
                assistant_suffix_len = (
                    len(tokenizer.encode(TURN_SUFFIX.format(role=turn['role']), add_special_tokens=False)) -
                    len(tokenizer.encode(TURN_SUFFIX_TAKE, add_special_tokens=False))
                )
            # mark assistant content tokens (plus the trailing <eos>) as trainable
            turn_sample['token_type_ids'][assistant_prefix_len:-assistant_suffix_len] = [1] * (len(turn_sample['input_ids']) - assistant_prefix_len - assistant_suffix_len)
        if sample is None:
            sample = turn_sample
        else:
            for k in turn_sample.keys():
                sample[k].extend(turn_sample[k])
    if add_assistant_prefix:
        assistant_prefix_sample = tokenizer(
            TURN_PREFIX.format(role="assistant"), padding=False, truncation=False, verbose=False, add_special_tokens=False,
            return_token_type_ids=True,
        )
        for k in sample.keys():
            sample[k].extend(assistant_prefix_sample[k])
    if tokenizer.add_bos_token:
        sample['input_ids'] = [tokenizer.bos_token_id] + sample['input_ids']
        sample['attention_mask'] = [1] + sample['attention_mask']
        sample['token_type_ids'] = [sample['token_type_ids'][0]] + sample['token_type_ids']
    return sample

# ! testing
sample = seallm_7b_v25_tokenize_multi_turns(tokenizer, conversations)
tokens = tokenizer.convert_ids_to_tokens(sample['input_ids'])
pairs = [(x, y) for x, y in zip(tokens, sample['token_type_ids'])]
print(pairs)
# source and special tokens are masked out (token_type 0); only assistant content plus <eos> is trained (token_type 1)
# [('<bos>', 0), ('<', 0), ('|', 0), ..., ('assistant', 0), ('\n', 0), ('Hi', 1), ('▁there', 1), (',', 1), ('▁how', 1), ('▁can', 1), ('▁I', 1), ('▁help', 1), ('?', 1), ('<eos>', 1), ('\n', 0), ('<', 0), ...
```
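Following the docstring above, here is a minimal sketch of turning the tokenized sample into training labels; it assumes PyTorch and the `model`/`tokenizer` loaded as in the transformers section, and all other names are illustrative.

```python
# Minimal sketch (assumptions: PyTorch, and `model`/`tokenizer` loaded as in the
# transformers section above). Only assistant tokens (token_type_ids == 1) are trained on.
import torch

sample = seallm_7b_v25_tokenize_multi_turns(tokenizer, conversations)

input_ids = torch.tensor(sample['input_ids'], device=model.device).unsqueeze(0)
attention_mask = torch.tensor(sample['attention_mask'], device=model.device).unsqueeze(0)
token_type_ids = torch.tensor(sample['token_type_ids'], device=model.device).unsqueeze(0)

labels = input_ids.clone()
labels[token_type_ids == 0] = -100  # -100 is ignored by the cross-entropy loss in transformers

outputs = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels)
print(outputs.loss)  # loss computed only on assistant tokens and their trailing <eos>
```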
## Acknowledgement to Our Linguists
We would like to express our special thanks to our professional and native linguists, Tantong Champaiboon, Nguyen Ngoc Yen Nhi and Tara Devina Putri, who helped build, evaluate, and fact-check our sampled pretraining and SFT datasets, as well as evaluate our models across different aspects, especially safety.
## Citation
If you find our project useful, we hope you would kindly star our repo and cite our work as follows. Corresponding author: [[email protected]](mailto:[email protected]).
**Author list and order will change!**
* `*` and `^` are equal contributions.
```
@article{damonlpsg2023seallm,
author = {Xuan-Phi Nguyen*, Wenxuan Zhang*, Xin Li*, Mahani Aljunied*, Weiwen Xu, Hou Pong Chan,
Zhiqiang Hu, Chenhui Shen^, Yew Ken Chia^, Xingxuan Li, Jianyu Wang,
Qingyu Tan, Liying Cheng, Guanzheng Chen, Yue Deng, Sen Yang,
Chaoqun Liu, Hang Zhang, Lidong Bing},
title = {SeaLLMs - Large Language Models for Southeast Asia},
year = 2023,
Eprint = {arXiv:2312.00738},
}
```