---
license: llama3
language:
- en
- th
---
**Typhoon-v1.5-8B: Thai Large Language Model (Instruct)**

**Typhoon-v1.5-8B-instruct** is an *instruct* Thai 🇹🇭 large language model with 8 billion parameters, based on Llama3-8B.

![Typhoon 1.5 8b benchmark](https://storage.googleapis.com/typhoon-public/assets/1.5-8b-benchmark.png)

For the release post, please see our blog.

## **Model Description**

- **Model type**: An 8B instruct decoder-only model based on the Llama architecture.
- **Requirement**: transformers 4.38.0 or newer.
- **Primary Language(s)**: Thai 🇹🇭 and English 🇬🇧
- **License**: [Llama 3 Community License](https://llama.meta.com/llama3/license/)
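
The transformers version requirement above can be checked at run time. The sketch below is a minimal, dependency-free comparison of dotted version strings (for production code, `packaging.version.parse` handles pre-release suffixes more robustly):

```python
# Minimal guard for the card's requirement: transformers >= 4.38.0.
def version_tuple(v):
    # "4.38.0" -> (4, 38, 0); extra components beyond the first three are ignored.
    return tuple(int(part) for part in v.split(".")[:3])

def meets_requirement(installed, required="4.38.0"):
    return version_tuple(installed) >= version_tuple(required)

print(meets_requirement("4.40.1"))  # True
print(meets_requirement("4.37.2"))  # False
```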

## **Performance**

| Model | ONET | IC | TGAT | TPAT-1 | A-Level | Average (ThaiExam) | M3Exam | MMLU |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Typhoon-1.0 (mistral) | 0.379 | 0.393 | 0.700 | 0.414 | 0.324 | 0.442 | 0.391 | 0.5469 |
| Typhoon-1.5 8B (llama3) | ***0.446*** | ***0.431*** | ***0.722*** | ***0.526*** | ***0.407*** | ***0.5028*** | ***0.46*** | ***0.6136*** |
| Sailor 7B | 0.372 | 0.379 | 0.678 | 0.405 | 0.396 | 0.446 | 0.411 | 0.5528 |
| SeaLLM 2.0 7B | 0.327 | 0.311 | 0.656 | 0.414 | 0.321 | 0.4058 | 0.354 | 0.5792 |
| OpenThaiGPT 1.0.0b 7B | 0.238 | 0.249 | 0.444 | 0.319 | 0.289 | 0.3078 | 0.268 | 0.3693 |
| SambaLingo-Thai-Chat 7B | 0.251 | 0.241 | 0.522 | 0.302 | 0.262 | 0.3156 | 0.309 | 0.3879 |

## Usage Example

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "scb10x/typhoon-v1.5-8b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant who always speaks Thai."},
    {"role": "user", "content": "ขอสูตรไก่ย่าง"},  # "Please give me a grilled-chicken recipe."
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Stop on either the model's EOS token or Llama 3's end-of-turn token.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.4,
    top_p=0.9,
)
# Decode only the newly generated tokens, not the prompt.
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```

## Chat Template

We use the Llama 3 chat template:

```jinja
{% set loop_messages = messages %}{% for message in loop_messages %}{% set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|eot_id|>' %}{% if loop.index0 == 0 %}{% set content = bos_token + content %}{% endif %}{{ content }}{% endfor %}{% if add_generation_prompt %}{{ '<|start_header_id|>assistant<|end_header_id|>\n\n' }}{% endif %}
```
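
As a concrete sketch of what this template renders, the same prompt can be assembled by hand in plain Python (the BOS token `<|begin_of_text|>` is an assumption based on Llama 3's tokenizer; in practice, use `tokenizer.apply_chat_template` as shown in the usage example):

```python
# Mirror the Jinja chat template above in plain Python.
def render_prompt(messages, bos_token="<|begin_of_text|>", add_generation_prompt=True):
    pieces = []
    for i, message in enumerate(messages):
        # Each turn: role header, blank line, trimmed content, end-of-turn token.
        content = (
            "<|start_header_id|>" + message["role"] + "<|end_header_id|>\n\n"
            + message["content"].strip() + "<|eot_id|>"
        )
        if i == 0:
            content = bos_token + content  # BOS only before the first message
        pieces.append(content)
    if add_generation_prompt:
        # Open the assistant turn so generation continues from here.
        pieces.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(pieces)

prompt = render_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello"},
])
print(prompt)
```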

## **Intended Uses & Limitations**

This is an instruct model; however, it is still under development. It incorporates some guardrails, but it may still produce answers that are inaccurate, biased, or otherwise objectionable in response to user prompts. We recommend that developers assess these risks in the context of their use case.

## **Follow us**

**https://twitter.com/opentyphoon**

## **Support**

**https://discord.gg/CqyBscMFpg**

## **SCB10X AI Team**

- Kunat Pipatanakul, Potsawee Manakul, Sittipong Sripaisarnmongkol, Pathomporn Chokchainant, Kasima Tharnpipitchai
- If you find Typhoon-8B useful for your work, please cite it using:

```bibtex
@article{pipatanakul2023typhoon,
  title={Typhoon: Thai Large Language Models},
  author={Kunat Pipatanakul and Phatrasek Jirabovonvisut and Potsawee Manakul and Sittipong Sripaisarnmongkol and Ruangsak Patomwong and Pathomporn Chokchainant and Kasima Tharnpipitchai},
  year={2023},
  journal={arXiv preprint arXiv:2312.13951},
  url={https://arxiv.org/abs/2312.13951}
}
```

## **Contact Us**

- General & Collaboration: **[[email protected]](mailto:[email protected])**, **[[email protected]](mailto:[email protected])**
- Technical: **[[email protected]](mailto:[email protected])**