Safetensors · qwen2 · reasoning

ptrdvn committed · verified · Commit 4712fef · 1 Parent(s): baec9b6

Update README.md

Files changed (1): README.md (+249 −53)

README.md CHANGED
@@ -1,73 +1,269 @@
  ---
- library_name: transformers
- license: other
- base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
  tags:
- - llama-factory
- - full
- - generated_from_trainer
- model-index:
- - name: reasoning-multilingual-R1-Llama-70B-train
-   results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- # reasoning-multilingual-R1-Llama-70B-train

- This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-14B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) on the reasoning-multilingual-R1-Llama-70B-train dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.4441

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

- ## Training procedure

- ### Training hyperparameters

- The following hyperparameters were used during training:
- - learning_rate: 1e-05
- - train_batch_size: 1
- - eval_batch_size: 1
- - seed: 42
- - distributed_type: multi-GPU
- - num_devices: 8
- - total_train_batch_size: 8
- - total_eval_batch_size: 8
- - optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_ratio: 0.01
- - num_epochs: 1.0

- ### Training results

- | Training Loss | Epoch  | Step | Validation Loss |
- |:-------------:|:------:|:----:|:---------------:|
- | 0.4962        | 0.1019 | 11   | 0.5000          |
- | 0.5313        | 0.2037 | 22   | 0.4791          |
- | 0.4692        | 0.3056 | 33   | 0.4685          |
- | 0.3876        | 0.4074 | 44   | 0.4595          |
- | 0.4768        | 0.5093 | 55   | 0.4542          |
- | 0.4985        | 0.6111 | 66   | 0.4496          |
- | 0.4687        | 0.7130 | 77   | 0.4465          |
- | 0.4484        | 0.8148 | 88   | 0.4449          |
- | 0.4809        | 0.9167 | 99   | 0.4442          |

- ### Framework versions

- - Transformers 4.48.1
- - Pytorch 2.5.1+cu124
- - Datasets 3.1.0
- - Tokenizers 0.21.0
  ---
+ language:
+ - am
+ - ar
+ - bn
+ - zh
+ - cs
+ - nl
+ - en
+ - fr
+ - de
+ - el
+ - ha
+ - he
+ - hi
+ - id
+ - it
+ - ja
+ - jv
+ - km
+ - ko
+ - lo
+ - ms
+ - mr
+ - fa
+ - pl
+ - pt
+ - ro
+ - ru
+ - es
+ - sw
+ - sv
+ - tl
+ - ta
+ - te
+ - th
+ - tr
+ - uk
+ - ur
+ - vi
+ license: apache-2.0
+ datasets:
+ - lightblue/reasoning-multilingual-R1-Llama-70B-train
  tags:
+ - reasoning
  ---

+ # lightblue/DeepSeek-R1-Distill-Qwen-14B-Multilingual

+ <div style="width: 100%; height: 160px;
+ display: flex; align-items: center;
+ justify-content: center;
+ border: 8px solid black;
+ font-size: 120px; font-weight: bold;
+ text-align: center;
+ color: #438db8;
+ font-family: 'Helvetica Neue', sans-serif;">
+ <span style="color: #438db8;">R1</span>
+ &nbsp;
+ <span style="color: blue;">m</span>
+ <span style="color: green;">u</span>
+ <span style="color: purple;">l</span>
+ <span style="color: yellow;">t</span>
+ <span style="color: pink;">i</span>
+ <span style="color: cyan;">l</span>
+ <span style="color: magenta;">i</span>
+ <span style="color: lime;">n</span>
+ <span style="color: teal;">g</span>
+ </div>

+ This is a fine-tune of a DeepSeek R1 distill, trained on multilingual Chain-of-Thought (CoT) data.
+ When this model is prompted in a language, it will both think and respond in that language, unlike the original R1, which often thinks in Chinese or English regardless of the prompt language.
+ This makes the outputs of these AIs more understandable and explainable to a wider audience.
+ We hope this will be useful to the AI community, particularly those developing for languages other than English and Chinese.

+ This model is a multilingual fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-14B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B).

+ Other fine-tuned versions of this model can be found in [our collection, here](https://huggingface.co/collections/lightblue/r1-multilingual-679c890166ac0a84e83e38fa).

+ This model was trained using our [lightblue/reasoning-multilingual-R1-Llama-70B-train](https://huggingface.co/datasets/lightblue/reasoning-multilingual-R1-Llama-70B-train) dataset for ~10 minutes on an 8 x L20 instance ([ecs.gn8is-8x.32xlarge](https://www.alibabacloud.com/help/en/ecs/user-guide/gpu-accelerated-compute-optimized-and-vgpu-accelerated-instance-families-1)) on [Alibaba Cloud](https://www.alibabacloud.com/).

+ # How to use

+ When using these models, we recommend using a sampling temperature of between 0.5 and 0.7, [as per the original distilled R1 models](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B#usage-recommendations).

+ Additionally, we have observed that the model sometimes tends to repeat itself for more niche languages, so we also recommend setting `repetition_penalty` to 1.1, or higher if the model repeats itself when processing your prompts.

+ We include scripts to use this model in vLLM:

+ <ul>
+ <li><b>vLLM</b>

+ Install [vLLM](https://github.com/vllm-project/vllm/) using `pip install vllm`.

+ <details open>
+ <summary>Show vLLM code</summary>
+
+ ```python
+ from vllm import LLM, SamplingParams
+
+ llm = LLM(
+     model="lightblue/DeepSeek-R1-Distill-Qwen-14B-Multilingual",
+     max_model_len=8_000
+ )
+
+ sampling_params = SamplingParams(
+     temperature=0.5,
+     max_tokens=8_000
+ )
+
+ # A Japanese math word problem: a school has 3 classes of 20 students each,
+ # with 50% boys and 50% girls overall; class 1 has 15 girls and class 2 has
+ # 12 girls. How many boys are in class 3?
+ prompts = [
+     """学校には1クラスにつき20人の生徒がおり、クラスは合計3つあります。
+ 学校全体では男子と女子がそれぞれ50%ずついます。
+ 1つ目のクラスには女子が15人、2つ目のクラスには女子が12人います。
+ 3つ目のクラスには何人の男子がいますか?"""
+ ]
+
+ conversations = [
+     [{"role": "user", "content": x}] for x in prompts
+ ]
+
+ outputs = llm.chat(conversations, sampling_params=sampling_params)
+
+ for output in outputs:
+     print(output.outputs[0].text)
+
+ # Sample output (the model thinks and answers in Japanese):
+ # <think>
+ # まず、学校の総生徒数を算出します。各クラスに20人の生徒があり、クラスは3つあるため、総生徒数は60人です。
+
+ # 次に、学校全体で男子と女子は同じ人数で分布しています。したがって、男子と女子各有30人。
+ ...
+ # したがって、3つ目のクラスの男子数は20 - 3 = 17人です。
+ # </think>
+
+ # **解答:**
+
+ # 学校の総生徒数を算出します。
+ ...
+ # **最終的な答え:**
+ # \[
+ # \boxed{17}
+ # \]
+ ```
+
+ </details></li>
+ </ul>
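The sample output above wraps the model's reasoning in `<think>` tags before the final answer. A minimal helper for splitting the two is sketched below; this is an editorial illustration based on the tag format shown in the example output, not part of the original card:

```python
import re


def split_reasoning(text: str):
    """Split a response into (reasoning, answer) using the
    <think>...</think> wrapper shown in the sample output."""
    m = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if m is None:
        # No think block: treat the whole response as the answer.
        return "", text.strip()
    return m.group(1).strip(), text[m.end():].strip()


reasoning, answer = split_reasoning(
    "<think>60 students in total, so 30 boys.</think>\n\\boxed{17}"
)
print(answer)  # \boxed{17}
```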
+
+ # Evaluation
+
+ Through some quick evaluation of our own, we found that this model produces correctly formatted and accurate results much more reliably for higher-resource languages, such as Japanese, English, and German, than for lower-resource languages, such as Amharic or Lao.
+
+ We did a **very** quick evaluation of 5 questions per language (written by me and translated by GPT-4o Mini) on the [lightblue/DeepSeek-R1-Distill-Qwen-7B-Multilingual](https://huggingface.co/lightblue/DeepSeek-R1-Distill-Qwen-7B-Multilingual) model, and we found that the model is able to fairly reliably output the correct answer, in the correct language, for a large variety of languages:
+
+ For this evaluation, a score of >=0.8 is good, as one of the questions was very hard. Language detection was done using [pycld2](https://pypi.org/project/pycld2/), so errors may occur where the correct language is mistaken for another.
+
+ | Language | Has a correct think statement | Has the think statement in the correct language | Is the response in the correct language | Is the answer correct |
+ |:----------------|------------:|------------------------:|----------------------:|-------------:|
+ | Amharic | 0.2 | 0 | 0 | 0 |
+ | Arabic | 1 | 0.8 | 0.8 | 0.6 |
+ | Bengali | 1 | 1 | 1 | 0.2 |
+ | Chinese | 1 | 1 | 1 | 0.8 |
+ | Czech | 1 | 1 | 1 | 0.8 |
+ | Dutch | 1 | 1 | 1 | 0.8 |
+ | English | 1 | 1 | 1 | 0.8 |
+ | French | 1 | 1 | 1 | 0.8 |
+ | German | 1 | 1 | 1 | 0.8 |
+ | Greek | 1 | 1 | 1 | 0.6 |
+ | Hausa | 0.4 | 0 | 0 | 0 |
+ | Hebrew | 1 | 0.8 | 1 | 0.6 |
+ | Hindi | 1 | 1 | 1 | 0.8 |
+ | Indonesian | 1 | 1 | 1 | 0.8 |
+ | Italian | 1 | 1 | 1 | 0.8 |
+ | Japanese | 1 | 1 | 0.8 | 0.6 |
+ | Javanese | 0.8 | 0.2 | 0.2 | 0.6 |
+ | Khmer | 0.6 | 0.6 | 0.6 | 0 |
+ | Korean | 1 | 1 | 1 | 1 |
+ | Lao | 0.4 | 0.4 | 0.4 | 0 |
+ | Malay | 1 | 0.4 | 0.4 | 0.8 |
+ | Marathi | 0.6 | 0.4 | 0.6 | 0.2 |
+ | Persian (Farsi) | 0.6 | None\* | None\* | 0.2 |
+ | Polish | 1 | 1 | 1 | 0.6 |
+ | Portuguese | 1 | 1 | 1 | 0.8 |
+ | Romanian | 1 | 1 | 1 | 0.8 |
+ | Russian | 1 | 1 | 1 | 0.8 |
+ | Spanish | 1 | 1 | 1 | 0.8 |
+ | Swahili | 0.4 | 0.4 | 0.4 | 0 |
+ | Swedish | 1 | 1 | 1 | 0.8 |
+ | Tagalog | 1 | 1 | 1 | 0.8 |
+ | Tamil | 0.8 | 0.8 | 0.8 | 0.2 |
+ | Telugu | 0.8 | 0.6 | 0.8 | 0 |
+ | Thai | 1 | 1 | 1 | 0.8 |
+ | Turkish | 1 | 1 | 1 | 0.8 |
+ | Ukrainian | 1 | 1 | 1 | 0.8 |
+ | Urdu | 1 | 1 | 1 | 0.6 |
+ | Vietnamese | 1 | 1 | 1 | 1 |
+
+ \* There was an error with Farsi detection (my own fault), so we do not report Farsi scores.
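Each cell in the table above is the fraction of the 5 per-language test questions that passed the corresponding check, which is why scores move in increments of 0.2. A trivial sketch of that scoring (editorial illustration, not the original evaluation code):

```python
def score(passes):
    """Fraction of the 5 per-language test questions that passed a check."""
    return sum(passes) / len(passes)


# e.g. 4 of 5 responses in the correct language -> 0.8
print(score([True, True, True, True, False]))  # 0.8
```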
+
+ The evaluation code for this can be found [here](https://drive.google.com/file/d/1P33GpqvKmHoZUsWqqBPXHTToN2W7MDRG/view?usp=sharing).
+
+ # Training code
+
+ ```yaml
+ ### model
+ model_name_or_path: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
+
+ ### method
+ stage: sft
+ do_train: true
+ finetuning_type: full
+ deepspeed: /root/LLaMA-Factory/examples/deepspeed/ds_z3_config.json
+
+ ### dataset
+ dataset: reasoning-multilingual-R1-Llama-70B-train
+ template: qwen
+ cutoff_len: 4096
+ overwrite_cache: true
+ preprocessing_num_workers: 16
+ packing: true
+
+ ### output
+ output_dir: /root/train_outputs/DeepSeek-R1-Distill-Qwen-14B/reasoning-multilingual-R1-Llama-70B-train
+ logging_steps: 1
+ save_steps: 0.99999
+ plot_loss: true
+ overwrite_output_dir: true
+
+ ### train
+ per_device_train_batch_size: 1
+ gradient_accumulation_steps: 1
+ learning_rate: 1.0e-5
+ num_train_epochs: 1.0
+ lr_scheduler_type: cosine
+ warmup_ratio: 0.01
+ bf16: true
+ ddp_timeout: 180000000
+
+ ### eval
+ val_size: 0.01
+ per_device_eval_batch_size: 1
+ eval_strategy: steps
+ eval_steps: 0.1
+ ```
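As a sanity check on the config above: with a per-device batch size of 1, no gradient accumulation, and the 8-GPU instance described earlier, the effective global batch size works out as follows (simple arithmetic added editorially, not LLaMA-Factory code):

```python
# Values from the training YAML and the 8 x L20 instance described above.
num_devices = 8
per_device_train_batch_size = 1
gradient_accumulation_steps = 1

global_batch_size = per_device_train_batch_size * gradient_accumulation_steps * num_devices
print(global_batch_size)  # 8

# The earlier training log shows roughly 108 optimizer steps for one epoch,
# so warmup_ratio: 0.01 amounts to about a single warmup step.
total_steps = 108
print(round(total_steps * 0.01))  # 1
```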
+
+ ```bash
+ echo '{
+   "reasoning-multilingual-R1-Llama-70B-train": {
+     "hf_hub_url": "lightblue/reasoning-multilingual-R1-Llama-70B-train",
+     "formatting": "sharegpt"
+   }
+ }' > /root/LLaMA-Factory/data/dataset_info.json
+
+ # 14B model
+ cd /root/LLaMA-Factory && llamafactory-cli train /root/reasoning_multilingual_train_14B.yaml
+ rm -r /root/train_outputs/DeepSeek-R1-Distill-Qwen-14B/reasoning-multilingual-R1-Llama-70B-train/checkpoint*
+ huggingface-cli upload lightblue/DeepSeek-R1-Distill-Qwen-14B-Multilingual /root/train_outputs/DeepSeek-R1-Distill-Qwen-14B/reasoning-multilingual-R1-Llama-70B-train
+ ```
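The `dataset_info.json` written by the shell snippet above is what lets LLaMA-Factory resolve the `dataset:` name used in the training YAML. A quick stdlib sanity check that the entry round-trips and matches (editorial sketch; the temp-file path is illustrative, the real file lives under `/root/LLaMA-Factory/data/`):

```python
import json
import os
import tempfile

# Recreate the registration written by the shell snippet above.
dataset_info = {
    "reasoning-multilingual-R1-Llama-70B-train": {
        "hf_hub_url": "lightblue/reasoning-multilingual-R1-Llama-70B-train",
        "formatting": "sharegpt",
    }
}

path = os.path.join(tempfile.gettempdir(), "dataset_info.json")
with open(path, "w") as f:
    json.dump(dataset_info, f, indent=2)

with open(path) as f:
    loaded = json.load(f)

# The key must match the `dataset:` field in the training YAML.
assert "reasoning-multilingual-R1-Llama-70B-train" in loaded
print(loaded["reasoning-multilingual-R1-Llama-70B-train"]["formatting"])  # sharegpt
```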
+
+ # License
+
+ We share this model under the Apache 2.0 license.
+
+ # Developed by
+
+ <a href="https://www.lightblue-tech.com">
+ <img src="https://www.lightblue-tech.com/wp-content/uploads/2023/08/color_%E6%A8%AA%E5%9E%8B-1536x469.png" alt="Lightblue technology logo" width="400"/>
+ </a>
+
+ This model was trained by Peter Devine ([ptrdvn](https://huggingface.co/ptrdvn)) for Lightblue.