---
language:
- am
- ar
- bn
- zh
- cs
- nl
- en
- fr
- de
- el
- ha
- he
- hi
- id
- it
- ja
- jv
- km
- ko
- lo
- ms
- mr
- fa
- pl
- pt
- ro
- ru
- es
- sw
- sv
- tl
- ta
- te
- th
- tr
- uk
- ur
- vi
license: apache-2.0
datasets:
- lightblue/reasoning-multilingual-R1-Llama-70B-train
tags:
- reasoning
---

# lightblue/DeepSeek-R1-Distill-Qwen-1.5B-Multilingual

<div style="width: 100%; height: 160px; 
            display: flex; align-items: center; 
            justify-content: center; 
            border: 8px solid black; 
            font-size: 120px; font-weight: bold; 
            text-align: center; 
            color: #438db8;
            font-family: 'Helvetica Neue', sans-serif;">
  <span style="color: #438db8;">R1</span>
  &nbsp;
  <span style="color: blue;">m</span>
  <span style="color: green;">u</span>
  <span style="color: purple;">l</span>
  <span style="color: yellow;">t</span>
  <span style="color: pink;">i</span>
  <span style="color: cyan;">l</span>
  <span style="color: magenta;">i</span>
  <span style="color: lime;">n</span>
  <span style="color: teal;">g</span>
</div>

This is a DeepSeek R1 distill fine-tuned on multilingual Chain-of-Thought (CoT) data.
When prompted in a given language, this model will both think and respond in that language, unlike the original R1, which often thinks in Chinese or English regardless of the prompt language.
This makes the model's outputs more understandable and explainable to a wider audience.
We hope this will be useful to the AI community, particularly those developing AI for languages other than English and Chinese.

This model is a multilingual fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B).

Other fine-tuned versions of this model can be found in [our collection, here](https://huggingface.co/collections/lightblue/r1-multilingual-679c890166ac0a84e83e38fa).

This model was trained for ~10 minutes on an 8 x L20 instance ([ecs.gn8is-8x.32xlarge](https://www.alibabacloud.com/help/en/ecs/user-guide/gpu-accelerated-compute-optimized-and-vgpu-accelerated-instance-families-1)) on [Alibaba Cloud](https://www.alibabacloud.com/).

# How to use

When using these models, we recommend using a sampling temperature between 0.5 and 0.7, [as per the original distilled R1 models](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B#usage-recommendations).

Additionally, we have observed that the model sometimes repeats itself in more niche, lower-resource languages, so we also recommend setting `repetition_penalty` to 1.1, or higher if the model repeats itself when processing your prompts (this setting is included in the vLLM example below).

We include scripts to use this model in vLLM:

<ul>
  <li><b>vLLM</b>

Install [vLLM](https://github.com/vllm-project/vllm/) using `pip install vllm`.

<details open>
  <summary>Show vLLM code</summary>
  
```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="lightblue/DeepSeek-R1-Distill-Qwen-7B-Multilingual",
    max_model_len=8_000
)

sampling_params = SamplingParams(
    temperature=0.5, 
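    repetition_penalty=1.1,  # recommended above; raise this if outputs start to repeat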
    max_tokens=8_000
)

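# The prompt below is Japanese for: "A school has 20 students per class, and
# there are 3 classes in total. Across the whole school, 50% of students are
# boys and 50% are girls. The first class has 15 girls and the second class
# has 12 girls. How many boys are in the third class?"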
prompts = [
    """学校には1クラスにつき20人の生徒がおり、クラスは合計3つあります。
学校全体では男子と女子がそれぞれ50%ずついます。
1つ目のクラスには女子が15人、2つ目のクラスには女子が12人います。
3つ目のクラスには何人の男子がいますか?"""
]

conversations = [
    [{"role": "user", "content": x}] for x in prompts
]

outputs = llm.chat(conversations, sampling_params=sampling_params)

for output in outputs:
    print(output.outputs[0].text)

# <think>
# まず、学校の総生徒数を算出します。各クラスに20人の生徒があり、クラスは3つあるため、総生徒数は60人です。

# 次に、学校全体で男子と女子は同じ人数で分布しています。したがって、男子と女子各有30人。
...
# したがって、3つ目のクラスの男子数は20 - 3 = 17人です。
# </think>

# **解答:**

# 学校の総生徒数を算出します。
...
# **最終的な答え:**
# \[
# \boxed{17}
# \]
```

</details></li>
</ul>
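
Note that, as in the example output above, the model emits its reasoning between `<think>` and `</think>` tags before giving its final answer. If you want to separate the reasoning from the answer programmatically, a minimal sketch (our own helper, not part of the model's or vLLM's API) is:

```python
# Minimal sketch: split a completion into its <think> reasoning and final answer.
# Assumes a single <think>...</think> block, as in the example output above.
def split_reasoning(completion: str) -> tuple[str, str]:
    start_tag, end_tag = "<think>", "</think>"
    start, end = completion.find(start_tag), completion.find(end_tag)
    if start == -1 or end == -1:
        return "", completion.strip()  # no reasoning block found
    reasoning = completion[start + len(start_tag):end].strip()
    answer = completion[end + len(end_tag):].strip()
    return reasoning, answer

# Usage with the vLLM outputs above:
# reasoning, answer = split_reasoning(outputs[0].outputs[0].text)
```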

# Evaluation

Through some quick evaluation of our own, we found that this model produces much more correctly formatted and accurate results for higher-resource languages, such as Japanese, English, and German, than for lower-resource languages, such as Amharic or Lao.

We did a **very** quick evaluation of 5 questions per language (written by me and translated by GPT-4o mini) on the [lightblue/DeepSeek-R1-Distill-Qwen-7B-Multilingual](https://huggingface.co/lightblue/DeepSeek-R1-Distill-Qwen-7B-Multilingual) model, and we found that the model fairly reliably outputs correct answers in the correct language for a large variety of languages:

For this evaluation, a score of >=0.8 is good, as one of the five questions was very hard. Language detection was done using [pycld2](https://pypi.org/project/pycld2/), so detection errors may occur, with the correct language occasionally being mistaken for another.
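
As a rough illustration of that check, the sketch below shows how a response's language can be compared against an expected code with pycld2's `detect` function. This is our own minimal example (the helper name and expected-code argument are our assumptions), not the exact evaluation code, which is linked below the table.

```python
# Minimal sketch of a pycld2-based language check (hypothetical helper;
# the real evaluation code, linked below, may differ).
import pycld2 as cld2

def matches_language(text: str, expected_code: str) -> bool:
    """True if pycld2's top detected language matches the expected ISO 639-1 code."""
    _is_reliable, _bytes_found, details = cld2.detect(text)
    # details holds (language_name, language_code, percent, score) tuples,
    # ordered from most to least likely.
    return details[0][1] == expected_code

# e.g. check that a response to a Japanese prompt came back in Japanese
print(matches_language("3つ目のクラスには17人の男子がいます。", "ja"))  # expected: True
```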

| language        |   Has a correct think statement |   Has the think statement in the correct language |   Is the response in the correct language |   Is the answer correct |
|:----------------|------------:|------------------------:|----------------------:|-------------:|
| Amharic         |         0.2 |                     0   |                   0   |          0   |
| Arabic          |         1   |                     0.8 |                   0.8 |          0.6 |
| Bengali         |         1   |                     1   |                   1   |          0.2 |
| Chinese         |         1   |                     1   |                   1   |          0.8 |
| Czech           |         1   |                     1   |                   1   |          0.8 |
| Dutch           |         1   |                     1   |                   1   |          0.8 |
| English         |         1   |                     1   |                   1   |          0.8 |
| French          |         1   |                     1   |                   1   |          0.8 |
| German          |         1   |                     1   |                   1   |          0.8 |
| Greek           |         1   |                     1   |                   1   |          0.6 |
| Hausa           |         0.4 |                     0   |                   0   |          0   |
| Hebrew          |         1   |                     0.8 |                   1   |          0.6 |
| Hindi           |         1   |                     1   |                   1   |          0.8 |
| Indonesian      |         1   |                     1   |                   1   |          0.8 |
| Italian         |         1   |                     1   |                   1   |          0.8 |
| Japanese        |         1   |                     1   |                   0.8 |          0.6 |
| Javanese        |         0.8 |                     0.2 |                   0.2 |          0.6 |
| Khmer           |         0.6 |                     0.6 |                   0.6 |          0   |
| Korean          |         1   |                     1   |                   1   |          1   |
| Lao             |         0.4 |                     0.4 |                   0.4 |          0   |
| Malay           |         1   |                     0.4 |                   0.4 |          0.8 |
| Marathi         |         0.6 |                     0.4 |                   0.6 |          0.2 |
| Persian (Farsi) |         0.6 |                     None*   |                   None*   |          0.2 |
| Polish          |         1   |                     1   |                   1   |          0.6 |
| Portuguese      |         1   |                     1   |                   1   |          0.8 |
| Romanian        |         1   |                     1   |                   1   |          0.8 |
| Russian         |         1   |                     1   |                   1   |          0.8 |
| Spanish         |         1   |                     1   |                   1   |          0.8 |
| Swahili         |         0.4 |                     0.4 |                   0.4 |          0   |
| Swedish         |         1   |                     1   |                   1   |          0.8 |
| Tagalog         |         1   |                     1   |                   1   |          0.8 |
| Tamil           |         0.8 |                     0.8 |                   0.8 |          0.2 |
| Telugu          |         0.8 |                     0.6 |                   0.8 |          0   |
| Thai            |         1   |                     1   |                   1   |          0.8 |
| Turkish         |         1   |                     1   |                   1   |          0.8 |
| Ukrainian       |         1   |                     1   |                   1   |          0.8 |
| Urdu            |         1   |                     1   |                   1   |          0.6 |
| Vietnamese      |         1   |                     1   |                   1   |          1   |

\* There was an error with Farsi language detection (my own fault), so we do not report the Farsi language-detection scores.

The evaluation code for this can be found [here](https://drive.google.com/file/d/1P33GpqvKmHoZUsWqqBPXHTToN2W7MDRG/view?usp=sharing).

# Training code

```yaml
### model
model_name_or_path: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B

### method
stage: sft
do_train: true
finetuning_type: full
deepspeed: /root/LLaMA-Factory/examples/deepspeed/ds_z2_config.json

### dataset
dataset: reasoning-multilingual-R1-Llama-70B-train
template: qwen
cutoff_len: 4500
overwrite_cache: true
preprocessing_num_workers: 16
packing: true

### output
output_dir: /root/train_outputs/DeepSeek-R1-Distill-Qwen-1.5B/reasoning-multilingual-R1-Llama-70B-train
logging_steps: 1
save_steps: 0.99999
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 1
learning_rate: 1.0e-5
num_train_epochs: 1.0
lr_scheduler_type: cosine
warmup_ratio: 0.01
bf16: true
ddp_timeout: 180000000

### eval
val_size: 0.01
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 0.1
```

```bash
echo '{
  "reasoning-multilingual-R1-Llama-70B-train": {
    "hf_hub_url": "lightblue/reasoning-multilingual-R1-Llama-70B-train",
    "formatting": "sharegpt"
  }
}' > /root/LLaMA-Factory/data/dataset_info.json

# 1.5B Qwen distill
cd /root/LLaMA-Factory && llamafactory-cli train /root/reasoning_multilingual_train_1.5B.yaml
rm -r /root/train_outputs/DeepSeek-R1-Distill-Qwen-1.5B/reasoning-multilingual-R1-Llama-70B-train/checkpoint*
huggingface-cli upload lightblue/DeepSeek-R1-Distill-Qwen-1.5B-Multilingual /root/train_outputs/DeepSeek-R1-Distill-Qwen-1.5B/reasoning-multilingual-R1-Llama-70B-train
```

# License

We share this model under the Apache 2.0 license.

# Developed by

<a href="https://www.lightblue-tech.com">
<img src="https://www.lightblue-tech.com/wp-content/uploads/2023/08/color_%E6%A8%AA%E5%9E%8B-1536x469.png" alt="Lightblue technology logo" width="400"/>
</a>

This model was trained by Peter Devine ([ptrdvn](https://huggingface.co/ptrdvn)) for Lightblue.