Upload model via Google Colab
- .gitattributes +8 -0
- README.md +78 -0
- deepseek-r1-redistill-qwen-1.5b-v1.1-Q3_K_M.gguf +3 -0
- deepseek-r1-redistill-qwen-1.5b-v1.1-Q4_0.gguf +3 -0
- deepseek-r1-redistill-qwen-1.5b-v1.1-Q4_K_M.gguf +3 -0
- deepseek-r1-redistill-qwen-1.5b-v1.1-Q5_K_M.gguf +3 -0
- deepseek-r1-redistill-qwen-1.5b-v1.1-Q6_K.gguf +3 -0
- deepseek-r1-redistill-qwen-1.5b-v1.1-Q8_0.gguf +3 -0
- deepseek-r1-redistill-qwen-1.5b-v1.1-fp16.gguf +3 -0
- imatrix.dat +3 -0
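The quantized GGUF variants listed above can be fetched individually before use; a minimal sketch with `huggingface_hub.hf_hub_download` is shown below. The repo id is an assumption inferred from the model id in the README and may not match the actual GGUF repository.

```python
# Hedged sketch: download one quantization from the Hub.
# ASSUMPTION: the repo_id below is inferred from the README's model id;
# adjust it if this GGUF upload lives in a different repository.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="mobiuslabsgmbh/DeepSeek-R1-ReDistill-Qwen-1.5B-v1.1",  # assumed repo id
    filename="deepseek-r1-redistill-qwen-1.5b-v1.1-Q4_K_M.gguf",
)
print(gguf_path)  # local cache path of the downloaded file
```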
.gitattributes
CHANGED
@@ -33,3 +33,11 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+deepseek-r1-redistill-qwen-1.5b-v1.1-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+deepseek-r1-redistill-qwen-1.5b-v1.1-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+deepseek-r1-redistill-qwen-1.5b-v1.1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+deepseek-r1-redistill-qwen-1.5b-v1.1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+deepseek-r1-redistill-qwen-1.5b-v1.1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+deepseek-r1-redistill-qwen-1.5b-v1.1-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+deepseek-r1-redistill-qwen-1.5b-v1.1-fp16.gguf filter=lfs diff=lfs merge=lfs -text
+imatrix.dat filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,78 @@
+---
+license: mit
+train: false
+inference: true
+pipeline_tag: text-generation
+base_model:
+- deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
+---
+This is a version of the <a href="https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B">DeepSeek-R1-Distill-Qwen-1.5B</a> model re-distilled for better performance.
+
+## Performance
+
+| Models              | <a href="https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B">DeepSeek-R1-Distill-Qwen-1.5B</a> | <a href="https://huggingface.co/mobiuslabsgmbh/DeepSeek-R1-ReDistill-Qwen-1.5B-v1.1">DeepSeek-R1-ReDistill-Qwen-1.5B-v1.1</a> |
+|:-------------------:|:--------:|:----------------:|
+| ARC (25-shot)       | 40.96    | <b>41.55</b>     |
+| HellaSwag (10-shot) | 44       | <b>45.88</b>     |
+| MMLU (5-shot)       | 39.27    | <b>41.82</b>     |
+| TruthfulQA-MC2      | 45.17    | <b>46.63</b>     |
+| Winogrande (5-shot) | 55.49    | <b>57.7</b>      |
+| GSM8K (5-shot)      | 69.9     | <b>74.3</b>      |
+| Average             | 49.13    | <b>51.31</b>     |
+
+| Models              | <a href="https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B">DeepSeek-R1-Distill-Qwen-1.5B</a> | <a href="https://huggingface.co/mobiuslabsgmbh/DeepSeek-R1-ReDistill-Qwen-1.5B-v1.1">DeepSeek-R1-ReDistill-Qwen-1.5B-v1.1</a> |
+|:-------------------:|:--------:|:----------------:|
+| GPQA (0-shot)       | 26.96    | <b>26.99</b>     |
+| MMLU PRO (5-shot)   | 16.74    | <b>19.86</b>     |
+| MUSR (0-shot)       | 35.93    | <b>36.6</b>      |
+| BBH (3-shot)        | 35.12    | <b>37.23</b>     |
+| IfEval (0-shot)     | 24.94    | <b>27.22</b>     |
+
+## Usage
+```Python
+import torch
+from transformers import AutoModelForCausalLM, AutoTokenizer
+compute_dtype = torch.bfloat16
+device = 'cuda'
+model_id = "mobiuslabsgmbh/DeepSeek-R1-ReDistill-Qwen-1.5B-v1.1"
+
+model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=compute_dtype, attn_implementation="sdpa", device_map=device)
+tokenizer = AutoTokenizer.from_pretrained(model_id)
+
+prompt = "What is 1.5+102.2?"
+chat = tokenizer.apply_chat_template([{"role":"user", "content":prompt}], tokenize=True, add_generation_prompt=True, return_tensors="pt")
+outputs = model.generate(chat.to(device), max_new_tokens=1024, do_sample=True)
+print(tokenizer.decode(outputs[0]))
+```
+
+Output:
+```
+<|begin▁of▁sentence|><|User|>What is 1.5+102.2?<|Assistant|><think>
+First, I identify the numbers involved in the addition: 1.5 and 102.2.
+
+Next, I add the whole numbers: 1 + 102 equals 103.
+
+Then, I add the decimal parts: 0.5 + 0.2 equals 0.7.
+
+Finally, I combine the results: 103 + 0.7 equals 103.7.
+</think>
+
+To solve the addition \(1.5 + 102.2\), follow these steps:
+
+1. **Add the whole numbers:**
+   \[
+   1 + 102 = 103
+   \]
+
+2. **Add the decimal parts:**
+   \[
+   0.5 + 0.2 = 0.7
+   \]
+
+3. **Combine the results:**
+   \[
+   103 + 0.7 = 103.7
+   \]
+
+So, the final answer is \(\boxed{103.7}\).<|end▁of▁sentence|>
+```
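The README above covers only the transformers path; the GGUF files added in this commit are meant for llama.cpp-compatible runtimes. A minimal sketch with llama-cpp-python follows (the local file path is hypothetical and the settings are placeholders, not the author's recommended configuration):

```python
# Hedged sketch: load one of the GGUF quantizations with llama-cpp-python.
# ASSUMPTION: the .gguf file has already been downloaded to the path below;
# point model_path at wherever the file actually lives.
from llama_cpp import Llama

llm = Llama(
    model_path="./deepseek-r1-redistill-qwen-1.5b-v1.1-Q4_K_M.gguf",  # hypothetical local path
    n_ctx=4096,  # context window; tune to available memory
)

# The chat API applies the chat template stored in the GGUF metadata.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is 1.5+102.2?"}],
    max_tokens=1024,
)
print(out["choices"][0]["message"]["content"])
```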
deepseek-r1-redistill-qwen-1.5b-v1.1-Q3_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4c91291af21b73c1ab5fb03274161b38929f4dc8950a7f245d8f4a01a14110e3
+size 924705216
deepseek-r1-redistill-qwen-1.5b-v1.1-Q4_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a9c72bed0f652fc3a407fb1982524835c397c11c42abefff15ea4d9ddd43bb4b
+size 1069083072
deepseek-r1-redistill-qwen-1.5b-v1.1-Q4_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:096bc00bad05fe8d15107ce041158a765da806e10b1d15d3b1264badc0a12c8b
+size 1117596096
deepseek-r1-redistill-qwen-1.5b-v1.1-Q5_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8a8ce38d9d5be186eebb7e2b4374ddcae8598e1d92d6b8cc2db3ee8545690e3f
+size 1285794240
deepseek-r1-redistill-qwen-1.5b-v1.1-Q6_K.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d8e0de3b8733a44fc34231044ae986b85ab80988002a0526f744a0c18b2ba60c
+size 1464504768
deepseek-r1-redistill-qwen-1.5b-v1.1-Q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3fc16c930bf62663d6a18b829a630223e4a18c7922c829cbd0647c3e427e5266
+size 1894953408
deepseek-r1-redistill-qwen-1.5b-v1.1-fp16.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:71184082a0483beb5bbc39a42ca23a8ef8ec61f5a1f9cc4dc9b1dfec45df12bc
+size 3561205920
imatrix.dat
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d71196a869d7cae18ab3f81e6df910695ec88bcfab6f9d8a6b5f2f94b3700e0c
+size 2042233