---
license: mit
train: false
inference: true
pipeline_tag: text-generation
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Llama3-8B
---
This is a version of the <a href="https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama3-8B">DeepSeek-R1-Distill-Llama3-8B</a> model re-distilled for better performance.

## Performance

| Benchmark | <a href="https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama3-8B">DeepSeek-R1-Distill-Llama3-8B</a> | <a href="https://huggingface.co/mobiuslabsgmbh/DeepSeek-R1-ReDistill-Llama3-8B-v1.1">DeepSeek-R1-ReDistill-Llama3-8B-v1.1</a> |
|:-------------------:|:--------:|:----------------:|
| ARC (25-shot) | 49.32 | <b>50</b> |
| HellaSwag (10-shot) | <b>76.75</b> | 76.2 |
| MMLU (5-shot) | 56.87 | <b>58.78</b> |
| TruthfulQA-MC2 | 50.53 | <b>51.94</b> |
| Winogrande (5-shot) | 68.11 | <b>70.25</b> |
| GSM8K (5-shot) | 61.79 | <b>75.66</b> |
| Average | 60.56 | <b>63.81</b> |

| Benchmark | <a href="https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama3-8B">DeepSeek-R1-Distill-Llama3-8B</a> | <a href="https://huggingface.co/mobiuslabsgmbh/DeepSeek-R1-ReDistill-Llama3-8B-v1.1">DeepSeek-R1-ReDistill-Llama3-8B-v1.1</a> |
|:-------------------:|:--------:|:----------------:|
| GPQA (0-shot) | 29 | <b>33.98</b> |
| MMLU-Pro (5-shot) | 27.44 | <b>28.4</b> |
| MuSR (0-shot) | 38.29 | <b>41.82</b> |
| BBH (3-shot) | 41.57 | <b>49.59</b> |
| IFEval (0-shot) - strict | <b>42.81</b> | 39.09 |
| IFEval (0-shot) - loose | 31.05 | <b>40.29</b> |

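Scores like these are typically produced with EleutherAI's lm-evaluation-harness. The snippet below is a minimal sketch rather than the exact setup used for the tables above; the task id, few-shot count, and batch size are assumptions based on the row labels.

```Python
# Minimal sketch (assumed setup, not the original evaluation script):
# score one benchmark from the tables with lm-evaluation-harness (pip install lm-eval).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=mobiuslabsgmbh/DeepSeek-R1-ReDistill-Llama3-8B-v1.1,dtype=bfloat16",
    tasks=["arc_challenge"],  # assumed task id for the "ARC (25-shot)" row
    num_fewshot=25,
    batch_size=8,             # assumption; adjust to your GPU memory
)
print(results["results"]["arc_challenge"])
```
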
## Usage
```Python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
compute_dtype = torch.bfloat16
device = 'cuda'
model_id = "mobiuslabsgmbh/DeepSeek-R1-ReDistill-Llama3-8B-v1.1"

model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=compute_dtype, attn_implementation="sdpa", device_map=device)
tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "What is 1.5+102.2?"
chat = tokenizer.apply_chat_template([{"role":"user", "content":prompt}], tokenize=True, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(chat.to(device), max_new_tokens=1024, do_sample=True)
print(tokenizer.decode(outputs[0]))
```

Output:
```
<|begin▁of▁sentence|><|User|>What is 1.5+102.2?<|Assistant|><think>
To solve 1.5 plus 102.2, I'll start by adding the two numbers together.

First, I'll add the whole numbers: 1 plus 102 equals 103.

Then, I'll add the decimal parts: 0.5 plus 0.2 equals 0.7.

Finally, I'll combine the results: 103 plus 0.7 equals 103.7.

Therefore, 1.5 plus 102.2 is 103.7.
</think>

To find the sum of \(1.5\) and \(102.2\), follow these steps:

1. **Align the decimal points:**

\[
\begin{array}{r}
1.5 \\
+102.2 \\
\hline
\end{array}
\]

2. **Add the numbers:**

- Add the whole numbers: \(1 + 102 = 103\)
- Add the decimal parts: \(0.5 + 0.2 = 0.7\)
- Combine the results: \(103 + 0.7 = 103.7\)

3. **Final Answer:**

\[
\boxed{103.7}
\]<|end▁of▁sentence|>
```

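The model wraps its reasoning in `<think> ... </think>` tags, as in the sample above. If you only need the final answer, one option (a small sketch reusing `outputs` and `tokenizer` from the Usage snippet; the delimiter is taken from the sample output) is to split on the closing tag:

```Python
# Keep only the text that follows the reasoning block; the "</think>" delimiter
# is taken from the sample output above.
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
answer = decoded.split("</think>")[-1].strip()
print(answer)
```
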
## HQQ
Run ~3.5x faster with <a href="https://github.com/mobiusml/hqq/">HQQ</a>. First, install the dependencies:
```
pip install hqq
```

```Python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.models.hf.base import AutoHQQHFModel
from hqq.core.quantize import *

#Params
device = 'cuda:0'
backend = "torchao_int4"
compute_dtype = torch.bfloat16 if backend=="torchao_int4" else torch.float16
model_id = "mobiuslabsgmbh/DeepSeek-R1-ReDistill-Llama3-8B-v1.1"

#Load
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=compute_dtype, attn_implementation="sdpa")

#Quantize
quant_config = BaseQuantizeConfig(nbits=4, group_size=64, axis=1)
AutoHQQHFModel.quantize_model(model, quant_config=quant_config, compute_dtype=compute_dtype, device=device)

#Optimize
from hqq.utils.patching import prepare_for_inference
prepare_for_inference(model, backend=backend, verbose=False)

############################################################
#Generate (streaming)
from hqq.utils.generation_hf import HFGenerator
gen = HFGenerator(model, tokenizer, max_new_tokens=8192, do_sample=True, compile='partial').warmup()

prompt = "If A equals B, and C equals B - A, what would be the value of C?"
out = gen.generate(prompt, print_tokens=True)

############################################################
# #Generate (simple)
# from hqq.utils.generation_hf import patch_model_for_compiled_runtime
# patch_model_for_compiled_runtime(model, tokenizer, warmup=True)

# prompt = "If A equals B, and C equals B - A, what would be the value of C?"
# chat = tokenizer.apply_chat_template([{"role":"user", "content":prompt}], tokenize=True, add_generation_prompt=True, return_tensors="pt")
# outputs = model.generate(chat.to(device), max_new_tokens=8192, do_sample=True)
# print(tokenizer.decode(outputs[0]))
```
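
If you don't want to re-quantize on every run, HQQ also provides save/load helpers on `AutoHQQHFModel`. The lines below are a sketch using those helpers; the save path is hypothetical and keyword arguments may differ between HQQ versions, so verify against your installed release.

```Python
# Sketch: persist the quantized model and reload it later.
# `save_dir` is a hypothetical local path; API per AutoHQQHFModel's save/load helpers.
save_dir = "DeepSeek-R1-ReDistill-Llama3-8B-v1.1-hqq-4bit"
AutoHQQHFModel.save_quantized(model, save_dir)

model = AutoHQQHFModel.from_quantized(save_dir, compute_dtype=compute_dtype, device=device)
prepare_for_inference(model, backend=backend, verbose=False)
```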