prithivMLmods committed: Update README.md
README.md CHANGED
@@ -14,3 +14,42 @@ tags:
# **Blaze.1-32B-Instruct**

Blaze.1-32B-Instruct is based on the QwQ-32B-Preview model and is fine-tuned on synthetic data for mathematical and conditional reasoning to handle complex reasoning problems. The model may unexpectedly mix or switch languages, which can affect response clarity. It may also enter recursive reasoning loops, producing lengthy responses without a conclusive answer, because it focuses on maintaining a continuous chain-of-thought.

# **Quickstart Chat Template**

The following code snippet shows how to use `apply_chat_template` to load the tokenizer and model and generate content.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Blaze.1-32B-Instruct"

# Load the model and tokenizer; device_map="auto" places the weights on available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "How many r in strawberry."
messages = [
    {"role": "system", "content": "You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step."},
    {"role": "user", "content": prompt}
]

# Build the chat-formatted prompt and tokenize it.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate a response and strip the prompt tokens from the output.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
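
Because the model can drift into long chain-of-thought loops (see the limitation noted above), it may help to cap generation length and apply a mild repetition penalty. The snippet below is a minimal sketch of one such configuration, reusing `model`, `tokenizer`, and `model_inputs` from the quickstart; the specific values (`max_new_tokens=1024`, `repetition_penalty=1.1`, temperature, top-p) are illustrative assumptions, not tuned recommendations.

```python
from transformers import TextStreamer

# Stream tokens as they are produced so runaway reasoning loops are visible early.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# NOTE: the sampling values below are illustrative assumptions, not tuned settings.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024,      # hard cap so looping responses cannot grow unbounded
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.1,   # mild penalty to discourage repeated reasoning cycles
    streamer=streamer,
)
```

If responses still fail to converge, lowering `max_new_tokens` further or trimming the chat history are reasonable next steps.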