---
license: apache-2.0
language:
- en
base_model:
- Qwen/QwQ-32B-Preview
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
---
![7.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/xOHhPF_5HZcxDzxxeLv6x.png)
# **Blaze.1-32B-Instruct**
Blaze.1-32B-Instruct is based on the QwQ-32B-Preview model and fine-tuned on synthetic data for mathematical and conditional reasoning, targeting complex reasoning problems. Known limitations: the model may unexpectedly mix or switch languages, which can reduce response clarity, and it may enter recursive reasoning loops, producing lengthy responses without a conclusive answer because it prioritizes maintaining a continuous chain of thought (one way to bound this is sketched after the quickstart below).
# **Quickstart Chat Template**
The following code snippet shows how to load the tokenizer and model, apply the chat template with `apply_chat_template`, and generate a response.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Blaze.1-32B-Instruct"

# Load the model and tokenizer; device_map="auto" spreads weights across available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "How many r's are in the word strawberry?"
messages = [
    {"role": "system", "content": "You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step."},
    {"role": "user", "content": prompt}
]

# Render the chat messages into the model's expected prompt format.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated tokens are decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
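Because the model can drift into recursive reasoning loops, it can help to bound and lightly penalize repetitive generation. The snippet below is a minimal sketch using standard `transformers` generation parameters (`max_new_tokens`, sampling settings, and `repetition_penalty`); it reuses `model`, `tokenizer`, and `model_inputs` from the quickstart above, and the specific values are illustrative assumptions rather than tuned recommendations.

```python
# A hedged mitigation for recursive reasoning loops: cap output length and
# discourage verbatim repetition. Values are illustrative, not tuned.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024,     # hard cap so a loop cannot run indefinitely
    do_sample=True,          # sampling tends to break deterministic loops
    temperature=0.7,         # assumed value; adjust for your use case
    top_p=0.9,
    repetition_penalty=1.1,  # mild penalty against repeated spans
)
response = tokenizer.batch_decode(
    [out[len(inp):] for inp, out in zip(model_inputs.input_ids, generated_ids)],
    skip_special_tokens=True,
)[0]
print(response)
```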