When using the QwQen-3B-LCoT-R1 model, you might notice that it sometimes produces repetitive outputs, especially in certain contexts or with specific prompts. This is common behavior in language models, and it can be managed by adjusting the model's repetition parameters at generation time.
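A minimal sketch of one way to do this with the Hugging Face `transformers` generation API. The sampling values below are illustrative starting points to experiment with, not settings published with the model:

```python
# Sketch: taming repetition via generation parameters in transformers.
# The model id comes from this card; the parameter values are assumptions
# to tune per use case, not recommendations from the model author.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bunnycore/QwQen-3B-LCoT-R1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

inputs = tokenizer("Why is the sky blue?", return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    repetition_penalty=1.1,   # values > 1.0 down-weight already-generated tokens
    no_repeat_ngram_size=4,   # blocks verbatim repetition of any 4-gram
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

Raising `repetition_penalty` too aggressively can degrade reasoning quality, so increase it in small steps.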
Prompt the model to think about the reasoning process first and then provide the answer. The reasoning process should be wrapped within `<think> </think>` tags, with the answer following them, i.e., `<think> reasoning process here </think> answer here`.
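As a sketch, the instruction above can be supplied as the system turn of the chat template, and the final answer recovered by splitting on the closing `</think>` tag (the example question is a placeholder):

```python
# Sketch: apply the think-then-answer prompt format via the chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bunnycore/QwQen-3B-LCoT-R1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

system_prompt = (
    "Think about the reasoning process in the mind first, then provide the answer. "
    "The reasoning process should be wrapped within <think> </think> tags, "
    "i.e., <think> reasoning process here </think> answer here."
)
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "What is 17 * 24?"},  # placeholder question
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1024)
completion = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

# Everything after the closing tag is the final answer.
answer = completion.split("</think>")[-1].strip()
print(answer)
```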
The following YAML configuration was used to produce this model:
```yaml
base_model: bunnycore/QwQen-3B-LCoT+bunnycore/Qwen-2.5-3b-R1-lora_model-v.1
dtype: bfloat16
merge_method: passthrough
models:
  - model: bunnycore/QwQen-3B-LCoT+bunnycore/Qwen-2.5-3b-R1-lora_model-v.1
tokenizer_source: bunnycore/QwQen-3B-LCoT
```
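The merge can be reproduced by feeding this configuration to mergekit, either through the `mergekit-yaml` CLI or its Python API. A sketch assuming the YAML above is saved locally as `config.yaml` (a hypothetical path) and that the output directory is arbitrary:

```python
# Sketch: rerun the passthrough merge with mergekit's Python API.
# config.yaml and the output path are assumptions for illustration.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./QwQen-3B-LCoT-R1",          # any local output directory
    options=MergeOptions(copy_tokenizer=True),  # copy tokenizer from tokenizer_source
)
```

Note that the `+` in the model reference tells mergekit to apply the `Qwen-2.5-3b-R1-lora_model-v.1` LoRA on top of the `QwQen-3B-LCoT` base as part of the merge.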
Detailed results can be found on the Open LLM Leaderboard.
| Metric | Value |
|---|---|
| Avg. | 25.97 |
| IFEval (0-shot) | 53.42 |
| BBH (3-shot) | 26.98 |
| MATH Lvl 5 (4-shot) | 33.53 |
| GPQA (0-shot) | 1.57 |
| MuSR (0-shot) | 10.03 |
| MMLU-PRO (5-shot) | 30.26 |