---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2.5-7B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- LCoT
- Qwen
- v2
datasets:
- PowerInfer/QWQ-LONGCOT-500K
- AI-MO/NuminaMath-CoT
- prithivMLmods/Math-Solve
- amphora/QwQ-LongCoT-130K
- prithivMLmods/Deepthink-Reasoning
model-index:
- name: QwQ-LCoT2-7B-Instruct
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: wis-k/instruction-following-eval
      split: train
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 55.76
      name: averaged accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FQwQ-LCoT2-7B-Instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: SaylorTwift/bbh
      split: test
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 34.37
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FQwQ-LCoT2-7B-Instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: lighteval/MATH-Hard
      split: test
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 22.21
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FQwQ-LCoT2-7B-Instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      split: train
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 6.38
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FQwQ-LCoT2-7B-Instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 15.75
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FQwQ-LCoT2-7B-Instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 37.13
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FQwQ-LCoT2-7B-Instruct
      name: Open LLM Leaderboard
---
# **QwQ-LCoT2-7B-Instruct**
*QwQ-LCoT2-7B-Instruct* is a fine-tuned language model designed for advanced reasoning and instruction-following tasks. It builds on the Qwen2.5-7B-Instruct base model and has been fine-tuned on long chain-of-thought (CoT) reasoning datasets. The model is optimized for tasks that require logical reasoning, detailed explanations, and multi-step problem-solving, making it well suited to instruction following, text generation, and complex reasoning applications.
# **Quickstart with Transformers**
The following code snippet shows how to load the tokenizer and model and generate content using `apply_chat_template`.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/QwQ-LCoT2-7B-Instruct"

# Load the model and tokenizer; torch_dtype="auto" picks the checkpoint's
# dtype and device_map="auto" places weights on the available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "How many r in strawberry?"
messages = [
    {"role": "system", "content": "You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step."},
    {"role": "user", "content": prompt}
]

# Render the chat-formatted prompt string, then tokenize it.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated tokens remain.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
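The snippet above decodes up to 512 new tokens, which long chain-of-thought traces can exceed. Continuing from that snippet, here is a minimal sketch with a larger budget and light sampling; the specific values are illustrative assumptions, not tuned recommendations:
```python
# Continuing from the quickstart above. Values below are assumptions
# for illustration, not benchmarked settings for this checkpoint.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=2048,  # CoT traces can run long; budget accordingly
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
```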
# **Intended Use**
The QwQ-LCoT2-7B-Instruct model is designed for advanced reasoning and instruction-following tasks, with specific applications including (a usage sketch follows the list):
1. **Instruction Following**: Providing detailed and step-by-step guidance for a wide range of user queries.
2. **Logical Reasoning**: Solving problems requiring multi-step thought processes, such as math problems or complex logic-based scenarios.
3. **Text Generation**: Crafting coherent, contextually relevant, and well-structured text in response to prompts.
4. **Problem-Solving**: Analyzing and addressing tasks that require chain-of-thought (CoT) reasoning, making it ideal for education, tutoring, and technical support.
5. **Knowledge Enhancement**: Leveraging reasoning datasets to offer deeper insights and explanations for a wide variety of topics.
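As an illustration of the reasoning use cases above, a minimal sketch using the high-level `pipeline` API, assuming a recent transformers release that accepts chat-style messages; the math prompt is an arbitrary example:
```python
from transformers import pipeline

# Minimal sketch: chat-style call through the high-level pipeline API.
pipe = pipeline(
    "text-generation",
    model="prithivMLmods/QwQ-LCoT2-7B-Instruct",
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You should think step-by-step."},
    {"role": "user", "content": "A train travels 60 km in 45 minutes. What is its average speed in km/h?"},
]
out = pipe(messages, max_new_tokens=512)
# With chat input, generated_text holds the message list plus the new assistant turn.
print(out[0]["generated_text"][-1]["content"])
```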
# **Limitations**
1. **Data Bias**: As the model is fine-tuned on specific datasets, its outputs may reflect inherent biases from the training data.
2. **Context Limitation**: Performance may degrade for tasks requiring knowledge or reasoning that significantly exceeds the model's pretraining or fine-tuning context.
3. **Complexity Ceiling**: While optimized for multi-step reasoning, exceedingly complex or abstract problems may result in incomplete or incorrect outputs.
4. **Dependency on Prompt Quality**: The quality and specificity of the user prompt heavily influence the model's responses.
5. **Non-Factual Outputs**: Despite being fine-tuned for reasoning, the model can still generate hallucinated or factually inaccurate content, particularly for niche or unverified topics.
6. **Computational Requirements**: Running the model effectively requires significant computational resources, particularly when generating long sequences or handling high-concurrency workloads; a quantized-loading sketch that reduces the memory footprint follows this list.
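On the memory point above, one common mitigation is quantized loading. Below is a minimal 4-bit sketch via bitsandbytes; note this is an assumption for illustration, and the quality/VRAM trade-offs for this checkpoint have not been benchmarked here:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit quantized loading to reduce the memory footprint.
# Illustrative only; accuracy impact for this model is unverified.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "prithivMLmods/QwQ-LCoT2-7B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/QwQ-LCoT2-7B-Instruct")
```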
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/prithivMLmods__QwQ-LCoT2-7B-Instruct-details)!
Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=prithivMLmods%2FQwQ-LCoT2-7B-Instruct&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!
| Metric |Value (%)|
|-------------------|--------:|
|**Average** | 28.60|
|IFEval (0-Shot) | 55.76|
|BBH (3-Shot) | 34.37|
|MATH Lvl 5 (4-Shot)| 22.21|
|GPQA (0-shot) | 6.38|
|MuSR (0-shot) | 15.75|
|MMLU-PRO (5-shot) | 37.13|
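The reported average is simply the arithmetic mean of the six benchmark scores, which can be checked directly:
```python
# Verify the leaderboard average as the mean of the six benchmark scores.
scores = {
    "IFEval": 55.76, "BBH": 34.37, "MATH Lvl 5": 22.21,
    "GPQA": 6.38, "MuSR": 15.75, "MMLU-PRO": 37.13,
}
print(round(sum(scores.values()) / len(scores), 2))  # 28.6
```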