prithivMLmods committed on
Commit 18fa5a1 · verified · 1 Parent(s): f2ea462

Update README.md

Files changed (1):
  1. README.md +2 -1
README.md CHANGED
@@ -9,6 +9,7 @@ library_name: transformers
 tags:
 - LCoT
 - Qwen
+- v2
 datasets:
 - PowerInfer/QWQ-LONGCOT-500K
 - AI-MO/NuminaMath-CoT
@@ -79,4 +80,4 @@ The QwQ-LCoT2-7B-Instruct model is designed for advanced reasoning and instructi
 3. **Complexity Ceiling**: While optimized for multi-step reasoning, exceedingly complex or abstract problems may result in incomplete or incorrect outputs.
 4. **Dependency on Prompt Quality**: The quality and specificity of the user prompt heavily influence the model's responses.
 5. **Non-Factual Outputs**: Despite being fine-tuned for reasoning, the model can still generate hallucinated or factually inaccurate content, particularly for niche or unverified topics.
-6. **Computational Requirements**: Running the model effectively requires significant computational resources, particularly when generating long sequences or handling high-concurrency workloads.
+6. **Computational Requirements**: Running the model effectively requires significant computational resources, particularly when generating long sequences or handling high-concurrency workloads.