<div align="center">

# LIMR: Less is More for RL Scaling

</div>

<p align="center">
📄 <a href="https://github.com/GAIR-NLP/LIMR/blob/master/limr.pdf" target="_blank">Paper</a> &nbsp; | &nbsp;
🌐 <a href="https://huggingface.co/datasets/GAIR/LIMR" target="_blank">Dataset</a> &nbsp; | &nbsp;
📘 <a href="https://huggingface.co/GAIR/LIMR" target="_blank">Model</a>
</p>

## Releases

[2025/02/17] We're releasing the following components:

- 🛠️ **LIM Tools**: Implementation of our **Learning Impact Measurement (LIM)** methodology
- 🚀 **Training & Evaluation**: Complete implementation of our training pipeline and evaluation scripts
- 🔥 **[LIMR Dataset](https://huggingface.co/datasets/GAIR/LIMR)**: Our curated dataset of 1,389 mathematical questions (a loading sketch follows this list)
- 🤖 **[LIMR Model](https://huggingface.co/GAIR/LIMR)**: Model trained on the LIMR dataset

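For quick inspection, here is a minimal sketch of loading the curated questions with the Hugging Face `datasets` library. The split name and record fields below are assumptions; check the dataset card for the actual schema.

```python
# Minimal sketch: load the LIMR dataset from the Hugging Face Hub.
# Assumption: a "train" split exists; the record fields may differ, so
# consult https://huggingface.co/datasets/GAIR/LIMR for the actual schema.
from datasets import load_dataset

limr = load_dataset("GAIR/LIMR", split="train")
print(len(limr))   # expected: 1,389 curated questions
print(limr[0])     # inspect one record to see the available fields
```
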
## Overview

This repository presents **LIMR**, an approach that challenges the prevailing assumption about data scaling in reinforcement learning for LLMs. We demonstrate that the quality and relevance of training samples matter far more than their quantity. Our **Learning Impact Measurement (LIM)** methodology enables automated evaluation of training sample effectiveness, eliminating the need for manual curation while achieving **comparable or superior** results with **6x less** data. Notably, all our investigations are conducted directly from base models without distillation, providing clear insights into the core dynamics of RL training.

Our key findings reshape the understanding of RL training dynamics:

- A strategically selected subset of training samples (1,389) can achieve performance comparable or even superior to training on the full dataset (8,523 samples), fundamentally challenging the assumption that larger datasets necessarily lead to better performance.
- We introduce Learning Impact Measurement (LIM), an automated quantitative method for probing the potential value of RL training samples, enabling systematic analysis of how different samples contribute to model improvement (a toy scoring sketch follows this list).
- While distilled long-form reasoning data has proven efficient for larger models, at the ~1K-sample scale with small models (7B), our data-efficient RL approach significantly outperforms SFT on distilled data.
- The path to better reasoning capabilities may not lie in simply scaling up RL training data, but rather in being more selective about which samples to use.

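To make the idea concrete, below is a toy sketch of a trajectory-alignment score in the spirit of LIM. It assumes per-sample reward histories logged at each training epoch and scores each sample by how closely its reward trajectory tracks the model's average learning curve; the paper's exact scoring function, logging setup, and selection threshold may differ.

```python
# Toy sketch of a trajectory-alignment score in the spirit of LIM.
# Assumptions (not necessarily the paper's formulation): per-sample rewards
# are logged at each epoch, and a sample is valuable when its reward
# trajectory rises in sync with the model's overall learning curve.
import numpy as np

def lim_style_scores(sample_rewards: np.ndarray) -> np.ndarray:
    """Score each sample's trajectory against the average learning curve.

    sample_rewards: (num_samples, num_epochs) per-epoch reward per sample.
    Returns a (num_samples,) array of centered cosine similarities in [-1, 1].
    """
    avg_curve = sample_rewards.mean(axis=0)            # overall learning curve
    centered = sample_rewards - sample_rewards.mean(axis=1, keepdims=True)
    avg_centered = avg_curve - avg_curve.mean()
    num = centered @ avg_centered
    denom = np.linalg.norm(centered, axis=1) * np.linalg.norm(avg_centered) + 1e-8
    return num / denom

# Example: keep samples whose learning trajectory aligns with overall progress.
rewards = np.random.rand(8523, 10)        # placeholder for real training logs
scores = lim_style_scores(rewards)
selected = np.nonzero(scores > 0.6)[0]    # threshold is illustrative
print(f"selected {selected.size} of {rewards.shape[0]} samples")
```
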
Performance (accuracy, %) across challenging mathematical benchmarks:

| Method | #Questions | AIME2024 | MATH500 | AMC2023 | Avg. |
|--------|------------|----------|---------|---------|------|
| Qwen-Math-7B | - | 16.7 | 52.4 | 52.5 | 40.5 |
| Qwen-Math-7B-FULL | 8,523 | 32.5 | 76.6 | 61.9 | 57.0 |
| Qwen-Math-7B-RAND | 1,389 | 25.8 | 66.0 | 56.3 | 49.4 |
| Qwen-Math-7B-LINEAR | 1,138 | 28.3 | 74.6 | 61.9 | 54.9 |
| LIMR | 1,389 | **32.5** | **78.0** | **63.8** | **58.1** |

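For reference, here is a minimal sketch of running the released checkpoint with `transformers`. The prompt format and generation settings are illustrative and not necessarily the evaluation configuration behind the numbers above.

```python
# Minimal sketch: run the released LIMR checkpoint with transformers.
# The prompt and generation settings here are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "GAIR/LIMR"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "What is the sum of the first 100 positive integers?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
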
Comparison with other popular RL recipes. We apply RL directly to the base model, without using distilled long chain-of-thought data from larger or stronger models, and use only 1,389 questions.

| Method | Init Model | Long CoT Distillation | #Questions |
|--------|------------|-----------------------|------------|
| STILL-3 | Instruct | Yes | 29,925 |
| DeepScaleR | Instruct | Yes | 40,314 |
| Sky-T1 | Instruct | Yes | 45,000 |
| THUDM-T1 | Instruct | No | 30,000 |
| PRIME | Instruct | No | 150,000 |
| SimpleRL | Base | No | 8,523 |
| LIMR | Base | No | 1,389 |

## Acknowledgements

Our work builds upon the insightful technical reports from the [DeepSeek R1](https://github.com/deepseek-ai/DeepSeek-R1) and [Kimi-k1.5](https://github.com/MoonshotAI/Kimi-k1.5) teams. We extend our appreciation to the [Qwen-Math](https://github.com/QwenLM/Qwen2.5-Math) team for their open-source model, and to the creators of [OpenRLHF](https://github.com/OpenRLHF/OpenRLHF) and [vLLM](https://github.com/vllm-project/vllm) for providing the essential reinforcement learning framework and inference infrastructure, respectively, that enabled this research.

## Citation

If you find this work useful, please cite our paper:

```bibtex
@misc{limr2025,
  author       = {Li, Xuefeng and Zou, Haoyang and Liu, Pengfei},
  title        = {LIMR: Less is More for RL Scaling},
  year         = {2025},
  publisher    = {GitHub},
  journal      = {GitHub repository},
  howpublished = {\url{https://github.com/GAIR-NLP/LIMR}},
}
```