Large-Scale Data Selection for Instruction Tuning
Abstract
Selecting high-quality training data from a larger pool is a crucial step when instruction-tuning language models, as carefully curated datasets often produce models that outperform those trained on much larger, noisier datasets. Automated data selection approaches for instruction-tuning are typically tested by selecting small datasets (roughly 10k samples) from small pools (100-200k samples). However, popular deployed instruction-tuned models often train on hundreds of thousands to millions of samples, subsampled from even larger data pools. We present a systematic study of how well data selection methods scale to these settings, selecting up to 2.5M samples from pools of up to 5.8M samples and evaluating across 7 diverse tasks. We show that many recently proposed methods fall short of random selection in this setting (while using more compute), and even decline in performance when given access to larger pools of data to select over. However, we find that a variant of representation-based data selection (RDS+), which uses weighted mean pooling of pretrained LM hidden states, consistently outperforms more complex methods across all settings tested -- all whilst being more compute-efficient. Our findings highlight that the scaling properties of proposed automated selection methods should be more closely examined. We release our code, data, and models at https://github.com/hamishivi/automated-instruction-selection.
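The core of the RDS+ approach described above is embedding each example via a weighted mean pool of a pretrained LM's hidden states, then scoring pool examples by similarity to target-task examples. The sketch below illustrates this pipeline with NumPy; the position-based weighting scheme and the max-over-queries scoring rule are illustrative assumptions for this sketch, not a verbatim reproduction of the paper's recipe.

```python
import numpy as np

def weighted_mean_pool(hidden_states, attention_mask):
    """Pool per-token hidden states [T, D] into a single vector [D].

    Assumption for this sketch: weights grow linearly with token
    position, so later tokens contribute more, and padded positions
    (mask == 0) are excluded.
    """
    T = hidden_states.shape[0]
    weights = np.arange(1, T + 1, dtype=float) * attention_mask
    weights /= weights.sum()
    return weights @ hidden_states

def rds_select(pool_embs, query_embs, k):
    """Return indices of the top-k pool examples by cosine similarity.

    Each pool example is scored by its maximum cosine similarity to
    any target-task (query) embedding (an assumed aggregation rule).
    """
    pool = pool_embs / np.linalg.norm(pool_embs, axis=1, keepdims=True)
    query = query_embs / np.linalg.norm(query_embs, axis=1, keepdims=True)
    scores = (pool @ query.T).max(axis=1)
    return np.argsort(-scores)[:k]
```

In practice the hidden states would come from a forward pass of a pretrained LM (e.g. via `output_hidden_states=True` in Hugging Face Transformers), and pooling plus a single matrix multiply is far cheaper than gradient- or loss-based selection methods, which is the compute advantage noted in the abstract.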
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- The Best Instruction-Tuning Data are Those That Fit (2025)
- CrowdSelect: Synthetic Instruction Data Selection with Multi-LLM Wisdom (2025)
- Diversity-driven Data Selection for Language Model Tuning through Sparse Autoencoder (2025)
- Improving Influence-based Instruction Tuning Data Selection for Balanced Learning of Diverse Capabilities (2025)
- Add-One-In: Incremental Sample Selection for Large Language Models via a Choice-Based Greedy Paradigm (2025)
- Efficient Response Generation Method Selection for Fine-Tuning Large Language Models (2025)
- Data Valuation using Neural Networks for Efficient Instruction Fine-Tuning (2025)
Models citing this paper: 5
Datasets citing this paper: 15
Spaces citing this paper: 0