Condor: Enhance LLM Alignment with Knowledge-Driven Data Synthesis and Refinement
Abstract
The quality of Supervised Fine-Tuning (SFT) data plays a critical role in enhancing the conversational capabilities of Large Language Models (LLMs). However, as LLMs become more advanced, the availability of high-quality human-annotated SFT data has become a significant bottleneck, necessitating a greater reliance on synthetic training data. In this work, we introduce Condor, a novel two-stage synthetic data generation framework that incorporates a World Knowledge Tree and Self-Reflection Refinement to produce high-quality SFT data at scale. Our experimental results demonstrate that a base model fine-tuned on only 20K Condor-generated samples achieves superior performance compared to its counterparts. The additional refinement stage in Condor further enables iterative self-improvement for LLMs at various scales (up to 72B), validating the effectiveness of our approach. Furthermore, our investigation into the scaling of synthetic data in post-training reveals substantial unexplored potential for performance improvements, opening promising avenues for future research.
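The abstract describes a two-stage pipeline: a World Knowledge Tree that seeds diverse instructions, followed by Self-Reflection Refinement in which the model critiques and rewrites its own responses. The Python sketch below illustrates one way such a pipeline could be wired together. It is a minimal sketch under assumptions, not the paper's implementation: the tree contents, prompt wording, function names (`sample_topic_path`, `synthesize_instruction`, `self_reflection_refine`), and the `llm` callable are all hypothetical placeholders.

```python
# Hypothetical sketch of a Condor-style two-stage data synthesis pipeline.
# All names, prompts, and structures here are illustrative assumptions,
# not the authors' implementation.

import random

# Stage 1: World Knowledge Tree -- a hierarchy from domains down to
# leaf topics, used to seed diverse instructions. Contents are made up.
WORLD_KNOWLEDGE_TREE = {
    "science": {"physics": ["quantum entanglement", "thermodynamics"]},
    "history": {"ancient": ["Roman engineering", "Silk Road trade"]},
}

def sample_topic_path(tree):
    """Walk the knowledge tree from a domain down to a leaf topic."""
    domain = random.choice(list(tree))
    subdomain = random.choice(list(tree[domain]))
    topic = random.choice(tree[domain][subdomain])
    return domain, subdomain, topic

def synthesize_instruction(llm, path):
    """Ask the LLM to write a question grounded in the sampled topic."""
    prompt = f"Write a challenging user question about: {' > '.join(path)}"
    return llm(prompt)

# Stage 2: Self-Reflection Refinement -- the model critiques its own
# answer and rewrites it, iterating for a fixed number of rounds.
def self_reflection_refine(llm, question, rounds=2):
    answer = llm(f"Answer the question:\n{question}")
    for _ in range(rounds):
        critique = llm(
            f"Critique this answer for errors and gaps:\n"
            f"Q: {question}\nA: {answer}"
        )
        answer = llm(
            f"Rewrite the answer to address the critique:\n"
            f"Q: {question}\nA: {answer}\nCritique: {critique}"
        )
    return answer

def generate_sft_sample(llm):
    """Produce one synthetic SFT pair from the two-stage pipeline."""
    path = sample_topic_path(WORLD_KNOWLEDGE_TREE)
    question = synthesize_instruction(llm, path)
    response = self_reflection_refine(llm, question)
    return {"instruction": question, "response": response}
```

Here `llm` stands in for any text-completion callable; the number of refinement rounds, the prompt templates, and the depth of the knowledge tree are design choices the paper itself specifies.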
Community
The following papers, similar to this one, were recommended by the Semantic Scholar API:
- AIDE: Task-Specific Fine Tuning with Attribute Guided Multi-Hop Data Expansion (2024)
- RobustFT: Robust Supervised Fine-tuning for Large Language Models under Noisy Response (2024)
- ALMA: Alignment with Minimal Annotation (2024)
- Surveying the Effects of Quality, Diversity, and Complexity in Synthetic Data From Large Language Models (2024)
- Teaching LLMs to Refine with Tools (2024)
- A Graph-Based Synthetic Data Pipeline for Scaling High-Quality Reasoning Instructions (2024)
- Language Models as Continuous Self-Evolving Data Engineers (2024)