An Open Recipe: Adapting Language-Specific LLMs to a Reasoning Model in One Day via Model Merging
Abstract
This paper investigates data selection and model merging methodologies for incorporating advanced reasoning capabilities, such as those of DeepSeek R1, into language-specific large language models (LLMs), with a particular focus on a Thai LLM. Our goal is to enhance the reasoning capabilities of language-specific LLMs while preserving their target-language abilities. DeepSeek R1 excels at reasoning but primarily benefits high-resource languages such as English and Chinese; low-resource languages remain underserved because English-centric training data and model optimizations limit performance in these languages, resulting in unreliable code-switching and diminished effectiveness on target-language tasks. Meanwhile, local and regional LLM initiatives have attempted to bridge this gap by developing language-specific LLMs that prioritize local linguistic fidelity. We demonstrate that, using only publicly available datasets and a computational budget of $120, it is possible to raise the reasoning capabilities of a language-specific LLM to the level of DeepSeek R1 without compromising its performance on target-language tasks.
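To make the merging step concrete, below is a minimal sketch of one common instance of the model-merging family: linear weight interpolation between a language-specific checkpoint and a reasoning-distilled checkpoint sharing the same architecture. The checkpoint IDs and the single global merge ratio `alpha` are illustrative assumptions, not the paper's exact recipe, which also involves a data selection step and may merge with per-layer ratios.

```python
# Minimal sketch of linear model merging (weight interpolation), assuming
# two same-architecture checkpoints. Illustrative only; not the paper's
# exact recipe. Loading two 70B models this way requires substantial RAM.
import torch
from transformers import AutoModelForCausalLM


def merge_linear(base_id: str, reasoning_id: str, alpha: float = 0.5):
    """Return base model with parameters interpolated toward the reasoning
    model: merged = (1 - alpha) * base + alpha * reasoning."""
    base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
    reasoning = AutoModelForCausalLM.from_pretrained(reasoning_id, torch_dtype=torch.bfloat16)

    merged_state = base.state_dict()
    for name, reasoning_param in reasoning.state_dict().items():
        # Interpolate every parameter tensor with one global ratio.
        merged_state[name] = (1.0 - alpha) * merged_state[name] + alpha * reasoning_param

    base.load_state_dict(merged_state)
    return base


if __name__ == "__main__":
    # Hypothetical checkpoint IDs: a Thai language-specific LLM and a
    # DeepSeek R1 distilled reasoning model built on the same base architecture.
    merged = merge_linear(
        "scb10x/llama3.1-typhoon2-70b-instruct",
        "deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
        alpha=0.5,
    )
    merged.save_pretrained("merged-thai-reasoning")
```

A single global `alpha` is the simplest design choice; ability-aware variants instead assign different ratios to different layers, trading off reasoning strength against target-language fidelity.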
Community
This paper explores data selection and model merging to endow language-specific LLMs (e.g., Thai) with DeepSeek R1-level reasoning. Using only public datasets and a $120 budget, we achieve this without compromising performance on target-language tasks.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper, recommended by the Semantic Scholar API:
- LinguaLIFT: An Effective Two-stage Instruction Tuning Framework for Low-Resource Language Tasks (2024)
- BenchMAX: A Comprehensive Multilingual Evaluation Suite for Large Language Models (2025)
- Understand, Solve and Translate: Bridging the Multilingual Mathematical Reasoning Gap (2025)
- AdaCoT: Rethinking Cross-Lingual Factual Reasoning through Adaptive Chain-of-Thought (2025)
- Multilingual Mathematical Reasoning: Advancing Open-Source LLMs in Hindi and English (2024)
- DeepThink: Aligning Language Models with Domain-Specific User Intents (2025)
- When LLMs Struggle: Reference-less Translation Evaluation for Low-resource Languages (2025)