<p align="center" width="100%">
</p>

<div id="top" align="center">

FuseO1-Preview: System-II Reasoning Fusion of LLMs
-----------------------------

<h4> |<a href="https://arxiv.org/abs/2408.07990"> 📑 Paper </a> |
<a href="https://github.com/fanqiwan/FuseAI"> 🐱 GitHub Repo </a> |
<a href="https://huggingface.co/FuseAI"> 🤗 Hugging Face </a> |
</h4>

<!-- **Authors:** -->

_Fanqi Wan, Longguang Zhong, Ziyi Yang_

<!-- **Affiliations:** -->

_Sun Yat-sen University_

</div>

## Overview

[FuseO1-Preview](https://huggingface.co/collections/FuseAI/fuseo1-preview-678eb56093649b2688bc9977) is our initial endeavor to enhance the System-II reasoning capabilities of large language models (LLMs) through model fusion. Using the [SCE](https://arxiv.org/abs/2408.07990) merging method, we integrate multiple open-source o1-like LLMs into a unified model. Our goal is to incorporate the distinct knowledge and strengths of different reasoning LLMs into a single model with strong System-II reasoning abilities, particularly in the mathematics, coding, and science domains (a simplified sketch of the merging idea follows the table below).

To achieve this, we conduct two types of model merging:

- **Long-Long Reasoning Merging**: This approach fuses LLMs that all use long-CoT reasoning, aiming to strengthen long-CoT reasoning capabilities. The resulting [FuseAI/FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview) achieves an accuracy of **60.00 on AIME24**, a significant improvement over the o1-preview model (44.60) and approaching the o1-mini model (63.60).
- **Long-Short Reasoning Merging**: This approach fuses long-CoT and short-CoT LLMs, aiming to improve reasoning capabilities in both long and short reasoning processes. The resulting [FuseAI/FuseO1-DeekSeekR1-Qwen2.5-Instruct-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeekSeekR1-Qwen2.5-Instruct-32B-Preview) can use both long and short reasoning processes and shows relatively strong performance on long reasoning tasks.

| Model | Merge Type | Source Models | HF Link |
|:----- | ---- | ---- | ---- |
| FuseAI/FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview | Long-Long Reasoning Merge | [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B), [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview), [NovaSky-AI/Sky-T1-32B-Preview](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Preview) | [🤗 Hugging Face](https://huggingface.co/FuseAI/FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview) |
| FuseAI/FuseO1-DeekSeekR1-QwQ-32B-Preview | Long-Long Reasoning Merge | [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B), [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview) | [🤗 Hugging Face](https://huggingface.co/FuseAI/FuseO1-DeekSeekR1-QwQ-32B-Preview) |
| FuseAI/FuseO1-DeekSeekR1-Qwen2.5-Instruct-32B-Preview | Long-Short Reasoning Merge | [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B), [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) | [🤗 Hugging Face](https://huggingface.co/FuseAI/FuseO1-DeekSeekR1-Qwen2.5-Instruct-32B-Preview) |
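The exact SCE procedure is described in the [paper](https://arxiv.org/abs/2408.07990). As a rough, simplified illustration of the general idea (select salient delta parameters by their variance across source models, weight each source by the magnitude of what it keeps, drop sign conflicts, and add the fused delta back onto a shared base), a toy sketch is given below. The function name, `density` value, and exact selection/weighting rules here are ours for illustration only; they are not the released implementation.

```python3
import torch

def fuse_deltas(base, sources, density=0.1):
    """Toy, SCE-flavoured merge of several fine-tuned models sharing one base.

    base    -- state_dict of the shared base model
    sources -- list of state_dicts of the source (o1-like) models
    density -- fraction of elements kept per tensor (illustrative value)
    """
    merged = {}
    for name, base_w in base.items():
        base_f = base_w.float()
        # Delta of each source model from the shared base, stacked along dim 0.
        deltas = torch.stack([src[name].float() - base_f for src in sources])

        # Select: keep the elements where the sources disagree most
        # (highest variance across source models); zero out the rest.
        var = deltas.var(dim=0)
        k = max(1, int(density * var.numel()))
        cutoff = var.flatten().topk(k).values.min()
        deltas = deltas * (var >= cutoff)

        # Calculate: one weight per source model, proportional to the squared
        # magnitude of the delta elements it kept.
        weights = deltas.pow(2).flatten(1).sum(dim=1)
        weights = weights / weights.sum().clamp_min(1e-8)
        shape = (-1,) + (1,) * (deltas.dim() - 1)

        # Erase: drop elements whose sign conflicts with the weighted sum,
        # then add the fused delta back onto the base weights.
        fused = (weights.view(shape) * deltas).sum(dim=0)
        agree = (deltas.sign() == fused.sign()) | (deltas == 0)
        fused = (weights.view(shape) * deltas * agree).sum(dim=0)
        merged[name] = (base_f + fused).to(base_w.dtype)
    return merged
```

In practice you do not hand-roll this; the `mergekit-yaml` commands in the sections below reproduce the released checkpoints from their config files.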
## Long-Long Reasoning Merging

We conduct experiments on the following long-CoT LLMs.

- [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B)
- [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview)
- [NovaSky-AI/Sky-T1-32B-Preview](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Preview)

To reproduce the merged [FuseAI/FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview) model, use the script below.

```sh
cd FuseAI/FuseO1-Preview/mergekit
pip3 install -e .
model_save_dir=xxx # your path to save the merged models
mergekit-yaml fuseo1_configs/FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview.yaml ${model_save_dir}/FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview
```
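The YAML files referenced above live in `fuseo1_configs/` and are not reproduced here. As a rough idea of what a mergekit config for this kind of merge looks like, a hypothetical sketch follows; the merge method name, base model choice, and `select_topk` value are assumptions, not the released settings.

```yaml
# Hypothetical sketch of a mergekit config for fusing the three long-CoT models.
# The real settings are in fuseo1_configs/FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview.yaml.
models:
  - model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
  - model: Qwen/QwQ-32B-Preview
  - model: NovaSky-AI/Sky-T1-32B-Preview
base_model: Qwen/Qwen2.5-32B  # assumed shared base; check the released config
merge_method: sce             # assumes the bundled mergekit exposes SCE under this name
parameters:
  select_topk: 1.0            # illustrative value
dtype: bfloat16
```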
To reproduce the merged [FuseAI/FuseO1-DeekSeekR1-QwQ-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeekSeekR1-QwQ-32B-Preview) model, use the script below.

```sh
cd FuseAI/FuseO1-Preview/mergekit
pip3 install -e .
model_save_dir=xxx # your path to save the merged models
mergekit-yaml fuseo1_configs/FuseO1-DeekSeekR1-QwQ-32B-Preview.yaml ${model_save_dir}/FuseO1-DeekSeekR1-QwQ-32B-Preview
```

We provide example code for using FuseAI/FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview with vLLM.

```python3
from vllm import LLM, SamplingParams

# tensor_parallel_size=8 shards the 32B model across 8 GPUs; adjust to your setup.
llm = LLM(model="FuseAI/FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview", tensor_parallel_size=8)
sampling_params = SamplingParams(max_tokens=32768, temperature=0.7, stop=["<|end▁of▁sentence|>", "<|User|>"], stop_token_ids=[151643, 151644])

conversations = [
    [
        {"role": "system", "content": "You are a helpful and harmless assistant. You should think step-by-step."},
        {"role": "user", "content": "Quadratic polynomials $P(x)$ and $Q(x)$ have leading coefficients $2$ and $-2,$ respectively. The graphs of both polynomials pass through the two points $(16,54)$ and $(20,53).$ Find $P(0) + Q(0).$"},
    ],
]

responses = llm.chat(messages=conversations, sampling_params=sampling_params, use_tqdm=True)

for response in responses:
    print(response.outputs[0].text.strip())
```
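The vLLM example above assumes a node with 8 GPUs (`tensor_parallel_size=8`). If you would rather use plain 🤗 Transformers, a minimal sketch is below; it is not the authors' reference setup, the generation settings simply mirror the vLLM example, and `device_map="auto"` assumes `accelerate` is installed.

```python3
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FuseAI/FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful and harmless assistant. You should think step-by-step."},
    {"role": "user", "content": "Quadratic polynomials $P(x)$ and $Q(x)$ have leading coefficients $2$ and $-2,$ respectively. The graphs of both polynomials pass through the two points $(16,54)$ and $(20,53).$ Find $P(0) + Q(0).$"},
]

# Build the prompt with the model's chat template and sample with settings
# analogous to the vLLM example above.
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=32768, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```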
## Long-Short Reasoning Merging

We conduct experiments on the following long-CoT and short-CoT LLMs.

- [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B)
- [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct)

To reproduce the merged [FuseAI/FuseO1-DeekSeekR1-Qwen2.5-Instruct-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeekSeekR1-Qwen2.5-Instruct-32B-Preview) model, use the script below.

```sh
cd FuseAI/FuseO1-Preview/mergekit
pip3 install -e .
model_save_dir=xxx # your path to save the merged models
mergekit-yaml fuseo1_configs/FuseO1-DeekSeekR1-Qwen2.5-Instruct-32B-Preview.yaml ${model_save_dir}/FuseO1-DeekSeekR1-Qwen2.5-Instruct-32B-Preview
```

We provide example code for using FuseAI/FuseO1-DeekSeekR1-Qwen2.5-Instruct-32B-Preview with vLLM.

```python3
from vllm import LLM, SamplingParams

# tensor_parallel_size=8 shards the 32B model across 8 GPUs; adjust to your setup.
llm = LLM(model="FuseAI/FuseO1-DeekSeekR1-Qwen2.5-Instruct-32B-Preview", tensor_parallel_size=8)
sampling_params = SamplingParams(max_tokens=32768, temperature=0.7, stop=["<|end▁of▁sentence|>", "<|User|>"], stop_token_ids=[151643, 151644])

conversations = [
    [
        {"role": "system", "content": "You are a helpful and harmless assistant. You should think step-by-step."},
        {"role": "user", "content": "Quadratic polynomials $P(x)$ and $Q(x)$ have leading coefficients $2$ and $-2,$ respectively. The graphs of both polynomials pass through the two points $(16,54)$ and $(20,53).$ Find $P(0) + Q(0).$"},
    ],
]

responses = llm.chat(messages=conversations, sampling_params=sampling_params, use_tqdm=True)

for response in responses:
    print(response.outputs[0].text.strip())
```
## Evaluation Results

We test the resulting models on three kinds of benchmarks: **Math Reasoning**, **Code Reasoning**, and **Scientific Reasoning**.

Math Reasoning
- AIME24
- MATH500
- GSM8K

Scientific Reasoning
- GPQA-Diamond
- ARC-Challenge
- MMLU-Pro
- MMLU

Code Reasoning
- LiveCodeBench

The [evaluation code](https://github.com/fanqiwan/FuseAI/tree/main/FuseO1-Preview/evaluation) is modified from [SkyThought](https://github.com/NovaSky-AI/SkyThought). In our evaluation, we set the temperature to 0.7 (sampling) and max_tokens to 32768.

The evaluation results are shown in the table below:

| Models | AIME24 | MATH500 | GSM8K | GPQA-Diamond | ARC-Challenge | MMLU-Pro | MMLU | LiveCodeBench |
|:-| ------ | ------- | ----- | ------------ | ------------- | -------- | ---- | ------------- |
| o1-preview | 44.60 | 85.50 | - | 73.30 | - | - | 90.80 | - |
| o1-mini | 63.60 | 90.00 | - | 60.00 | - | 80.30 | 85.20 | 53.80 |
| [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) | 46.67 | 88.20 | - | 57.58 | - | - | - | - |
| [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview) | 43.33 | 87.80 | 95.45 | 49.49 | 95.73 | 63.49 | 85.19 | 51.86 |
| [NovaSky-AI/Sky-T1-32B-Preview](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Preview) | 43.33 | 86.80 | 95.15 | 50.51 | 95.56 | 65.80 | 82.71 | 51.66 |
| [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) | 20.00 | 81.60 | 93.63 | 46.46 | 95.22 | 56.27 | 79.63 | 48.53 |
| [FuseAI/FuseO1-DeekSeekR1-Qwen2.5-Instruct-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeekSeekR1-Qwen2.5-Instruct-32B-Preview) | 46.67 | 87.20 | - | 55.05 | - | - | - | - |
| [FuseAI/FuseO1-DeekSeekR1-QwQ-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeekSeekR1-QwQ-32B-Preview) | 56.67 | 85.60 | - | 62.12 | - | - | - | - |
| [FuseAI/FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview) | 60.00 | 90.00 | - | 62.12 | - | - | - | - |
## Future Work

This work is our first attempt to achieve knowledge fusion of System-II reasoning LLMs through a model merging approach, which is limited to LLMs with identical scale and architecture. In future work, we plan to employ our [explicit model fusion](https://arxiv.org/abs/2401.10491) method, based on multi-teacher knowledge distillation, and our [implicit model fusion](https://arxiv.org/abs/2412.03187) method, which utilizes weighted-reward preference optimization, for LLMs with different scales and architectures.
Furthermore, we intend to explore the combination of knowledge fusion with reinforcement learning (RL) methods, which have been demonstrated as the most effective approach for enhancing reasoning abilities. Stay tuned for the next version of FuseO1!

## Citations

```
@article{wan2024fusechat,
  title={FuseChat: Knowledge Fusion of Chat Models},
  author={Wan, Fanqi and Zhong, Longguang and Yang, Ziyi and Chen, Ruijun and Quan, Xiaojun},
  journal={arXiv preprint arXiv:2408.07990},
  year={2024}
}
```