Update README.md

# FuseO1-Preview: System-II Reasoning Fusion of LLMs

<!-- **Authors:** -->

_Fanqi Wan, Longguang Zhong, Ziyi Yang, Weizhou Shen, Xinting Huang_

<!-- **Affiliations:** -->

_FuseAI Team_

## Overview

[FuseO1-Preview](https://huggingface.co/collections/FuseAI/fuseo1-preview-678eb56093649b2688bc9977) is our initial endeavor to enhance the System-II reasoning capabilities of large language models (LLMs) through innovative model fusion techniques. By employing our advanced [SCE](https://arxiv.org/abs/2408.07990) merging methodology, we integrate multiple open-source o1-like LLMs into a unified model. Our goal is to incorporate the distinct knowledge and strengths of different reasoning LLMs into a single model with strong System-II reasoning abilities, particularly in mathematics, coding, and science.

<p align="center">
    <img src="./assets/sce.jpg" width="70%"> <br>
</p>
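
For intuition, the following is a toy sketch of the Select-Calculate-Erase (SCE) procedure described in the paper, applied to a single parameter tensor. It is illustrative only: the function, the selection ratio `tau`, and the exact ordering of steps are our assumptions, not the mergekit implementation used to produce the released models.

```python3
import torch

def sce_merge(base: torch.Tensor, models: list[torch.Tensor], tau: float = 0.1) -> torch.Tensor:
    """Toy sketch of an SCE-style merge for one parameter tensor."""
    deltas = torch.stack([m - base for m in models])  # per-model task vectors
    # Select: keep only the top-tau fraction of elements by variance across models.
    var = deltas.var(dim=0)
    k = max(1, int(tau * var.numel()))
    threshold = var.flatten().topk(k).values.min()
    deltas = deltas * (var >= threshold)
    # Calculate: weight each source model by the energy of its selected elements.
    weights = (deltas ** 2).sum(dim=tuple(range(1, deltas.dim())))
    weights = weights / weights.sum()
    # Erase: drop elements whose sign disagrees with the dominant direction.
    majority_sign = deltas.sum(dim=0).sign()
    deltas = deltas * (deltas.sign() == majority_sign)
    # Fuse the weighted task vectors back into the base parameters.
    merged_delta = (weights.view(-1, *([1] * (deltas.dim() - 1))) * deltas).sum(dim=0)
    return base + merged_delta
```

In practice, the released models are produced with the bundled mergekit and the YAML configs under `fuseo1_configs/`, as shown in the reproduction scripts below.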

To achieve this, we conduct two types of model merging:

- **Long-Long Reasoning Merging**: This approach involves model fusion across LLMs that utilize long-CoT reasoning, with the goal of enhancing long-CoT reasoning capabilities. The resulting [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview) achieves a Pass@1 accuracy of **74.0 on AIME24**, demonstrating significant performance improvements compared to OpenAI o1-preview (44.6) and OpenAI o1-mini (63.6), even approaching OpenAI o1 (79.2).
- **Long-Short Reasoning Merging**: This approach involves model fusion between long-CoT and short-CoT LLMs, aiming to improve reasoning capabilities in both long and short reasoning processes. The resulting [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview) is capable of utilizing both long and short reasoning processes and demonstrates relatively strong performance in long reasoning tasks.

| Model | Merge Type | Source Models | HF Link |
|:----- | ---- | ---- | ---- |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview) | Long-Long Reasoning Merge | [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B), [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview), [NovaSky-AI/Sky-T1-32B-Preview](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Preview) | [🤗 Hugging Face](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview), [GGUF](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview-GGUF) |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview) | Long-Long Reasoning Merge | [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B), [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview) | [🤗 Hugging Face](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview) |
| [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview) | Long-Short Reasoning Merge | [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B), [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) | [🤗 Hugging Face](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview) |
| [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview) | Long-Short Reasoning Merge | [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B), [Qwen/Qwen2.5-32B-Coder](https://huggingface.co/Qwen/Qwen2.5-32B-Coder) | [🤗 Hugging Face](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview) |

## Long-Long Reasoning Merging

We conduct experiments on the following long-CoT LLMs.

- [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B)
- [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview)
- [NovaSky-AI/Sky-T1-32B-Preview](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Preview)

To reproduce the merged [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview) model, use the script below.

```sh
cd FuseAI/FuseO1-Preview/mergekit
pip3 install -e .
model_save_dir=xx  # your path to save the merged models
mergekit-yaml fuseo1_configs/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview.yaml ${model_save_dir}/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview --cuda
```

To reproduce the merged [FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview) model, use the script below.

```sh
cd FuseAI/FuseO1-Preview/mergekit
pip3 install -e .
model_save_dir=xxx  # your path to save the merged models
mergekit-yaml fuseo1_configs/FuseO1-DeepSeekR1-QwQ-32B-Preview.yaml ${model_save_dir}/FuseO1-DeepSeekR1-QwQ-32B-Preview --cuda
```

We provide example code to use FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview.

```python3
from vllm import LLM, SamplingParams

llm = LLM(model="FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview", tensor_parallel_size=8)
sampling_params = SamplingParams(max_tokens=32768, temperature=0.7, stop=["<|im_end|>", "<|end▁of▁sentence|>"], stop_token_ids=[151645, 151643])

conversations = [
    [
        {"role": "system", "content": "Please reason step by step, and put your final answer within \\boxed{{}}."},
        {"role": "user", "content": "Quadratic polynomials $P(x)$ and $Q(x)$ have leading coefficients $2$ and $-2,$ respectively. The graphs of both polynomials pass through the two points $(16,54)$ and $(20,53).$ Find $P(0) + Q(0)$."},
    ],
]
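# The rest of this example was elided in the diff view; the lines below are a
# minimal, hypothetical completion: generate and print the model's responses.
responses = llm.chat(messages=conversations, sampling_params=sampling_params)
for response in responses:
    print(response.outputs[0].text)
```

## Long-Short Reasoning Merging

We conduct experiments on the following long-CoT and short-CoT LLMs.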

- [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B)
- [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct)
- [Qwen/Qwen2.5-32B-Coder](https://huggingface.co/Qwen/Qwen2.5-32B-Coder)

To reproduce the merged [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview) model, use the script below.

```sh
cd FuseAI/FuseO1-Preview/mergekit
pip3 install -e .
model_save_dir=xxx  # your path to save the merged models
mergekit-yaml fuseo1_configs/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview.yaml ${model_save_dir}/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview --cuda
```

To reproduce the merged [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview) model, use the script below.

```sh
cd FuseAI/FuseO1-Preview/mergekit
pip3 install -e .
model_save_dir=xxx  # your path to save the merged models
mergekit-yaml fuseo1_configs/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview.yaml ${model_save_dir}/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview --cuda
```

We provide example code to use FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview.

```python3
from vllm import LLM, SamplingParams

llm = LLM(model="FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview", tensor_parallel_size=8)
sampling_params = SamplingParams(max_tokens=32768, temperature=0.7, stop=["<|im_end|>", "<|end▁of▁sentence|>"], stop_token_ids=[151645, 151643])

conversations = [
    [
        {"role": "system", "content": "Please reason step by step, and put your final answer within \\boxed{{}}."},
        {"role": "user", "content": "Quadratic polynomials $P(x)$ and $Q(x)$ have leading coefficients $2$ and $-2,$ respectively. The graphs of both polynomials pass through the two points $(16,54)$ and $(20,53).$ Find $P(0) + Q(0)$."},
    ],
]
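# The rest of this example was elided in the diff view; the lines below are a
# minimal, hypothetical completion mirroring the example above.
responses = llm.chat(messages=conversations, sampling_params=sampling_params)
for response in responses:
    print(response.outputs[0].text)
```

## Evaluation

We test the resulting models on three kinds of benchmarks, including **Math Reasoning**, **Scientific Reasoning**, and **Code Reasoning**.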

Math Reasoning
- AIME24
- MATH500
- OlympiadBench

Scientific Reasoning
- GPQA-Diamond
- MMLU-Pro
- MMLU

Code Reasoning
- LiveCodeBench (2408-2502)

> Important Note: We manually set `"add_bos_token": false` in `tokenizer_config.json` for all the evaluated LLMs to prevent the BOS token from being added twice for each prompt. Please download the models and modify this setting accordingly to ensure consistency.
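
For reference, a minimal sketch of this one-line change (the local model path is illustrative):

```python3
import json

# Illustrative local path to a downloaded model's tokenizer config.
config_path = "models/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview/tokenizer_config.json"

with open(config_path) as f:
    config = json.load(f)

config["add_bos_token"] = False  # avoid a doubled BOS token per prompt

with open(config_path, "w") as f:
    json.dump(config, f, indent=2)
```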

### Math Reasoning

The evaluation code is modified from [Qwen2.5-Math](https://github.com/QwenLM/Qwen2.5-Math). In our evaluation, we set the temperature to 0.6, the top-p to 0.95, and the max_tokens to 32768. We provide an example to reproduce our results in [math_evaluation](https://github.com/fanqiwan/FuseAI/tree/main/FuseO1-Preview/math_evaluation).
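
For reference, these decoding settings correspond to the following vLLM `SamplingParams` (a minimal sketch; the evaluation repo may configure additional options):

```python3
from vllm import SamplingParams

# Decoding settings described above for math evaluation.
sampling_params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=32768)
```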

The system prompt for evaluation is set to:

```sh
Please reason step by step, and put your final answer within \\boxed{{}}.
```

In our evaluation of AIME24, we follow the method from DeepSeek-R1: Pass@1 is computed by averaging the results across 32 sampled responses per prompt, while Cons@32 is determined through self-consistency analysis of the same 32 sampled responses for each prompt. For the other benchmarks, we sample only one response and report Pass@1.

The evaluation results are shown in the table below:

| Models | AIME24 Pass@1 | AIME24 Cons@32 | MATH500 | OlympiadBench |
|:------ | ------------- | -------------- | ------- | ------------- |
| OpenAI o1 | 79.2 | - | 96.4 | - |
| OpenAI o1-preview | 44.6 | - | 85.5 | - |
| OpenAI o1-mini | 63.6 | - | 90.0 | - |
| DeepSeek R1 | 79.8 | - | 97.3 | - |
| [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) | 69.2 | 83.3 | 93.6 | 64.3 |
| [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview) | 43.8 | 56.7 | 88.4 | 60.3 |
| [NovaSky-AI/Sky-T1-32B-Preview](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Preview) | 37.7 | 50.0 | 88.0 | 55.1 |
| [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) | 17.0 | 20.0 | 81.8 | 48.1 |
| [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview) | 68.6 | 83.3 | 94.6 | 64.9 |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview) | 69.7 | 83.3 | 94.6 | 64.0 |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview) | 74.0 | 86.7 | 94.8 | 65.0 |

We show that our merged FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview demonstrates superior performance compared to DeepSeek-R1-Distill-Qwen-32B, QwQ-32B-Preview, and Sky-T1-32B-Preview on math reasoning. Specifically, our model achieves an accuracy of **74.0 Pass@1 and 86.7 Cons@32 on AIME24**, a significant improvement over DeepSeek-R1-Distill-Qwen-32B (69.2 Pass@1 and 83.3 Cons@32), OpenAI o1-preview (44.6 Pass@1), and OpenAI o1-mini (63.6 Pass@1), even approaching OpenAI o1 (79.2 Pass@1).
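
For clarity, a minimal sketch of how Pass@1 and Cons@32 can be computed from the 32 sampled answers (hypothetical helpers, not the released evaluation code):

```python3
from collections import Counter

def pass_at_1(sampled_answers: list[str], reference: str) -> float:
    # Pass@1: average correctness over the independent samples.
    return sum(a == reference for a in sampled_answers) / len(sampled_answers)

def cons_at_32(sampled_answers: list[str], reference: str) -> float:
    # Cons@32: the self-consistency (majority-vote) answer, scored once.
    majority_answer, _ = Counter(sampled_answers).most_common(1)[0]
    return float(majority_answer == reference)
```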

### Scientific Reasoning

The evaluation code is modified from [SkyThought](https://github.com/NovaSky-AI/SkyThought). In our evaluation, we set the temperature to 0.7 and the max_tokens to 32768. We provide an example to reproduce our results in [evaluation](https://github.com/fanqiwan/FuseAI/tree/main/FuseO1-Preview/evaluation).

The system prompt for evaluation is set to:

```sh
You are a helpful and harmless assistant. You should think step-by-step.
```

The evaluation results are shown in the table below:

| Models | GPQA-Diamond | MMLU-Pro | MMLU |
|:------ | ------------ | -------- | ---- |
| OpenAI o1 | 75.7 | - | 91.8 |
| OpenAI o1-preview | 73.3 | - | 90.8 |
| OpenAI o1-mini | 60.0 | 80.3 | 85.2 |
| DeepSeek R1 | 71.5 | 84.0 | 90.8 |
| [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) | 57.6 | 68.7 | 82.2 |
| [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview) | 49.5 | 63.5 | 85.2 |
| [NovaSky-AI/Sky-T1-32B-Preview](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Preview) | 50.5 | 65.8 | 82.7 |
| [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) | 46.5 | 56.3 | 79.6 |
| [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview) | 55.1 | 68.6 | 82.0 |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview) | 62.1 | 68.9 | 82.7 |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview) | 62.1 | 70.8 | 83.6 |

We show that our merged FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview demonstrates superior performance compared to DeepSeek-R1-Distill-Qwen-32B, QwQ-32B-Preview, and Sky-T1-32B-Preview on scientific reasoning. Specifically, our model achieves an accuracy of **62.1 on GPQA-Diamond and 70.8 on MMLU-Pro**, a significant improvement over DeepSeek-R1-Distill-Qwen-32B (57.6 on GPQA-Diamond and 68.7 on MMLU-Pro).

### Code Reasoning

The evaluation code is modified from [Qwen2.5-Coder](https://github.com/QwenLM/Qwen2.5-Coder/tree/main/qwencoder-eval/reasoning/livecode_bench_cot). In our evaluation, we set the temperature to 0.6, the top-p to 0.95, and the max_tokens to 32768. We provide an example to reproduce our results in [code_evaluation](https://github.com/fanqiwan/FuseAI/tree/main/FuseO1-Preview/code_evaluation).

The system prompt for evaluation is set to:

```sh
A conversation between User and Assistant. The user asks a question, and the Assistant solves it. The assistant first thinks about the reasoning process in the mind and then provides the user with the answer. The reasoning process and answer are enclosed within <think> </think> and <answer> </answer> tags, respectively, i.e., <think> reasoning process here </think> <answer> answer here </answer>.
```

In our evaluation of LiveCodeBench, we follow the method from DeepSeek-R1 with a slight modification: Pass@1 is computed by averaging the results across 16 sampled responses per prompt.

The evaluation results are shown in the table below:

| Models | LiveCodeBench | LiveCodeBench-Easy | LiveCodeBench-Medium | LiveCodeBench-Hard |
|:------ | ------------- | ------------------ | -------------------- | ------------------ |
| OpenAI o1 | 63.4 | 98.5 | 80.9 | 31.7 |
| OpenAI o1-preview | 42.7 | 97.0 | 47.2 | 9.8 |
| OpenAI o1-mini | 52.0 | 91.0 | 67.4 | 19.5 |
| DeepSeek R1 | 62.8 | 98.4 | 78.3 | 32.2 |
| [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) | 56.1 | 93.6 | 73.1 | 23.4 |
| [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview) | 44.4 | 94.9 | 53.8 | 10.0 |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview) | 57.9 | 93.6 | 76.0 | 25.5 |

We show that our merged FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview demonstrates superior performance compared to DeepSeek-R1-Distill-Qwen-32B and QwQ-32B-Preview on code reasoning. Specifically, our model achieves an accuracy of **57.9 on LiveCodeBench and 25.5 on LiveCodeBench-Hard**, a significant improvement over DeepSeek-R1-Distill-Qwen-32B (56.1 on LiveCodeBench and 23.4 on LiveCodeBench-Hard), OpenAI o1-preview (42.7 on LiveCodeBench and 9.8 on LiveCodeBench-Hard), and OpenAI o1-mini (52.0 on LiveCodeBench and 19.5 on LiveCodeBench-Hard).

## Future Works