Files changed (1)
  1. README.md +106 -0
README.md CHANGED
@@ -107,6 +107,98 @@ model-index:
     source:
       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MTSAIR/MultiVerse_70B
       name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: IFEval (0-Shot)
+      type: HuggingFaceH4/ifeval
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: inst_level_strict_acc and prompt_level_strict_acc
+      value: 52.49
+      name: strict accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MTSAIR/MultiVerse_70B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: BBH (3-Shot)
+      type: BBH
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc_norm
+      value: 46.14
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MTSAIR/MultiVerse_70B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MATH Lvl 5 (4-Shot)
+      type: hendrycks/competition_math
+      args:
+        num_few_shot: 4
+    metrics:
+    - type: exact_match
+      value: 16.16
+      name: exact match
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MTSAIR/MultiVerse_70B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GPQA (0-shot)
+      type: Idavidrein/gpqa
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 13.87
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MTSAIR/MultiVerse_70B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MuSR (0-shot)
+      type: TAUR-Lab/MuSR
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 18.82
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MTSAIR/MultiVerse_70B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU-PRO (5-shot)
+      type: TIGER-Lab/MMLU-Pro
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 42.89
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MTSAIR/MultiVerse_70B
+      name: Open LLM Leaderboard
 ---
 ## This model is based on Qwen 72B
 **Note:**
@@ -126,3 +218,17 @@ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-le
 |Winogrande (5-shot) |87.53|
 |GSM8k (5-shot) |76.65|
 
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MTSAIR__MultiVerse_70B)
+
+| Metric            |Value|
+|-------------------|----:|
+|Avg.               |31.73|
+|IFEval (0-Shot)    |52.49|
+|BBH (3-Shot)       |46.14|
+|MATH Lvl 5 (4-Shot)|16.16|
+|GPQA (0-shot)      |13.87|
+|MuSR (0-shot)      |18.82|
+|MMLU-PRO (5-shot)  |42.89|
+
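As a quick sanity check on the new `Avg.` row, the Open LLM Leaderboard v2 average is the unweighted mean of the six benchmark scores this PR adds. A minimal sketch in plain Python (no dependencies; the names and values are copied from the table above):

```python
# Recompute the leaderboard average from the six per-benchmark
# scores added in this PR and compare it to the reported "Avg." row.
scores = {
    "IFEval (0-Shot)":     52.49,
    "BBH (3-Shot)":        46.14,
    "MATH Lvl 5 (4-Shot)": 16.16,
    "GPQA (0-shot)":       13.87,
    "MuSR (0-shot)":       18.82,
    "MMLU-PRO (5-shot)":   42.89,
}

# Unweighted mean over the six benchmarks.
avg = sum(scores.values()) / len(scores)
print(f"Avg. = {avg:.2f}")  # 31.73, matching the table above
```

Once the PR is merged, the same numbers should also be retrievable programmatically, e.g. via `huggingface_hub.ModelCard.load("MTSAIR/MultiVerse_70B").data.eval_results`, which parses the `model-index` block added above.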