Adding Evaluation Results

#1
by sthenno - opened
Files changed (1)
  1. README.md +117 -9
README.md CHANGED
@@ -1,19 +1,114 @@
  ---
- license: apache-2.0
- datasets:
- - nvidia/HelpSteer2
  language:
  - en
  - zh
- metrics:
- - accuracy
- base_model:
- - sthenno/tempesthenno-14b-nuslerp-0111
- - sthenno/tempesthenno-hs2-rm
+ license: apache-2.0
  tags:
  - RLHF
  - PPO
  - custom-research
+ base_model:
+ - sthenno/tempesthenno-14b-nuslerp-0111
+ - sthenno/tempesthenno-hs2-rm
+ datasets:
+ - nvidia/HelpSteer2
+ metrics:
+ - accuracy
+ model-index:
+ - name: tempesthenno-ppo-ckpt40
+   results:
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: IFEval (0-Shot)
+       type: HuggingFaceH4/ifeval
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: inst_level_strict_acc and prompt_level_strict_acc
+       value: 79.23
+       name: strict accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sthenno/tempesthenno-ppo-ckpt40
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: BBH (3-Shot)
+       type: BBH
+       args:
+         num_few_shot: 3
+     metrics:
+     - type: acc_norm
+       value: 50.57
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sthenno/tempesthenno-ppo-ckpt40
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MATH Lvl 5 (4-Shot)
+       type: hendrycks/competition_math
+       args:
+         num_few_shot: 4
+     metrics:
+     - type: exact_match
+       value: 34.21
+       name: exact match
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sthenno/tempesthenno-ppo-ckpt40
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: GPQA (0-shot)
+       type: Idavidrein/gpqa
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: acc_norm
+       value: 17.0
+       name: acc_norm
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sthenno/tempesthenno-ppo-ckpt40
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MuSR (0-shot)
+       type: TAUR-Lab/MuSR
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: acc_norm
+       value: 14.56
+       name: acc_norm
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sthenno/tempesthenno-ppo-ckpt40
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MMLU-PRO (5-shot)
+       type: TIGER-Lab/MMLU-Pro
+       config: main
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 47.69
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sthenno/tempesthenno-ppo-ckpt40
+       name: Open LLM Leaderboard
  ---
  # tempesthenno--nuslerp (BASE MODEL)

@@ -122,4 +217,17 @@ slices:
  weight: 0.60
  nuslerp_flatten: true

- ```
+ ```
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/sthenno__tempesthenno-ppo-ckpt40-details)
+
+ | Metric             |Value|
+ |-------------------|----:|
+ |Avg.               |40.55|
+ |IFEval (0-Shot)    |79.23|
+ |BBH (3-Shot)       |50.57|
+ |MATH Lvl 5 (4-Shot)|34.21|
+ |GPQA (0-shot)      |17.00|
+ |MuSR (0-shot)      |14.56|
+ |MMLU-PRO (5-shot)  |47.69|
+
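For anyone who wants to consume the newly added `model-index` block programmatically rather than read the table, the sketch below (not part of this PR) loads the card with `huggingface_hub` and prints the parsed evaluation results; the repo id and the simple-mean check against the `Avg.` row are assumptions inferred from the values shown above.

```python
# Minimal sketch, assuming a recent `huggingface_hub` and that this PR has been
# merged so the card carries the `model-index` metadata shown above.
from huggingface_hub import ModelCard

card = ModelCard.load("sthenno/tempesthenno-ppo-ckpt40")

# `eval_results` is parsed from the `model-index` section of the card metadata.
for r in card.data.eval_results:
    print(f"{r.dataset_name:<20} {r.metric_type:<50} {r.metric_value}")

# Sanity check: the Avg. row looks like the arithmetic mean of the six scores.
scores = [79.23, 50.57, 34.21, 17.00, 14.56, 47.69]
print(f"mean = {sum(scores) / len(scores):.2f}")  # ~40.54 from the rounded values; the card reports 40.55
```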