leaderboard-pr-bot committed
Commit 79c6a9a · 1 Parent(s): 1afdc98

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions
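The `model-index` entries this PR appends are machine-readable, so downstream tools can pull the scores straight out of the card metadata. A minimal sketch of that structure, with one entry hand-copied from the diff below into plain Python data (the `scores` flattening is illustrative, not part of the bot):

```python
# One model-index entry from this PR, hand-copied into Python for illustration.
model_index = [{
    "name": "Llama3.1-8B-PlumChat",
    "results": [
        {"task": {"type": "text-generation", "name": "Text Generation"},
         "dataset": {"name": "IFEval (0-Shot)", "type": "HuggingFaceH4/ifeval",
                     "args": {"num_few_shot": 0}},
         "metrics": [{"type": "inst_level_strict_acc and prompt_level_strict_acc",
                      "value": 42.43, "name": "strict accuracy"}]},
    ],
}]

# Flatten to {dataset name: first metric value}, the shape the results table uses.
scores = {r["dataset"]["name"]: r["metrics"][0]["value"]
          for r in model_index[0]["results"]}
print(scores)  # {'IFEval (0-Shot)': 42.43}
```

Each result pairs a `dataset` (with its few-shot setting under `args`) with the `metrics` measured on it and a `source` link back to the leaderboard.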

Files changed (1)
  1. README.md +113 -8
README.md CHANGED
@@ -1,9 +1,15 @@
 ---
+library_name: transformers
+tags:
+- mergekit
+- merge
+- conversational
+- chat
+- instruct
 base_model:
 - meta-llama/Llama-3.1-8B-Instruct
 - sequelbox/Llama3.1-8B-MOTH
 - ValiantLabs/Llama3.1-8B-ShiningValiant2
-library_name: transformers
 model-index:
 - name: Llama3.1-8B-PlumChat
   results:
@@ -19,13 +25,98 @@ model-index:
     - type: acc
       value: 72.22
       name: acc
-tags:
-- mergekit
-- merge
-- conversational
-- chat
-- instruct
-
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: IFEval (0-Shot)
+      type: HuggingFaceH4/ifeval
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: inst_level_strict_acc and prompt_level_strict_acc
+      value: 42.43
+      name: strict accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sequelbox/Llama3.1-8B-PlumChat
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: BBH (3-Shot)
+      type: BBH
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc_norm
+      value: 13.94
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sequelbox/Llama3.1-8B-PlumChat
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MATH Lvl 5 (4-Shot)
+      type: hendrycks/competition_math
+      args:
+        num_few_shot: 4
+    metrics:
+    - type: exact_match
+      value: 3.1
+      name: exact match
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sequelbox/Llama3.1-8B-PlumChat
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GPQA (0-shot)
+      type: Idavidrein/gpqa
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 2.01
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sequelbox/Llama3.1-8B-PlumChat
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MuSR (0-shot)
+      type: TAUR-Lab/MuSR
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 4.77
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sequelbox/Llama3.1-8B-PlumChat
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU-PRO (5-shot)
+      type: TIGER-Lab/MMLU-Pro
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 12.52
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sequelbox/Llama3.1-8B-PlumChat
+      name: Open LLM Leaderboard
 ---
 # PlumChat
 
@@ -63,3 +154,17 @@ models:
   base_model: meta-llama/Llama-3.1-8B-Instruct
 
 ```
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_sequelbox__Llama3.1-8B-PlumChat)
+
+| Metric            |Value|
+|-------------------|----:|
+|Avg.               |13.13|
+|IFEval (0-Shot)    |42.43|
+|BBH (3-Shot)       |13.94|
+|MATH Lvl 5 (4-Shot)| 3.10|
+|GPQA (0-shot)      | 2.01|
+|MuSR (0-shot)      | 4.77|
+|MMLU-PRO (5-shot)  |12.52|
+
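The `Avg.` row added to the table appears to be the unweighted mean of the six benchmark scores, rounded to two decimals. A quick check, with the values copied from the table:

```python
# Benchmark scores as shown in the added README table.
scores = {
    "IFEval (0-Shot)": 42.43,
    "BBH (3-Shot)": 13.94,
    "MATH Lvl 5 (4-Shot)": 3.10,
    "GPQA (0-shot)": 2.01,
    "MuSR (0-shot)": 4.77,
    "MMLU-PRO (5-shot)": 12.52,
}

# Unweighted mean over the six benchmarks, rounded like the table.
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 13.13
```

This reproduces the 13.13 average reported above, so no hidden weighting is involved in the summary row.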