T145 committed on
Commit 184eabb · verified · 1 Parent(s): fe2d935

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/T145/open-llm-leaderboard-results-to-modelcard

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

Please report any issues here: https://huggingface.co/spaces/T145/open-llm-leaderboard-results-to-modelcard/discussions
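For reference, once merged, the `model-index` metadata added by this PR can be read back programmatically. A minimal sketch using the `huggingface_hub` library (not part of this PR; the repo id is assumed from the leaderboard URLs in the diff below):

```python
# Minimal sketch: read the eval results that this PR's `model-index` YAML adds.
# Assumes `pip install huggingface_hub` and that the PR has been merged into
# NCSOFT/Llama-VARCO-8B-Instruct (repo id inferred from the leaderboard URLs).
from huggingface_hub import ModelCard

card = ModelCard.load("NCSOFT/Llama-VARCO-8B-Instruct")

# `eval_results` is parsed from the `model-index:` block in the card's YAML.
for result in card.data.eval_results:
    print(f"{result.dataset_name}: {result.metric_name} = {result.metric_value}")
```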

Files changed (1)
  1. README.md +110 -1
README.md CHANGED
@@ -10,6 +10,101 @@ tags:
 base_model:
 - meta-llama/Meta-Llama-3.1-8B
 library_name: transformers
+model-index:
+- name: Llama-VARCO-8B-Instruct
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: IFEval (0-Shot)
+      type: HuggingFaceH4/ifeval
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: inst_level_strict_acc and prompt_level_strict_acc
+      value: 44.7
+      name: strict accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NCSOFT/Llama-VARCO-8B-Instruct
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: BBH (3-Shot)
+      type: BBH
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc_norm
+      value: 29.18
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NCSOFT/Llama-VARCO-8B-Instruct
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MATH Lvl 5 (4-Shot)
+      type: hendrycks/competition_math
+      args:
+        num_few_shot: 4
+    metrics:
+    - type: exact_match
+      value: 9.97
+      name: exact match
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NCSOFT/Llama-VARCO-8B-Instruct
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GPQA (0-shot)
+      type: Idavidrein/gpqa
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 6.26
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NCSOFT/Llama-VARCO-8B-Instruct
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MuSR (0-shot)
+      type: TAUR-Lab/MuSR
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 10.78
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NCSOFT/Llama-VARCO-8B-Instruct
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU-PRO (5-shot)
+      type: TIGER-Lab/MMLU-Pro
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 24.33
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NCSOFT/Llama-VARCO-8B-Instruct
+      name: Open LLM Leaderboard
 ---
 
 ## Llama-VARCO-8B-Instruct
@@ -73,4 +168,18 @@ We used the [LogicKor](https://github.com/instructkr/LogicKor) code to measure p
 | [EXAONE-3.0-7.8B-Instruct](https://huggingface.co/LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct)| 6.86 / 7.71 | 8.57 / 6.71 | 10.0 / 9.29 | 9.43 / 10.0 | 10.0 / 10.0 | 9.57 / 5.14 | 9.07 | 8.14 | 8.61 |
 | [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)| 4.29 / 4.86 | 6.43 / 6.57 | 6.71 / 5.14 | 6.57 / 6.00 | 4.29 / 4.14 | 6.00 / 4.00 | 5.71 | 5.12 | 5.42 |
 | [Gemma-2-9B-Instruct](https://huggingface.co/google/gemma-2-9b-it)| 6.14 / 5.86 | 9.29 / 9.0 | 9.29 / 8.57 | 9.29 / 9.14 | 8.43 / 8.43 | 7.86 / 4.43 | 8.38 | 7.57 | 7.98
-| [Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct)| 5.57 / 4.86 | 7.71 / 6.43 | 7.43 / 7.00 | 7.43 / 8.00 | 7.86 / 8.71 | 6.29 / 3.29 | 7.05 | 6.38 | 6.71 |
+| [Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct)| 5.57 / 4.86 | 7.71 / 6.43 | 7.43 / 7.00 | 7.43 / 8.00 | 7.86 / 8.71 | 6.29 / 3.29 | 7.05 | 6.38 | 6.71 |
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/NCSOFT__Llama-VARCO-8B-Instruct-details)!
+Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=NCSOFT/Llama-VARCO-8B-Instruct)!
+
+| Metric             | % Value |
+|--------------------|--------:|
+| Avg.               |   20.87 |
+| IFEval (0-Shot)    |   44.70 |
+| BBH (3-Shot)       |   29.18 |
+| MATH Lvl 5 (4-Shot)|    9.97 |
+| GPQA (0-shot)      |    6.26 |
+| MuSR (0-shot)      |   10.78 |
+| MMLU-PRO (5-shot)  |   24.33 |
+
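The `Avg.` row in the added table is the arithmetic mean of the six benchmark scores; a quick check:

```python
# Verify the Avg. row: arithmetic mean of the six leaderboard scores.
scores = [44.70, 29.18, 9.97, 6.26, 10.78, 24.33]
print(round(sum(scores) / len(scores), 2))  # -> 20.87
```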