leaderboard-pr-bot committed
Commit 27a5b4f
1 Parent(s): 1e1796b

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.
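
Once merged, these results live in the `model-index` block of the card's YAML front matter, which downstream tooling can read back. A minimal sketch, assuming `huggingface_hub`'s `ModelCard` API (not part of this PR):

```python
# Sketch only (not part of this PR): read the model-index evaluation
# results back from the merged model card via huggingface_hub.
from huggingface_hub import ModelCard

card = ModelCard.load("Reverb/Mistral-7B-LoreWeaver")  # the repo this PR targets
for result in card.data.eval_results or []:
    print(f"{result.dataset_name}: {result.metric_type} = {result.metric_value}")
```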

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions

Files changed (1)
  1. README.md +121 -5
README.md CHANGED
@@ -1,14 +1,117 @@
 ---
-library_name: peft
-base_model: mistralai/Mistral-7B-v0.1
+language:
+- en
 license: mit
+library_name: peft
 datasets:
 - AtlasUnified/atlas-storyteller
-language:
-- en
 metrics:
 - perplexity
+base_model: mistralai/Mistral-7B-v0.1
 pipeline_tag: text-generation
+model-index:
+- name: Mistral-7B-LoreWeaver
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      value: 59.98
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Reverb/Mistral-7B-LoreWeaver
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      value: 83.29
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Reverb/Mistral-7B-LoreWeaver
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 64.12
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Reverb/Mistral-7B-LoreWeaver
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 42.15
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Reverb/Mistral-7B-LoreWeaver
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 78.37
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Reverb/Mistral-7B-LoreWeaver
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 37.68
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Reverb/Mistral-7B-LoreWeaver
+      name: Open LLM Leaderboard
 ---
 
 # Model Card for Model ID
 
@@ -225,4 +328,17 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]
 
 ### Framework versions
 
-- PEFT 0.7.1
+- PEFT 0.7.1
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Reverb__Mistral-7B-LoreWeaver)
+
+| Metric                          |Value|
+|---------------------------------|----:|
+|Avg.                             |60.93|
+|AI2 Reasoning Challenge (25-Shot)|59.98|
+|HellaSwag (10-Shot)              |83.29|
+|MMLU (5-Shot)                    |64.12|
+|TruthfulQA (0-shot)              |42.15|
+|Winogrande (5-shot)              |78.37|
+|GSM8k (5-shot)                   |37.68|
+
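
For reference, the `Avg.` row is the arithmetic mean of the six benchmark scores; a quick standalone check (illustrative script, not part of the diff):

```python
# Illustrative check (not part of the diff): Avg. is the arithmetic
# mean of the six Open LLM Leaderboard benchmark scores above.
scores = [59.98, 83.29, 64.12, 42.15, 78.37, 37.68]
print(f"{sum(scores) / len(scores):.2f}")  # -> 60.93
```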