ericzzz committed on Commit 47302c3 · verified · 1 Parent(s): 5fd4a73

Update README.md

Files changed (1)
  1. README.md +24 -31
README.md CHANGED
@@ -8,7 +8,6 @@ datasets:
 - HuggingFaceH4/ultrachat_200k
 - openchat/openchat_sharegpt4_dataset
 - Open-Orca/SlimOrca
-pipeline_tag: conversational
 inference: false
 model-index:
 - name: falcon-rw-1b-chat
@@ -28,7 +27,8 @@ model-index:
       value: 35.58
       name: normalized accuracy
     source:
-      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-chat
+      url: >-
+        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-chat
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -44,7 +44,8 @@ model-index:
       value: 61.12
       name: normalized accuracy
     source:
-      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-chat
+      url: >-
+        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-chat
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -61,7 +62,8 @@ model-index:
       value: 24.51
       name: accuracy
     source:
-      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-chat
+      url: >-
+        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-chat
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -77,7 +79,8 @@ model-index:
     - type: mc2
       value: 39.62
     source:
-      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-chat
+      url: >-
+        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-chat
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -94,7 +97,8 @@ model-index:
       value: 61.72
       name: accuracy
     source:
-      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-chat
+      url: >-
+        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-chat
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -111,8 +115,10 @@ model-index:
       value: 1.67
       name: accuracy
     source:
-      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-chat
+      url: >-
+        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-chat
       name: Open LLM Leaderboard
+pipeline_tag: text-generation
 ---
 # 🌟 Falcon-RW-1B-Chat
 
@@ -122,19 +128,18 @@ model-index:
 
 The underlying Falcon-RW-1B-Instruct-OpenOrca model is built on the [Falcon-RW-1B](https://huggingface.co/tiiuae/falcon-rw-1b), a causal decoder-only model. It has been instruction-finetuned using the [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca) dataset.
 
-**📊 Evaluation Results**
-
-Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ericzzz__falcon-rw-1b-chat).
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ericzzz__falcon-rw-1b-chat)
 
-| Metric      | Falcon-RW-1b-Chat | Falcon-RW-1b |
-|-------------|-------------------|--------------|
-| ARC         | 35.58             | 35.07        |
-| HellaSwag   | 61.12             | 63.56        |
-| MMLU        | 24.51             | 25.28        |
-| TruthfulQA  | 39.62             | 35.96        |
-| Winogrande  | 61.72             | 62.04        |
-| GSM8K       | 1.67              | 0.53         |
-| **Average** | **37.37**         | **37.07**    |
+| Metric                          |Value|
+|---------------------------------|----:|
+|Avg.                             |37.37|
+|AI2 Reasoning Challenge (25-Shot)|35.58|
+|HellaSwag (10-Shot)              |61.12|
+|MMLU (5-Shot)                    |24.51|
+|TruthfulQA (0-shot)              |39.62|
+|Winogrande (5-shot)              |61.72|
+|GSM8k (5-shot)                   | 1.67|
 
 **🎯 Purpose**
 
@@ -185,16 +190,4 @@ The model is provided 'as is' without any warranties, and the creators are not l
 ## 📬 Contact
 
 For further inquiries or feedback, please contact at [email protected].
-# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
-Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ericzzz__falcon-rw-1b-chat)
-
-| Metric                          |Value|
-|---------------------------------|----:|
-|Avg.                             |37.37|
-|AI2 Reasoning Challenge (25-Shot)|35.58|
-|HellaSwag (10-Shot)              |61.12|
-|MMLU (5-Shot)                    |24.51|
-|TruthfulQA (0-shot)              |39.62|
-|Winogrande (5-shot)              |61.72|
-|GSM8k (5-shot)                   | 1.67|
 
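Most hunks in this commit make the same mechanical change: each long leaderboard URL is rewritten as a YAML `>-` (folded, strip-chomped) block scalar so the metadata stays within reasonable line lengths. A minimal sketch, assuming PyYAML is available (not part of this commit), showing that the folded form parses back to the identical single-line URL:

```python
# Sketch: a ">-" folded block scalar joins its indented lines into one
# string and strips the trailing newline, so the parsed URL is identical
# to the old inline form.
import yaml  # third-party: pip install pyyaml

doc = """\
source:
  url: >-
    https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-chat
  name: Open LLM Leaderboard
"""

parsed = yaml.safe_load(doc)
print(parsed["source"]["url"])
```

The `-` chomping indicator matters here: plain `>` would leave a trailing newline on the URL, while `>-` strips it, giving a clean single-line value.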
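As a sanity check on the leaderboard tables in this diff, the "Avg." row is the plain arithmetic mean of the six benchmark scores. A quick sketch (not part of the card itself) verifying the figure:

```python
# Verify the "Avg." row of the Open LLM Leaderboard table: it is the
# unweighted mean of the six benchmark scores reported in the card.
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 35.58,
    "HellaSwag (10-Shot)": 61.12,
    "MMLU (5-Shot)": 24.51,
    "TruthfulQA (0-shot)": 39.62,
    "Winogrande (5-shot)": 61.72,
    "GSM8k (5-shot)": 1.67,
}

avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 37.37, matching the table
```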