# ChanMalion

GPT-J_4Chan merged 50/50 with Pygmalion-6b.
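
The sketch below illustrates what a 50/50 merge of two same-architecture checkpoints typically looks like: every parameter tensor is averaged element-wise. This is only an illustration, not the exact procedure used for this model; the repo IDs and output path are placeholders.

```python
# Minimal sketch of a 50/50 weight merge between two GPT-J-family checkpoints.
# Assumes both checkpoints share the same architecture and parameter names.
import torch
from transformers import AutoModelForCausalLM

BASE_ID = "PygmalionAI/pygmalion-6b"   # placeholder: Pygmalion-6b checkpoint
DONOR_ID = "path/to/gpt-j-4chan"       # placeholder: GPT-J_4Chan checkpoint

base = AutoModelForCausalLM.from_pretrained(BASE_ID, torch_dtype=torch.float16)
donor = AutoModelForCausalLM.from_pretrained(DONOR_ID, torch_dtype=torch.float16)

# Average every matching parameter tensor 50/50.
donor_state = donor.state_dict()
merged_state = {
    name: (param + donor_state[name]) * 0.5
    for name, param in base.state_dict().items()
}

base.load_state_dict(merged_state)
base.save_pretrained("chanmalion-merged")  # placeholder output directory
```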

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 34.74 |
| ARC (25-shot) | 41.89 |
| HellaSwag (10-shot) | 68.25 |
| MMLU (5-shot) | 27.29 |
| TruthfulQA (0-shot) | 33.89 |
| Winogrande (5-shot) | 65.35 |
| GSM8K (5-shot) | 1.67 |
| DROP (3-shot) | 4.85 |