una-neural-chat-v3-3-phase2
OMA (OneManArmy) proudly presents una-neural-chat-v3-3, PHASE 2. Powered by UNA (Uniform Neural Alignment), trained with the zephyr trainer on the cleaned allenai/ultrafeedback dataset, and just that.
It outperforms its base model without adding any data: just the UNA algorithm on the Transformers library.
UNA Settings:
- MLP : 0.05
- ATT : 0.03
- LNOR : 0.02
Framework versions
- Transformers 4.35.0-UNA
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1
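The card lists framework versions but no usage snippet. Below is a minimal inference sketch with the Transformers library, using the model id from this card. The `### System:` / `### User:` / `### Assistant:` prompt format is an assumption inferred from the Intel neural-chat lineage and should be verified against the base model's card; the generation parameters are illustrative defaults, not the author's settings.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "one-man-army/una-neural-chat-v3-3-P2-OMA"


def build_prompt(system: str, user: str) -> str:
    # Prompt layout assumed from the Intel neural-chat base models;
    # check the base model card before relying on it.
    return f"### System:\n{system}\n### User:\n{user}\n### Assistant:\n"


def generate(user_message: str,
             system_message: str = "You are a helpful assistant.") -> str:
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(build_prompt(system_message, user_message),
                       return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256,
                            do_sample=True, temperature=0.7)
    # Decode only the newly generated tokens, not the prompt.
    new_tokens = output[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

Loading a 7B model requires a GPU with roughly 16 GB of memory in fp16 (or quantization via `load_in_4bit`/`load_in_8bit`).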
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 70.72 |
| AI2 Reasoning Challenge (25-Shot) | 67.32 |
| HellaSwag (10-Shot) | 86.33 |
| MMLU (5-Shot) | 63.14 |
| TruthfulQA (0-shot) | 65.49 |
| Winogrande (5-shot) | 79.79 |
| GSM8k (5-shot) | 62.24 |
Model tree for one-man-army/una-neural-chat-v3-3-P2-OMA
- Base model: mistralai/Mistral-7B-v0.1
- Finetuned: Intel/neural-chat-7b-v3-1
- Finetuned: Intel/neural-chat-7b-v3-3
- Finetuned: one-man-army/una-neural-chat-v3-3-P1-OMA