# Model Card for BlackBeenie/Neos-Gemma-2-9b
A Gemma-2-9b model fine-tuned with the ORPO trainer.
## Training Procedure
Trained with TRL's `ORPOTrainer` using rank-stabilized LoRA (rsLoRA).
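ORPO (Odds Ratio Preference Optimization) adds an odds-ratio penalty to the standard SFT loss so the model assigns higher likelihood to the chosen response than to the rejected one. A minimal numeric sketch of that penalty term (illustrative only, not the trainer's actual implementation):

```python
import math

def orpo_odds_ratio_loss(p_chosen: float, p_rejected: float) -> float:
    """ORPO penalty term: -log sigmoid(log odds ratio of chosen vs. rejected)."""
    odds = lambda p: p / (1.0 - p)
    log_odds_ratio = math.log(odds(p_chosen) / odds(p_rejected))
    # Numerically this is -log(sigmoid(log_odds_ratio)).
    return -math.log(1.0 / (1.0 + math.exp(-log_odds_ratio)))

# Strong preference for the chosen response -> small penalty;
# no preference (equal probabilities) -> penalty of log(2) ≈ 0.693.
print(orpo_odds_ratio_loss(0.9, 0.1))
print(orpo_odds_ratio_loss(0.5, 0.5))
```

During training this penalty is added to the cross-entropy loss on the chosen response, weighted by a coefficient (exposed as `beta` in TRL's `ORPOConfig`).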
## Dataset
Trained on the [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k) dataset.
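The setup above can be sketched as a TRL + PEFT configuration. This is a hypothetical sketch: the ranks, scaling, and `beta` below are illustrative, not the values actually used for this model.

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import ORPOConfig, ORPOTrainer

# Preference dataset with "chosen"/"rejected" columns, as ORPOTrainer expects.
dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")

# rsLoRA rescales adapter output by lora_alpha / sqrt(r) instead of lora_alpha / r,
# which stabilizes training at higher ranks.
peft_config = LoraConfig(
    r=16,                 # illustrative rank
    lora_alpha=32,        # illustrative scaling
    use_rslora=True,
    task_type="CAUSAL_LM",
)

training_args = ORPOConfig(
    beta=0.1,             # weight of the odds-ratio penalty (illustrative)
    output_dir="neos-gemma-2-9b-orpo",
)

trainer = ORPOTrainer(
    model="google/gemma-2-9b",  # base model
    args=training_args,
    train_dataset=dataset,
    peft_config=peft_config,
)
trainer.train()
```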
## Open LLM Leaderboard Evaluation Results
Detailed results can be found on the Open LLM Leaderboard.
| Metric | Value |
|---|---|
| Avg. | 25.21 |
| IFEval (0-shot) | 58.76 |
| BBH (3-shot) | 35.64 |
| MATH Lvl 5 (4-shot) | 8.23 |
| GPQA (0-shot) | 9.73 |
| MuSR (0-shot) | 5.79 |
| MMLU-PRO (5-shot) | 33.12 |
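The leaderboard average is the unweighted mean of the six benchmark scores, which can be checked directly:

```python
scores = {
    "IFEval": 58.76,
    "BBH": 35.64,
    "MATH Lvl 5": 8.23,
    "GPQA": 9.73,
    "MuSR": 5.79,
    "MMLU-PRO": 33.12,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # → 25.21
```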