
Gemma2-Gutenberg-Doppel-9B

UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3 finetuned on jondurbin/gutenberg-dpo-v0.1 and nbeerbower/gutenberg2-dpo.
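
For reference, a minimal inference sketch (not part of the original card): it loads the published checkpoint with transformers and the Gemma chat template; the prompt and sampling settings are illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/Gemma2-Gutenberg-Doppel-9B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Gemma-2 chat models take user/model turns; build a prompt with the chat template.
messages = [{"role": "user", "content": "Write the opening paragraph of a gothic short story."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```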

Method

Finetuned with ORPO on 2x NVIDIA A40 GPUs for 3 epochs.
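
The card does not include the training script; the sketch below shows roughly what an ORPO run over the two named datasets could look like with TRL's ORPOTrainer. Everything beyond the base model, the datasets, and the 3 epochs (batch size, learning rate, beta, sequence lengths, attention implementation) is an illustrative assumption, not the author's configuration.

```python
import torch
from datasets import load_dataset, concatenate_datasets
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3"
tokenizer = AutoTokenizer.from_pretrained(base)
# Eager attention is the usual recommendation when training Gemma-2 models.
model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.bfloat16, attn_implementation="eager"
)

# Combine the two Gutenberg preference datasets named above, assuming both
# expose the prompt/chosen/rejected columns that ORPOTrainer expects.
cols = ["prompt", "chosen", "rejected"]
train_dataset = concatenate_datasets([
    load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train").select_columns(cols),
    load_dataset("nbeerbower/gutenberg2-dpo", split="train").select_columns(cols),
])

config = ORPOConfig(
    output_dir="Gemma2-Gutenberg-Doppel-9B",
    num_train_epochs=3,              # matches the card
    per_device_train_batch_size=1,   # illustrative
    gradient_accumulation_steps=8,   # illustrative
    learning_rate=5e-6,              # illustrative
    beta=0.1,                        # ORPO odds-ratio weight, illustrative
    max_length=2048,                 # illustrative
    max_prompt_length=1024,          # illustrative
    bf16=True,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # older TRL releases use tokenizer= instead
)
trainer.train()
```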

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

| Metric | Value |
|---|---:|
| Avg. | 29.82 |
| IFEval (0-shot) | 71.71 |
| BBH (3-shot) | 41.08 |
| MATH Lvl 5 (4-shot) | 3.47 |
| GPQA (0-shot) | 10.63 |
| MuSR (0-shot) | 17.30 |
| MMLU-PRO (5-shot) | 34.75 |
Model size: 9.24B params (BF16, Safetensors)
