---
license: apache-2.0
datasets:
- abacusai/MetaMathFewshot
---

Finetune of the pre-DPO Bagel model (https://huggingface.co/jondurbin/bagel-34b-v0.2) on the MetaMathFewshot (https://huggingface.co/datasets/abacusai/MetaMathFewshot) dataset.

### Evaluation Results

| Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- |
|  |  |  |  |  |  |  |

For comparison, the GSM8K score for the original `metamath/MetaMath-Mistral-7B` was 46.17 and its average score was 69.7.
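
### Usage

A minimal inference sketch with Hugging Face `transformers` is shown below. The repository ID is a placeholder (this card does not state the published model ID), and the dtype, device map, and generation settings are illustrative assumptions rather than recommended settings.

```python
# Minimal sketch: load the finetuned model and run a single math prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo ID -- replace with the actual model repository.
model_id = "abacusai/bagel-34b-metamathfewshot"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed dtype for a 34B model on recent GPUs
    device_map="auto",
)

prompt = "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Print only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```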