---
license: other
library_name: transformers
base_model:
  - mistralai/Mistral-Small-Instruct-2409
datasets:
  - jondurbin/gutenberg-dpo-v0.1
  - nbeerbower/gutenberg2-dpo
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
model-index:
  - name: Mistral-Small-Drummer-22B
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: IFEval (0-Shot)
          type: HuggingFaceH4/ifeval
          args:
            num_few_shot: 0
        metrics:
          - type: inst_level_strict_acc and prompt_level_strict_acc
            value: 63.31
            name: strict accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Mistral-Small-Drummer-22B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: BBH (3-Shot)
          type: BBH
          args:
            num_few_shot: 3
        metrics:
          - type: acc_norm
            value: 40.12
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Mistral-Small-Drummer-22B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MATH Lvl 5 (4-Shot)
          type: hendrycks/competition_math
          args:
            num_few_shot: 4
        metrics:
          - type: exact_match
            value: 16.69
            name: exact match
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Mistral-Small-Drummer-22B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GPQA (0-shot)
          type: Idavidrein/gpqa
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 12.42
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Mistral-Small-Drummer-22B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MuSR (0-shot)
          type: TAUR-Lab/MuSR
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 9.8
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Mistral-Small-Drummer-22B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU-PRO (5-shot)
          type: TIGER-Lab/MMLU-Pro
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 34.39
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Mistral-Small-Drummer-22B
          name: Open LLM Leaderboard
---

Mistral-Small-Drummer-22B

mistralai/Mistral-Small-Instruct-2409 fine-tuned on jondurbin/gutenberg-dpo-v0.1 and nbeerbower/gutenberg2-dpo.
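
A minimal usage sketch with transformers, assuming the hub ID nbeerbower/Mistral-Small-Drummer-22B and enough GPU memory for the 22B weights in bfloat16; quantize or shard as needed for your hardware.

```python
# Minimal sketch: load and prompt the model with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/Mistral-Small-Drummer-22B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Write the opening paragraph of a gothic short story."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```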

Method

ORPO-tuned on 2x A40 (RunPod) for 1 epoch with the following hyperparameters:

```python
learning_rate=4e-6,
lr_scheduler_type="linear",
beta=0.1,
per_device_train_batch_size=4,
per_device_eval_batch_size=4,
gradient_accumulation_steps=8,
optim="paged_adamw_8bit",
num_train_epochs=1,
```
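
These settings correspond to TRL's ORPOConfig. A minimal sketch of how they could be wired into ORPOTrainer, assuming the two Gutenberg DPO datasets share prompt/chosen/rejected columns and have already been formatted; the actual training script may differ.

```python
# Sketch: map the listed hyperparameters onto TRL's ORPOConfig / ORPOTrainer.
# Assumes preformatted prompt/chosen/rejected columns; TRL argument names vary by version.
import torch
from datasets import concatenate_datasets, load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "mistralai/Mistral-Small-Instruct-2409"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16, device_map="auto")

# Combine the two DPO datasets used for this fine-tune.
dataset = concatenate_datasets([
    load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train"),
    load_dataset("nbeerbower/gutenberg2-dpo", split="train"),
])

config = ORPOConfig(
    learning_rate=4e-6,
    lr_scheduler_type="linear",
    beta=0.1,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=8,
    optim="paged_adamw_8bit",
    num_train_epochs=1,
    output_dir="./mistral-small-drummer-22b",  # hypothetical output path
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,  # `processing_class` in newer TRL releases
)
trainer.train()
```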

The dataset was prepared using the Mistral-Small Instruct format; a sketch of this preprocessing follows.
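
A sketch of that preprocessing step using the base tokenizer's chat template, assuming prompt/chosen/rejected columns; the exact formatting used for this model may differ.

```python
# Sketch: render each DPO row into the Mistral-Small Instruct format via the chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-Small-Instruct-2409")

def format_row(row):
    # Wrap the prompt in the instruct template; append EOS to completions (assumed convention).
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": row["prompt"]}],
        tokenize=False,
        add_generation_prompt=True,
    )
    return {
        "prompt": prompt,
        "chosen": row["chosen"] + tokenizer.eos_token,
        "rejected": row["rejected"] + tokenizer.eos_token,
    }

# dataset = dataset.map(format_row)
```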

Reference: Fine-tune Llama 3 with ORPO.

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 29.45 |
| IFEval (0-Shot)     | 63.31 |
| BBH (3-Shot)        | 40.12 |
| MATH Lvl 5 (4-Shot) | 16.69 |
| GPQA (0-shot)       | 12.42 |
| MuSR (0-shot)       |  9.80 |
| MMLU-PRO (5-shot)   | 34.39 |