---
base_model:
  - arcee-ai/Llama-3.1-SuperNova-Lite
  - Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
  - T145/KRONOS-8B-V1-P1
  - unsloth/Llama-3.1-Storm-8B
  - VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct
library_name: transformers
license: llama3.1
tags:
  - mergekit
  - merge
  - llama-3.1
  - llama
  - instruct
model-index:
  - name: ZEUS-8B-V30
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: IFEval (0-Shot)
          type: wis-k/instruction-following-eval
          split: train
          args:
            num_few_shot: 0
        metrics:
          - type: inst_level_strict_acc and prompt_level_strict_acc
            value: 74.36
            name: averaged accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=T145%2FZEUS-8B-V30
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: BBH (3-Shot)
          type: SaylorTwift/bbh
          split: test
          args:
            num_few_shot: 3
        metrics:
          - type: acc_norm
            value: 32.19
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=T145%2FZEUS-8B-V30
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MATH Lvl 5 (4-Shot)
          type: lighteval/MATH-Hard
          split: test
          args:
            num_few_shot: 4
        metrics:
          - type: exact_match
            value: 14.43
            name: exact match
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=T145%2FZEUS-8B-V30
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GPQA (0-shot)
          type: Idavidrein/gpqa
          split: train
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 9.4
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=T145%2FZEUS-8B-V30
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MuSR (0-shot)
          type: TAUR-Lab/MuSR
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 10.07
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=T145%2FZEUS-8B-V30
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU-PRO (5-shot)
          type: TIGER-Lab/MMLU-Pro
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 32.71
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=T145%2FZEUS-8B-V30
          name: Open LLM Leaderboard
---

# ZEUS 8B V30

This model is a merge of the following pre-trained and fine-tuned LLMs, created using [mergekit](https://github.com/arcee-ai/mergekit):

- T145/KRONOS-8B-V1-P1 (base model)
- arcee-ai/Llama-3.1-SuperNova-Lite
- Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
- unsloth/Llama-3.1-Storm-8B
- VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct
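
The merged model is a standard Llama 3.1 instruct checkpoint in the `transformers` format, so it loads like any other causal LM. The snippet below is a minimal sketch, assuming the weights are published on the Hub as `T145/ZEUS-8B-V30` and that your `transformers` install supports chat templates:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "T145/ZEUS-8B-V30"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used in the merge config below
    device_map="auto",           # requires accelerate; remove to load on a single device
)

# Llama 3.1 instruct models expect the chat template applied by the tokenizer.
messages = [{"role": "user", "content": "Explain what a DARE-TIES merge is in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```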

## Merge Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: T145/KRONOS-8B-V1-P1
dtype: bfloat16
merge_method: dare_ties
name: ZEUS-8B-V30
parameters:
  int8_mask: 1.0
  normalize: 1.0
  random_seed: 145
slices:
- sources:
  - layer_range: [0, 32]
    model: unsloth/Llama-3.1-Storm-8B
    parameters:
      density: 0.94
      weight: 0.35
  - layer_range: [0, 32]
    model: arcee-ai/Llama-3.1-SuperNova-Lite
    parameters:
      density: 0.92
      weight: 0.26
  - layer_range: [0, 32]
    model: VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct
    parameters:
      density: 0.91
      weight: 0.2
  - layer_range: [0, 32]
    model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
    parameters:
      density: 0.93
      weight: 0.19
  - layer_range: [0, 32]
    model: T145/KRONOS-8B-V1-P1
tokenizer:
  source: union
  tokens:
    <|begin_of_text|>:
      force: true
      source: T145/KRONOS-8B-V1-P1
    <|eot_id|>:
      force: true
      source: T145/KRONOS-8B-V1-P1
```

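To reproduce the merge, save the configuration above to a file and run it through mergekit. The sketch below uses mergekit's Python API (`MergeConfiguration`, `MergeOptions`, `run_merge`); the `mergekit-yaml` CLI is an equivalent route. File paths are placeholders, not part of the original card:

```python
import torch
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "zeus-v30.yml"     # the YAML configuration shown above
OUTPUT_PATH = "./ZEUS-8B-V30"   # directory where the merged weights are written

# Parse and validate the merge configuration.
with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Execute the DARE-TIES merge and write the result to OUTPUT_PATH.
run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is present
        copy_tokenizer=True,             # write a tokenizer into the output directory
    ),
)
```
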
## Open LLM Leaderboard Evaluation Results

Detailed and summarized results can be found on the [Open LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=T145%2FZEUS-8B-V30).

| Metric              | Value (%) |
|---------------------|----------:|
| **Average**         |     28.86 |
| IFEval (0-Shot)     |     74.36 |
| BBH (3-Shot)        |     32.19 |
| MATH Lvl 5 (4-Shot) |     14.43 |
| GPQA (0-shot)       |      9.40 |
| MuSR (0-shot)       |     10.07 |
| MMLU-PRO (5-shot)   |     32.71 |
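
The Average row is the unweighted mean of the six benchmark scores above:

$$
\frac{74.36 + 32.19 + 14.43 + 9.40 + 10.07 + 32.71}{6} = \frac{173.16}{6} = 28.86
$$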