```yaml
base_model: /Users/dawn/git/models/Mistral-7B-Instruct-v0.2
gate_mode: hidden # one of "hidden", "cheap_embed", or "random"
dtype: bfloat16 # output dtype (float32, float16, or bfloat16)
experts:
  - source_model: /Users/dawn/git/models/Silicon-Maid-7B
    positive_prompts:
      - "roleplay"
  - source_model: /Users/dawn/git/models/Starling-LM-7B-beta
    positive_prompts:
      - "chat"
```
    
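With `gate_mode: hidden`, mergekit derives each expert's routing vector from hidden-state activations on its `positive_prompts`, so tokens resembling those prompts are steered toward that expert. A minimal sketch of this kind of softmax-gated two-expert mixing (generic MoE math, not mergekit's exact implementation; all array names and sizes here are illustrative):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
hidden = rng.standard_normal((4, 8))       # 4 tokens, hidden size 8
gate_w = rng.standard_normal((8, 2))       # router: hidden state -> 2 expert scores
expert_w = rng.standard_normal((2, 8, 8))  # two expert layers (plain linear here)

logits = hidden @ gate_w                   # (4, 2) per-token routing scores
weights = softmax(logits)                  # per-token mixture over the 2 experts
expert_out = np.einsum("th,ehd->ted", hidden, expert_w)  # (4, 2, 8) expert outputs
out = np.einsum("te,ted->td", weights, expert_out)       # (4, 8) gated combination
```

Each token's output is a convex combination of the two experts' outputs, with the gate deciding the blend per token.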

## Open LLM Leaderboard Evaluation Results

| Metric | Value |
|---|---|
| Avg. | 68.01 |
| AI2 Reasoning Challenge (25-shot) | 67.49 |
| HellaSwag (10-shot) | 84.76 |
| MMLU (5-shot) | 62.62 |
| TruthfulQA (0-shot) | 58.93 |
| Winogrande (5-shot) | 78.22 |
| GSM8k (5-shot) | 56.03 |
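The Avg. row is just the arithmetic mean of the six benchmark scores, which is easy to verify:

```python
scores = {
    "ARC (25-shot)": 67.49,
    "HellaSwag (10-shot)": 84.76,
    "MMLU (5-shot)": 62.62,
    "TruthfulQA (0-shot)": 58.93,
    "Winogrande (5-shot)": 78.22,
    "GSM8k (5-shot)": 56.03,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 68.01
```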
Model size: 12.9B params (Safetensors, BF16)

Model tree for dawn17/MistarlingMaid-2x7B-base: 2 quantized models available.