Overview

One of the merging steps for Tantum. It might be better than the end result.

Model files may not be downloadable from this repo.

You can get the full-weight files from here: https://huggingface.co/mergekit-community/MS-RP-whole
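
To pull those weights locally, something like this should work (a minimal sketch, assuming the huggingface_hub CLI is installed; the target directory name is arbitrary):

huggingface-cli download mergekit-community/MS-RP-whole --local-dir ./MS-RP-whole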

This happened because I was using the mergekit-gui space for merging and got lazy about manually dragging the intermediate steps to my org, so I just set it to upload to mergekit-community. When I realized this merge was usable on its own, I added some info to the model card and duplicated the repo here before linking it in the Tantum readme.

yeah

Settings:

Samplers: Weird preset | Forgotten-Safeword preset

Prompt format: Mistral-V7-Tekken (?)

For Mistral models, I use this lorebook for all chats instead of a system prompt.

Quants

Static | Imatrix
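
To run the GGUFs, a minimal llama.cpp sketch (the filename is a placeholder, substitute whichever quant you grab from the links above; the context size is just an example):

# placeholder filename, swap in the quant you actually downloaded
llama-cli -m MS3-RP-Broth-24B.Q4_K_M.gguf -c 16384 -cnv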


Merge Details

Merging steps

MS3-test-Merge-1

This one was done in two passes: an sce merge first (its output is referred to as Step1), then a della_linear merge applied on top of it.

Step1:

models:
  - model: unsloth/Mistral-Small-24B-Base-2501
  - model: unsloth/Mistral-Small-24B-Instruct-2501+ToastyPigeon/new-ms-rp-test-ws
    parameters:
        select_topk:
          - value: [0.05, 0.03, 0.02, 0.02, 0.01]
  - model: unsloth/Mistral-Small-24B-Instruct-2501+estrogen/MS2501-24b-Ink-ep2-adpt
    parameters:
        select_topk: 0.1
  - model: trashpanda-org/MS-24B-Instruct-Mullein-v0
    parameters:
        select_topk: 0.4
base_model: unsloth/Mistral-Small-24B-Base-2501
merge_method: sce
parameters:
  int8_mask: true
  rescale: true
  normalize: true
dtype: bfloat16
tokenizer_source: base

Step2:

dtype: bfloat16
tokenizer_source: base
merge_method: della_linear
parameters:
  density: 0.55
base_model: Step1
models:
  - model: unsloth/Mistral-Small-24B-Instruct-2501
    parameters:
      weight:
        - filter: v_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: o_proj
          value: [1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1]
        - filter: up_proj
          value: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
        - filter: gate_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: down_proj
          value: [1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
        - value: 0
  - model: Step1
    parameters:
      weight:
        - filter: v_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: o_proj
          value: [0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0]
        - filter: up_proj
          value: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        - filter: gate_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: down_proj
          value: [0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1]
        - value: 1

Some early MS3 merge. Not really worth using on its own. Just added it for fun.
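
To reproduce this step, the rough idea is two mergekit-yaml passes, with the second config's base_model: Step1 pointing at the output folder of the first pass (a sketch; the config file names are placeholders, not from the original run):

# pass 1: the sce config above, written to a folder named Step1
mergekit-yaml step1.yaml ./Step1 --cuda
# pass 2: the della_linear config above, which resolves Step1 to that local folder
mergekit-yaml step2.yaml ./MS3-test-Merge-1 --cuda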

RP-half1

models:
  - model: ArliAI/Mistral-Small-24B-ArliAI-RPMax-v1.4
    parameters:
      weight: 0.2
      density: 0.7
  - model: trashpanda-org/Llama3-24B-Mullein-v1
    parameters:
      weight: 0.2
      density: 0.7
  - model: TheDrummer/Cydonia-24B-v2
    parameters:
      weight: 0.2
      density: 0.7
merge_method: della_linear
base_model: Nohobby/MS3-test-Merge-1
parameters:
  epsilon: 0.2
  lambda: 1.1
dtype: bfloat16
tokenizer:
  source: base

RP-half2

base_model: Nohobby/MS3-test-Merge-1
parameters:
  epsilon: 0.05
  lambda: 0.9
  int8_mask: true
  rescale: true
  normalize: false
dtype: bfloat16
tokenizer:
  source: base
merge_method: della
models:
  - model: estrogen/MS2501-24b-Ink-apollo-ep2
    parameters:
      weight: [0.1, -0.01, 0.1, -0.02, 0.1]
      density: [0.6, 0.4, 0.5, 0.4, 0.6]
  - model: huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated
    parameters:
      weight: [0.02, -0.01, 0.02, -0.02, 0.01]
      density: [0.45, 0.55, 0.45, 0.55, 0.45]
  - model: ToastyPigeon/ms3-roselily-rp-v2
    parameters:
      weight: [0.01, -0.02, 0.02, -0.025, 0.01]
      density: [0.45, 0.65, 0.45, 0.65, 0.45]
  - model: PocketDoc/Dans-DangerousWinds-V1.1.1-24b
    parameters:
      weight: [0.1, -0.01, 0.1, -0.02, 0.1]
      density: [0.6, 0.4, 0.5, 0.4, 0.6]

RP-broth/MS-RP-whole

base_model: ReadyArt/Forgotten-Safeword-24B-V2.2
merge_method: model_stock
dtype: bfloat16
models:
  - model: mergekit-community/MS3-RP-half1
  - model: mergekit-community/MS3-RP-RP-half2
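
Putting it all together: after building MS3-test-Merge-1 as shown earlier, the remaining steps are just sequential mergekit-yaml runs over the configs above (a sketch; config file names are placeholders). As written, the later configs reference the intermediate repos on the Hub, so mergekit will pull those unless you edit them to point at your local outputs:

mergekit-yaml rp-half1.yaml ./MS3-RP-half1 --cuda
mergekit-yaml rp-half2.yaml ./MS3-RP-half2 --cuda
mergekit-yaml rp-whole.yaml ./MS-RP-whole --cuda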