MS-Meadowlark-22B / README.md
---
base_model:
  - unsloth/Mistral-Small-Instruct-2409
  - ToastyPigeon/mistral-small-springdragon-qlora
  - unsloth/Mistral-Small-Instruct-2409
  - unsloth/Mistral-Small-Instruct-2409
  - Alfitaria/mistral-small-fujin-qlora
  - nbeerbower/Mistral-Small-Gutenberg-Doppel-22B
library_name: transformers
tags:
  - mergekit
  - merge
---

# ms-literarier-creativeflowerventure

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the task arithmetic merge method, with unsloth/Mistral-Small-Instruct-2409 as the base.
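In task arithmetic, each fine-tune is reduced to a "task vector" (its weight delta from the shared base); the scaled task vectors are summed onto the base weights. A minimal NumPy sketch with hypothetical one-layer tensors, using two of the merge weights from the config below:

```python
import numpy as np

# Hypothetical base and fine-tuned weight tensors for a single layer.
base = np.array([1.0, 2.0, 3.0])
model_a = np.array([1.5, 2.5, 3.5])  # stands in for one fine-tune
model_b = np.array([0.5, 1.5, 2.5])  # stands in for another fine-tune

# Task vectors: each fine-tune's delta from the shared base.
deltas = {"a": model_a - base, "b": model_b - base}

# Scale each task vector by its merge weight (0.6 / 0.4 as in the
# config) and add the sum back onto the base weights.
weights = {"a": 0.6, "b": 0.4}
merged = base + sum(weights[k] * deltas[k] for k in deltas)
```

Note that the per-model weights in this card sum to more than 1.0; task arithmetic does not require them to be normalized, though larger totals push the merge further from the base.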

### Models Merged

The following models were included in the merge:

- output/ms-creative (local checkpoint)
- nbeerbower/Mistral-Small-Gutenberg-Doppel-22B
- unsloth/Mistral-Small-Instruct-2409 + Alfitaria/mistral-small-fujin-qlora
- unsloth/Mistral-Small-Instruct-2409 + ToastyPigeon/mistral-small-springdragon-qlora

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: unsloth/Mistral-Small-Instruct-2409
merge_method: task_arithmetic
slices:
- sources:
  - layer_range: [0, 56]
    model: output/ms-creative
    parameters:
      weight: 0.3
  - layer_range: [0, 56]
    model: nbeerbower/Mistral-Small-Gutenberg-Doppel-22B
    parameters:
      weight: 0.6
  - layer_range: [0, 56]
    model: unsloth/Mistral-Small-Instruct-2409+Alfitaria/mistral-small-fujin-qlora
    parameters:
      weight: 0.4
  - layer_range: [0, 56]
    model: unsloth/Mistral-Small-Instruct-2409+ToastyPigeon/mistral-small-springdragon-qlora
    parameters:
      weight: 0.1
  - layer_range: [0, 56]
    model: unsloth/Mistral-Small-Instruct-2409
```
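The `base+adapter` entries (e.g. `unsloth/Mistral-Small-Instruct-2409+Alfitaria/mistral-small-fujin-qlora`) are mergekit's syntax for applying a LoRA adapter to the base model before merging. Conceptually, a LoRA adapter stores two low-rank matrices whose scaled product is added onto the frozen base weight. A minimal sketch, with hypothetical shapes and a made-up scaling factor:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical base weight and low-rank LoRA factors (rank r << dims).
d_out, d_in, r = 8, 8, 2
W = rng.standard_normal((d_out, d_in))
A = rng.standard_normal((r, d_in))   # LoRA "A" matrix (r x d_in)
B = rng.standard_normal((d_out, r))  # LoRA "B" matrix (d_out x r)
alpha = 16                           # LoRA scaling hyperparameter

# Materialize the adapted weight: W' = W + (alpha / r) * B @ A
W_adapted = W + (alpha / r) * (B @ A)
```

Once the adapters are materialized this way, each adapted model participates in the merge like any full checkpoint; the configuration above can then be run with mergekit's `mergekit-yaml` command-line tool.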