# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the linear DARE merge method, with aetherwiing/MN-12B-Starcannon-v3 as the base model.
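In DARE, each source model's difference from the base (its "delta") is randomly sparsified: entries are kept with probability equal to `density` and the survivors are rescaled by `1/density` so the expected delta is unchanged. The linear variant then adds the weighted sum of these processed deltas back onto the base. A minimal per-tensor sketch under those assumptions (the `dare_delta` and `dare_linear` names are illustrative, not mergekit's actual API):

```python
import torch

def dare_delta(model_w: torch.Tensor, base_w: torch.Tensor,
               density: float) -> torch.Tensor:
    """Drop-And-REscale: keep each delta entry with probability `density`,
    then rescale survivors by 1/density so the expected delta is unchanged."""
    delta = model_w - base_w
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density

def dare_linear(base_w: torch.Tensor, sources) -> torch.Tensor:
    """Linear DARE merge of one tensor.
    `sources` is a list of (tensor, weight, density) triples, mirroring the
    per-model `weight`/`density` parameters in the YAML config below."""
    merged = base_w.clone()
    for w, weight, density in sources:
        merged += weight * dare_delta(w, base_w, density)
    return merged
```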
### Models Merged
The following models were included in the merge:
- Sao10K/MN-12B-Lyra-v1
- BeaverAI/mistral-doryV2-12b
- cognitivecomputations/dolphin-2.9.3-mistral-nemo-12b
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: cognitivecomputations/dolphin-2.9.3-mistral-nemo-12b
    parameters:
      weight: 0.25
      density: 0.3
  - model: BeaverAI/mistral-doryV2-12b
    parameters:
      weight: 0.25
      density: 0.3
  - model: aetherwiing/MN-12B-Starcannon-v3
    parameters:
      weight: 0.25
      density: 0.6
  - model: Sao10K/MN-12B-Lyra-v1
    parameters:
      weight: 0.25
      density: 0.4
merge_method: dare_linear
base_model: aetherwiing/MN-12B-Starcannon-v3
dtype: bfloat16
```
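The resulting model loads like any other transformers causal LM. A minimal loading sketch, assuming the standard transformers API and this card's repository id (the prompt and generation settings are placeholders):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Frowning/TypeI-12B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# bfloat16 matches the `dtype` used for the merge above
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```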