---
base_model:
- sometimesanotion/Lamarck-14B-v0.7-Fusion
- sometimesanotion/Qwenvergence-14B-v11
- prithivMLmods/Messier-Opus-14B-Elite7
- jpacifico/Chocolatine-2-14B-Instruct-v2.0b3
- prithivMLmods/Equuleus-Opus-14B-Exp
- Sakalti/Saka-14B
- Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v8
library_name: transformers
tags:
- mergekit
- merge
---

# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
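The merged model loads like any other Qwen2.5-based causal LM via `transformers`. A minimal sketch, assuming the merge is published under the repository id matching the merge name below (`Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v8.7`); adjust the id to the actual upload:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id, taken from the merge name in the configuration below.
model_id = "Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v8.7"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="bfloat16",  # matches the dtype used for the merge
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize model merging in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```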
## Merge Details

### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v8](https://huggingface.co/Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v8) as the base.
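For intuition, Model Stock treats each fine-tuned checkpoint as the base plus a "task vector", uses the average pairwise cosine similarity between those vectors to pick an interpolation ratio, and pulls the plain average of the fine-tunes back toward the base by that ratio. The sketch below is an illustration of that idea per weight tensor, following the formula from the Model Stock paper; it is not mergekit's actual implementation:

```python
import torch
import torch.nn.functional as F


def model_stock_layer(base: torch.Tensor, tuned: list[torch.Tensor]) -> torch.Tensor:
    """Illustrative per-tensor Model Stock merge (not mergekit's exact code)."""
    deltas = [(t - base).flatten().float() for t in tuned]
    n = len(deltas)

    # Average cosine similarity over all pairs of task vectors.
    cos_sum, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            cos_sum += F.cosine_similarity(deltas[i], deltas[j], dim=0).item()
            pairs += 1
    cos_theta = cos_sum / max(pairs, 1)

    # Interpolation ratio from the paper: t = N*cos / (1 + (N-1)*cos).
    t = n * cos_theta / (1 + (n - 1) * cos_theta)

    avg = torch.stack([x.float() for x in tuned]).mean(dim=0)
    return (t * avg + (1 - t) * base.float()).to(base.dtype)


# Toy example: three "fine-tunes" of a 4x4 base weight.
base = torch.randn(4, 4)
tuned = [base + 0.1 * torch.randn(4, 4) for _ in range(3)]
merged = model_stock_layer(base, tuned)
```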
### Models Merged
The following models were included in the merge:
- sometimesanotion/Lamarck-14B-v0.7-Fusion
- sometimesanotion/Qwenvergence-14B-v11
- prithivMLmods/Messier-Opus-14B-Elite7
- jpacifico/Chocolatine-2-14B-Instruct-v2.0b3
- prithivMLmods/Equuleus-Opus-14B-Exp
- Sakalti/Saka-14B
### Configuration
The following YAML configuration was used to produce this model:
```yaml
name: NQLSG-Qwen2.5-14B-MegaFusion-v8.7
models:
  - model: jpacifico/Chocolatine-2-14B-Instruct-v2.0b3
  - model: Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v8
  - model: prithivMLmods/Equuleus-Opus-14B-Exp
  - model: prithivMLmods/Messier-Opus-14B-Elite7
  - model: Sakalti/Saka-14B
  - model: sometimesanotion/Lamarck-14B-v0.7-Fusion
  - model: sometimesanotion/Qwenvergence-14B-v11
base_model: Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v8
chat_template: auto
dtype: bfloat16
merge_method: model_stock
parameters:
  int8_mask: true
tokenizer:
  source: union
```
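To reproduce the merge, the configuration above can be saved to a file and passed to mergekit's CLI (`mergekit-yaml config.yaml ./output`). A rough sketch of the equivalent Python entry point is shown below; option names and signatures can differ between mergekit releases, so treat it as an outline rather than a definitive recipe:

```python
# Sketch of re-running the merge with mergekit's Python API.
# Assumes the YAML above is saved as config.yaml; the output path is arbitrary.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./NQLSG-Qwen2.5-14B-MegaFusion-v8.7",   # output directory
    options=MergeOptions(copy_tokenizer=True),  # other options left at defaults
)
```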