# Qwen2.5-14B-HyperMarck
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details

### Merge Method
This model was merged using the Linear DELLA merge method, with [suayptalha/Lamarckvergence-14B](https://huggingface.co/suayptalha/Lamarckvergence-14B) as the base.
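For intuition, here is a rough, self-contained sketch of what a DELLA-style linear merge does: each fine-tuned model's delta from the base is stochastically pruned, with keep probabilities spread across a band (controlled by `epsilon`) around the target density so that larger-magnitude deltas are more likely to survive; survivors are rescaled, and the weighted sum of deltas is scaled by `lambda` before being added back to the base. This is an illustration only, not mergekit's implementation; the function name and the exact probability schedule below are invented for the example.

```python
import numpy as np

def della_linear_sketch(base, tuned, weights, density=0.5,
                        epsilon=0.04, lam=1.05, normalize=True, seed=0):
    """Toy DELLA-style linear merge over 1-D parameter vectors.

    Illustration only: this probability schedule is a stand-in for
    DELLA's magnitude-ranked sampling, not mergekit's implementation.
    """
    rng = np.random.default_rng(seed)
    merged_delta = np.zeros_like(base)
    for model, w in zip(tuned, weights):
        delta = model - base                          # task vector vs. base
        # Rank parameters by |delta|: larger deltas get keep probabilities
        # nudged up by epsilon, smaller ones nudged down.
        ranks = np.argsort(np.argsort(np.abs(delta)))
        centered = ranks / max(len(delta) - 1, 1) - 0.5
        keep_p = np.clip(density + 2.0 * epsilon * centered, 1e-6, 1.0)
        mask = rng.random(delta.shape) < keep_p
        merged_delta += w * np.where(mask, delta / keep_p, 0.0)  # rescale survivors
    if normalize:
        merged_delta /= sum(weights)                  # normalize model weights
    return base + lam * merged_delta                  # lambda scales the merged delta

# Tiny usage example with made-up parameter vectors.
base = np.zeros(8)
tuned = [base + np.random.randn(8), base + np.random.randn(8)]
print(della_linear_sketch(base, tuned, weights=[18, 12]))
```

The `epsilon`, `lambda`, `weight`, and `density` keys in the configuration below play the same roles as the corresponding arguments in this sketch.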
### Models Merged

The following models were included in the merge:

* [CultriX/MergeStage2v3](https://huggingface.co/CultriX/MergeStage2v3)
* [CultriX/MergeStage1v3](https://huggingface.co/CultriX/MergeStage1v3)
### Configuration

The following YAML configuration was used to produce this model:
```yaml
merge_method: della_linear
dtype: float32
out_dtype: bfloat16
parameters:
  epsilon: 0.04
  lambda: 1.05
  normalize: true
base_model: suayptalha/Lamarckvergence-14B
tokenizer_source:
models:
  - model: CultriX/MergeStage2v3
    parameters:
      weight: 18
      density: 1.5
  - model: CultriX/MergeStage1v3
    parameters:
      weight: 12
      density: 1.25
```
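A merge like this can be reproduced by saving the configuration above to a file and running mergekit's `mergekit-yaml` CLI (for example, `mergekit-yaml config.yaml ./merged`). The snippet below is a minimal sketch of loading and prompting the result with Hugging Face Transformers; the repo id is the one this card is published under, and the prompt is arbitrary.

```python
# Minimal loading sketch; requires transformers and accelerate installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CultriX/Qwen2.5-14B-HyperMarck-dl"  # repo id from the Hub page
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Summarize what a model merge is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```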