This is a merge of pre-trained language models created using mergekit. It's bubbly, weird, and broken, but funny. Not good at instruct; better at chat and RP.
Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: Sao10K/L3-8B-Lunaris-v1
merge_method: della
dtype: bfloat16
models:
  - model: cgato/L3-TheSpice-8b-v0.8.3
    parameters:
      weight: 1.0
  - model: Sao10K/L3-8B-Stheno-v3.2
    parameters:
      weight: 1.0
  - model: Sao10K/L3-8B-Lunaris-v1
    parameters:
      weight: 1.0
  - model: Fizzarolli/L3-8b-Rosier-v1
    parameters:
      weight: 1.0
```
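For intuition about what `merge_method: della` does: DELLA-style merging computes each fine-tune's delta from the base model, stochastically drops delta entries (with keep probability influenced by magnitude), rescales the survivors to keep the expected delta unbiased, and adds the weighted result back onto the base. The sketch below is a simplified, hedged illustration of that idea on plain arrays; the function name, `drop_rate` parameter, and the exact magnitude-ranking scheme are assumptions for illustration, not mergekit's actual implementation.

```python
import numpy as np

def della_merge(base, finetunes, weights, drop_rate=0.5, seed=0):
    """Illustrative DELLA-style merge sketch (not mergekit's code).

    Each fine-tune's delta from the base is pruned stochastically with a
    magnitude-scaled keep probability, the kept entries are rescaled so the
    expected delta is unbiased, and the weighted deltas are summed onto base.
    """
    rng = np.random.default_rng(seed)
    merged = base.astype(float).copy()
    for ft, w in zip(finetunes, weights):
        delta = ft - base
        mags = np.abs(delta)
        if mags.sum() > 0:
            # Keep probability averages (1 - drop_rate), biased toward
            # larger-magnitude entries (assumed simplification of DELLA).
            p_keep = np.clip((1 - drop_rate) * mags / mags.mean(), 0.0, 1.0)
        else:
            p_keep = np.full_like(mags, 1 - drop_rate)
        mask = rng.random(delta.shape) < p_keep
        # Rescale survivors by 1 / p_keep so E[scaled delta] == delta.
        scaled = np.where(mask, delta / np.maximum(p_keep, 1e-8), 0.0)
        merged += w * scaled
    return merged
```

In practice a config like the one above is handed to mergekit's `mergekit-yaml` CLI, which applies the real della implementation tensor-by-tensor across the listed models.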