---
base_model:
- inflatebot/MN-12B-Mag-Mell-R1
- TheDrummer/UnslopNemo-12B-v4.1
- ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2
- DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS
library_name: transformers
tags:
- mergekit
- merge
- 12b
- chat
- roleplay
- creative-writing
- DELLA-linear
license: apache-2.0
new_version: redrix/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v3
---
# AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2

> They say ‘He’ will bring the apocalypse. She seeks understanding, not destruction.

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

This is my eighth model, a v2 revision of the original. Check out the [original card](https://huggingface.co/redrix/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS) for extra info. The goal of this revision was to rebalance the merge parameters a bit: feedback showed that the model held character well but produced rather bland output, which matches my own observations.

Check out [redrix/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v3](https://huggingface.co/redrix/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v3) (it's more of a v2b, but I named it v3 for differentiation), which has the parameters tuned higher to produce more interesting output.

## Merge Details

### Merge Method

This model was merged using the linear [DELLA](https://arxiv.org/abs/2406.11617) merge method, with [TheDrummer/UnslopNemo-12B-v4](https://huggingface.co/TheDrummer/UnslopNemo-12B-v4) as the base.

### Models Merged

The following models were included in the merge:
* [inflatebot/MN-12B-Mag-Mell-R1](https://huggingface.co/inflatebot/MN-12B-Mag-Mell-R1)
* [DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS](https://huggingface.co/DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS)
* [ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2](https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2
    parameters:
      weight:
        - filter: self_attn
          value: 0.3
        - filter: mlp
          value: 0.15
        - value: 0.25
      density: 0.6
  - model: inflatebot/MN-12B-Mag-Mell-R1
    parameters:
      weight:
        - filter: self_attn
          value: 0.15
        - filter: mlp
          value: 0.3
        - value: 0.2
      density: 0.7
  - model: TheDrummer/UnslopNemo-12B-v4
    parameters:
      weight:
        - filter: self_attn
          value: 0.25
        - filter: mlp
          value: 0.15
        - value: 0.25
      density: 0.6
  - model: DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS
    parameters:
      weight:
        - filter: self_attn
          value: 0.2
        - filter: mlp
          value: 0.30
        - value: 0.2
      density: 0.5
base_model: TheDrummer/UnslopNemo-12B-v4
merge_method: della_linear
dtype: bfloat16
chat_template: "chatml"
tokenizer_source: union
parameters:
  normalize: true
  int8_mask: true
  epsilon: 0.05
  lambda: 1
```
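If you want to reproduce a merge like this, the config above can be fed to mergekit directly. The snippet below is a rough, unofficial sketch: it assumes the YAML is saved as `della-config.yaml` (a hypothetical filename) and uses mergekit's Python entry points (`MergeConfiguration`, `run_merge`, `MergeOptions`); the exact API can change between mergekit versions, so the `mergekit-yaml` CLI is the simpler, safer route.

```python
# Rough sketch of reproducing the merge via mergekit's Python API.
# Assumes mergekit is installed (pip install mergekit) and that the YAML config
# above has been saved as "della-config.yaml" (hypothetical filename).
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("della-config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./AngelSlayer-12B-v2-merged",  # output directory for the merged weights
    options=MergeOptions(
        cuda=False,            # set True to run the merge on GPU
        copy_tokenizer=True,   # write a tokenizer into the output directory
        lazy_unpickle=True,    # lower peak RAM while loading shards
    ),
)
```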
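Since the config sets `chat_template: "chatml"`, prompts should be formatted with ChatML. Below is a minimal usage sketch that loads the model with Transformers and lets the bundled chat template handle the formatting; the messages and generation settings are purely illustrative, not tuned recommendations.

```python
# Minimal usage sketch: load the merged model and format a prompt with the
# ChatML chat template stored alongside the tokenizer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "redrix/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # the merge was performed in bfloat16
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a creative roleplay partner."},
    {"role": "user", "content": "Describe the angel standing at the city gates."},
]

# apply_chat_template renders the messages with the model's ChatML template.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```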