---
base_model:
- Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B
- Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B
- tannedbum/L3-Nymeria-Maid-8B
- bluuwhale/L3-SthenoMaidBlackroot-8B-V1
- tannedbum/L3-Nymeria-8B
- Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B
- Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
- Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2
- migtissera/Llama-3-8B-Synthia-v3.5
- Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B
- v000000/L3-8B-Poppy-Sunspice
- Magpie-Align/Llama-3-8B-WizardLM-196K
- Cas-Warehouse/Llama-3-Mopeyfied-Psychology-8B
- Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
- invisietch/EtherealRainbow-v0.3-8B
- crestf411/L3-8B-sunfall-v0.4-stheno-v3.2
- aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K
- ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
- Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
- Casual-Autopsy/Umbral-Mind-6
- ResplendentAI/Nymph_8B
library_name: transformers
tags:
- mergekit
- merge
---
<img src="https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v3-8B/resolve/main/63073798_p0_master1200.jpg" style="display: block; margin: auto;">
Image by ろ47
# L3-Umbral-Mind-RP-v3.0-8B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
The goal of this merge was to make an RP model better suited for role-plays with heavy themes, such as (but not limited to):
- Mental illness
- Self-harm
- Trauma
- Suicide
I hated how RP models tended to be overly positive and hopeful in role-plays involving such themes, but thanks to [failspy/Llama-3-8B-Instruct-MopeyMule](https://huggingface.co/failspy/Llama-3-8B-Instruct-MopeyMule) this problem has been lessened considerably.
If you enjoy savior/reverse-savior type role-plays, as I do, then this model is for you.
### Usage Info
This model is meant to be used with asterisks/quotes RP formats; any other format is likely to cause issues.
### Quants
* Weighted GGUFs by [mradermacher](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v3.0-8B-i1-GGUF)
* Static GGUFs by [mradermacher](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v3.0-8B-GGUF)
### Models Merged
The following models were included in the merge:
* [Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B)
* [Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B](https://huggingface.co/Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B)
* [tannedbum/L3-Nymeria-Maid-8B](https://huggingface.co/tannedbum/L3-Nymeria-Maid-8B)
* [bluuwhale/L3-SthenoMaidBlackroot-8B-V1](https://huggingface.co/bluuwhale/L3-SthenoMaidBlackroot-8B-V1)
* [tannedbum/L3-Nymeria-8B](https://huggingface.co/tannedbum/L3-Nymeria-8B)
* [Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B](https://huggingface.co/Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B)
* [Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B)
* [Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2](https://huggingface.co/Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2)
* [migtissera/Llama-3-8B-Synthia-v3.5](https://huggingface.co/migtissera/Llama-3-8B-Synthia-v3.5)
* [Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B](https://huggingface.co/Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B)
* [v000000/L3-8B-Poppy-Sunspice](https://huggingface.co/v000000/L3-8B-Poppy-Sunspice)
* [Magpie-Align/Llama-3-8B-WizardLM-196K](https://huggingface.co/Magpie-Align/Llama-3-8B-WizardLM-196K)
* [Cas-Warehouse/Llama-3-Mopeyfied-Psychology-8B](https://huggingface.co/Cas-Warehouse/Llama-3-Mopeyfied-Psychology-8B)
* [Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B)
* [invisietch/EtherealRainbow-v0.3-8B](https://huggingface.co/invisietch/EtherealRainbow-v0.3-8B)
* [crestf411/L3-8B-sunfall-v0.4-stheno-v3.2](https://huggingface.co/crestf411/L3-8B-sunfall-v0.4-stheno-v3.2)
* [aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K](https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K)
* [ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B)
* [Nitral-AI/Hathor_Tahsin-L3-8B-v0.85](https://huggingface.co/Nitral-AI/Hathor_Tahsin-L3-8B-v0.85)
* [ResplendentAI/Nymph_8B](https://huggingface.co/ResplendentAI/Nymph_8B)
## Secret Sauce
The following YAML configurations were used to produce this model:
### Umbral-Mind-1-pt.1
```yaml
models:
- model: Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B
- model: Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B
parameters:
density: 0.5
weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
- model: tannedbum/L3-Nymeria-Maid-8B
parameters:
density: 0.5
weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
- model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
parameters:
density: 0.5
weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
- model: tannedbum/L3-Nymeria-8B
parameters:
density: 0.5
weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
- model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B
parameters:
density: 0.5
weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
merge_method: dare_ties
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B
parameters:
normalize: false
int8_mask: true
dtype: bfloat16
```
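In configs like the one above, list-valued parameters such as `weight: [0.33, 0.0825, ...]` are gradients that mergekit interpolates across the model's layers, so each source model dominates a different depth band. A minimal sketch of that expansion (the exact interpolation details are an assumption about mergekit's behavior, and `expand_gradient` is a hypothetical helper, not part of mergekit):

```python
# Sketch: expand a short "gradient" list (e.g. weight: [0.33, 0.0825, 0.0825,
# 0.0825, 0.0825]) into one value per layer via linear interpolation.
# Assumption: mergekit spreads the list evenly over the layer range.

def expand_gradient(values, num_layers):
    """Linearly interpolate a short list of values over num_layers layers."""
    if len(values) == 1:
        return [values[0]] * num_layers
    result = []
    for i in range(num_layers):
        # Fractional position of layer i along the gradient [0, len(values) - 1]
        pos = i * (len(values) - 1) / (num_layers - 1)
        lo = int(pos)
        hi = min(lo + 1, len(values) - 1)
        frac = pos - lo
        result.append(values[lo] * (1 - frac) + values[hi] * frac)
    return result

per_layer = expand_gradient([0.33, 0.0825, 0.0825, 0.0825, 0.0825], 32)
print(per_layer[0])   # 0.33 at the first layer
print(per_layer[-1])  # 0.0825 at the last layer
```

Each model's gradient above peaks (0.33) at a different position, so the five sources take turns being the strongest contributor as depth increases.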
### Umbral-Mind-1-pt.2
```yaml
models:
- model: Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B
- model: Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B
parameters:
gamma: 0.01
density: 0.9
weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
- model: tannedbum/L3-Nymeria-Maid-8B
parameters:
gamma: 0.01
density: 0.9
weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
- model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
parameters:
gamma: 0.01
density: 0.9
weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
- model: tannedbum/L3-Nymeria-8B
parameters:
gamma: 0.01
density: 0.9
weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
- model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B
parameters:
gamma: 0.01
density: 0.9
weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
merge_method: breadcrumbs_ties
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B
parameters:
normalize: false
int8_mask: true
dtype: bfloat16
```
### Umbral-Mind-1
```yaml
models:
- model: Casual-Autopsy/Umbral-Mind-1-pt.1
- model: Casual-Autopsy/Umbral-Mind-1-pt.2
merge_method: slerp
base_model: Casual-Autopsy/Umbral-Mind-1-pt.1
parameters:
t:
- filter: self_attn
value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
- filter: mlp
value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
- value: 0.5
dtype: bfloat16
```
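The slerp configs combine the two part-merges by spherical linear interpolation per tensor, with `t` controlling how far to move from the base model (`t = 0`) toward the other model (`t = 1`); the alternating `0.3`/`0.7` values for `self_attn` and `mlp` swing that balance back and forth across layers. A minimal sketch of slerp on plain Python lists (mergekit operates on full weight tensors; `slerp` here is an illustrative toy, with a lerp fallback for near-parallel inputs):

```python
import math

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two vectors."""
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    # Angle between the two directions
    dot = sum(x * y for x, y in zip(a, b)) / (norm_a * norm_b)
    dot = max(-1.0, min(1.0, dot))
    theta = math.acos(dot)
    if theta < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation
        return [(1 - t) * x + t * y for x, y in zip(a, b)]
    s = math.sin(theta)
    f0 = math.sin((1 - t) * theta) / s
    f1 = math.sin(t * theta) / s
    return [f0 * x + f1 * y for x, y in zip(a, b)]

print(slerp(0.0, [1.0, 0.0], [0.0, 1.0]))  # stays at the base: [1.0, 0.0]
print(slerp(0.5, [1.0, 0.0], [0.0, 1.0]))  # midpoint on the arc, ≈ [0.707, 0.707]
```

Unlike plain averaging, slerp follows the arc between the two weight directions, which preserves the magnitude of the interpolated tensors better when the endpoints point in different directions.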
### Umbral-Mind-2-pt.1
```yaml
models:
- model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
- model: Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2
parameters:
density: 0.5
weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
- model: migtissera/Llama-3-8B-Synthia-v3.5
parameters:
density: 0.5
weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
- model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B
parameters:
density: 0.5
weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
- model: v000000/L3-8B-Poppy-Sunspice
parameters:
density: 0.5
weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
- model: Cas-Warehouse/Llama-3-Mopeyfied-Psychology-8B
parameters:
density: 0.5
weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
merge_method: dare_ties
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
parameters:
normalize: false
int8_mask: true
dtype: bfloat16
```
### Umbral-Mind-2-pt.2
```yaml
models:
- model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
- model: Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2
parameters:
gamma: 0.01
density: 0.9
weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
- model: migtissera/Llama-3-8B-Synthia-v3.5
parameters:
gamma: 0.01
density: 0.9
weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
- model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B
parameters:
gamma: 0.01
density: 0.9
weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
- model: Magpie-Align/Llama-3-8B-WizardLM-196K
parameters:
gamma: 0.01
density: 0.9
weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
- model: Cas-Warehouse/Llama-3-Mopeyfied-Psychology-8B
parameters:
gamma: 0.01
density: 0.9
weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
merge_method: breadcrumbs_ties
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
parameters:
normalize: false
int8_mask: true
dtype: bfloat16
```
### Umbral-Mind-2
```yaml
models:
- model: Casual-Autopsy/Umbral-Mind-2-pt.1
- model: Casual-Autopsy/Umbral-Mind-2-pt.2
merge_method: slerp
base_model: Casual-Autopsy/Umbral-Mind-2-pt.1
parameters:
t:
- filter: self_attn
value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
- filter: mlp
value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
- value: 0.5
dtype: bfloat16
```
### Umbral-Mind-3-pt.1
```yaml
models:
- model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
- model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B
parameters:
density: 0.5
weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
- model: invisietch/EtherealRainbow-v0.3-8B
parameters:
density: 0.5
weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
- model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
parameters:
density: 0.5
weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
- model: crestf411/L3-8B-sunfall-v0.4-stheno-v3.2
parameters:
density: 0.5
weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
- model: Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B
parameters:
density: 0.5
weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
merge_method: dare_ties
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
parameters:
normalize: false
int8_mask: true
dtype: bfloat16
```
### Umbral-Mind-3-pt.2
```yaml
models:
- model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
- model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B
parameters:
gamma: 0.01
density: 0.9
weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
- model: invisietch/EtherealRainbow-v0.3-8B
parameters:
gamma: 0.01
density: 0.9
weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
- model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
parameters:
gamma: 0.01
density: 0.9
weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
- model: crestf411/L3-8B-sunfall-v0.4-stheno-v3.2
parameters:
gamma: 0.01
density: 0.9
weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
- model: Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B
parameters:
gamma: 0.01
density: 0.9
weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
merge_method: breadcrumbs_ties
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
parameters:
normalize: false
int8_mask: true
dtype: bfloat16
```
### Umbral-Mind-3
```yaml
models:
- model: Casual-Autopsy/Umbral-Mind-3-pt.1
- model: Casual-Autopsy/Umbral-Mind-3-pt.2
merge_method: slerp
base_model: Casual-Autopsy/Umbral-Mind-3-pt.1
parameters:
t:
- filter: self_attn
value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
- filter: mlp
value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
- value: 0.5
dtype: bfloat16
```
### Umbral-Mind-4
```yaml
models:
- model: Casual-Autopsy/Umbral-Mind-1
- model: Casual-Autopsy/Umbral-Mind-3
merge_method: slerp
base_model: Casual-Autopsy/Umbral-Mind-1
parameters:
t:
- value: [0.1, 0.15, 0.2, 0.4, 0.6, 0.4, 0.2, 0.15, 0.1]
dtype: bfloat16
```
### Umbral-Mind-5
```yaml
models:
- model: Casual-Autopsy/Umbral-Mind-4
- model: Casual-Autopsy/Umbral-Mind-2
merge_method: slerp
base_model: Casual-Autopsy/Umbral-Mind-4
parameters:
t:
- value: [0.7, 0.5, 0.3, 0.25, 0.2, 0.25, 0.3, 0.5, 0.7]
embed_slerp: true
dtype: bfloat16
```
### Umbral-Mind-6
```yaml
models:
- model: mergekit-community/Umbral-Mind-5
- model: Casual-Autopsy/Mopey-Omelette
merge_method: slerp
base_model: mergekit-community/Umbral-Mind-5
parameters:
t:
- value: [0.2, 0.25, 0.3, 0.4, 0.3, 0.25, 0.2, 0.25, 0.3, 0.4, 0.3, 0.25, 0.2]
embed_slerp: true
dtype: bfloat16
```
### Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
```yaml
models:
- model: Casual-Autopsy/Umbral-Mind-6
- model: aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K
parameters:
weight: [0.02, -0.01, -0.01, 0.02]
- model: ResplendentAI/Nymph_8B
parameters:
weight: [-0.01, 0.02, 0.02, -0.01]
- model: ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
parameters:
weight: [-0.01, 0.02, 0.02, -0.01]
- model: Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
parameters:
weight: [0.02, -0.01, -0.01, 0.02]
merge_method: task_arithmetic
base_model: Casual-Autopsy/Umbral-Mind-6
parameters:
normalize: false
dtype: bfloat16
```
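The final task-arithmetic step adds each donor's delta from the base, scaled by its weight; the tiny alternating `0.02`/`-0.01` weights above only nudge Umbral-Mind-6 slightly toward or away from each donor at different depths. A toy sketch of the operation (mergekit applies this per tensor, with the list weights expanded across layers; `task_arithmetic` here is illustrative only):

```python
# Sketch of task arithmetic: result = base + sum_i( w_i * (donor_i - base) ).
# Weights are scalars here for clarity; in the config they are per-layer gradients.

def task_arithmetic(base, donors_with_weights):
    result = list(base)
    for donor, w in donors_with_weights:
        for i in range(len(base)):
            result[i] += w * (donor[i] - base[i])
    return result

base = [1.0, 1.0, 1.0, 1.0]
donor = [2.0, 2.0, 2.0, 2.0]
# With w = 0.02, each element moves from 1.0 toward 2.0 by 0.02
print(task_arithmetic(base, [(donor, 0.02)]))
# With w = -0.01, each element moves slightly *away* from the donor
print(task_arithmetic(base, [(donor, -0.01)]))
```

Because the deltas are added rather than blended, small positive and negative weights let several donors contribute fine-grained adjustments without overwriting the base model's behavior.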