# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the task arithmetic merge method, with winglian/llama-3-8b-1m-PoSE as the base model.
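For intuition, task arithmetic adds a weighted sum of per-model parameter deltas ("task vectors") to the base model's weights. The sketch below is illustrative only and assumes plain state dicts with matching keys; mergekit's actual implementation additionally handles dtype casting, masking, and tokenizer alignment.

```python
# Illustrative sketch only (not mergekit's code): task arithmetic computes
#   merged = base + sum_i( weight_i * (model_i - base) )
# With `normalize: true`, the weights are rescaled so they sum to 1.
import torch

def task_arithmetic_merge(base, finetuned, weights, normalize=True):
    """base: state dict; finetuned: list of state dicts; weights: list of floats."""
    if normalize:
        total = sum(weights)
        weights = [w / total for w in weights]
    merged = {}
    for name, base_t in base.items():
        delta = torch.zeros_like(base_t, dtype=torch.float32)
        for sd, w in zip(finetuned, weights):
            delta += w * (sd[name].to(torch.float32) - base_t.to(torch.float32))
        merged[name] = (base_t.to(torch.float32) + delta).to(base_t.dtype)
    return merged
```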
### Models Merged
The following models were included in the merge:
- jondurbin/bagel-8b-v1.0
- aifeifei798/llama3-8B-DarkIdol-2.3-Uncensored-32K
- Deev124/hermes-llama3-roleplay-4000-v1
- vicgalle/Roleplay-Llama-3-8B
- DevsDoCode/LLama-3-8b-Uncensored
- vicgalle/Unsafe-Llama-3-8B
- Gryphe/Pantheon-RP-1.0-8b-Llama-3
- Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
- NeverSleep/Llama-3-Lumimaid-8B-v0.1
- Undi95/Llama-3-LewdPlay-8B-evo
- vicgalle/Humanish-Roleplay-Llama-3.1-8B
- mergekit-community/because_im_bored_nsfw1
- mlabonne/Hermes-3-Llama-3.1-8B-lorablated
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: aifeifei798/llama3-8B-DarkIdol-2.3-Uncensored-32K
    parameters:
      density: 0.65
      weight: 0.15
  - model: NeverSleep/Llama-3-Lumimaid-8B-v0.1
    parameters:
      density: 0.70
      weight: 0.20
  - model: mergekit-community/because_im_bored_nsfw1
    parameters:
      density: 0.60
      weight: 0.10
  - model: jondurbin/bagel-8b-v1.0
    parameters:
      density: 0.60
      weight: 0.10
  - model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
    parameters:
      density: 0.65
      weight: 0.15
  - model: Undi95/Llama-3-LewdPlay-8B-evo
    parameters:
      density: 0.75
      weight: 0.25
  - model: Deev124/hermes-llama3-roleplay-4000-v1
    parameters:
      density: 0.60
      weight: 0.10
  - model: DevsDoCode/LLama-3-8b-Uncensored
    parameters:
      density: 0.60
      weight: 0.10
  - model: mlabonne/Hermes-3-Llama-3.1-8B-lorablated
    parameters:
      density: 0.65
      weight: 0.15
  - model: vicgalle/Roleplay-Llama-3-8B
    parameters:
      density: 0.70
      weight: 0.20
  - model: Gryphe/Pantheon-RP-1.0-8b-Llama-3
    parameters:
      density: 0.70
      weight: 0.20
  - model: vicgalle/Humanish-Roleplay-Llama-3.1-8B
    parameters:
      density: 0.70
      weight: 0.20
  - model: vicgalle/Unsafe-Llama-3-8B
    parameters:
      density: 0.70
      weight: 0.20
  - model: winglian/llama-3-8b-1m-PoSE
    parameters:
      density: 0.70
      weight: 0.20
merge_method: task_arithmetic
base_model: winglian/llama-3-8b-1m-PoSE
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
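The merge itself can typically be reproduced by saving the configuration above to a file and running mergekit's `mergekit-yaml` command on it. The merged weights then load like any other Llama-3-8B checkpoint; the snippet below is a minimal usage sketch, assuming the model is published under the repo id mrcuddle/Unbound-Llama3-8B and used with the transformers library.

```python
# Minimal usage sketch, assuming the merged model is available as
# mrcuddle/Unbound-Llama3-8B on the Hugging Face Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mrcuddle/Unbound-Llama3-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the dtype used for the merge
    device_map="auto",
)

prompt = "Write a short in-character introduction for a fantasy roleplay character."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```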