MysticGem-v1.3-L2-13B (l2-test-001)
An RP (roleplay) model with strong results; probably the final version. Smart, novel, and lewd.
Ranked no. 1 on Chaiverse for 13B models.
v000000/MysticGem-v1.3-L2-13B-Q4_K_M-GGUF
This model was converted to GGUF format from v000000/MysticGem-v1.3-L2-13B using llama.cpp via ggml.ai's GGUF-my-repo space.
Refer to the original model card for more details on the model.
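
For reference, a minimal sketch of loading the quantized file with llama-cpp-python is shown below. The local filename, context size, and sampling settings are assumptions, not values documented by this repo.

# Minimal sketch: load the Q4_K_M GGUF with llama-cpp-python (pip install llama-cpp-python).
# The filename below is an assumption -- use whichever .gguf file you downloaded from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="mysticgem-v1.3-l2-13b-q4_k_m.gguf",  # local path to the downloaded GGUF
    n_ctx=4096,        # context window; Llama-2 13B supports 4096 tokens
    n_gpu_layers=-1,   # offload all layers to GPU if llama.cpp was built with GPU support
)

output = llm(
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n"
    "### Instruction:\nSay hello in character.\n\n### Response:\n",
    max_tokens=128,
    temperature=0.8,
    stop=["### Instruction:"],
)
print(output["choices"][0]["text"])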
This is a merge of pre-trained language models created using mergekit.
Merge Details
Merge Method
This model was merged using the linear merge method.
Models Merged
The following models were included in the merge:
- KoboldAI/LLaMA2-13B-Erebus-v3
- Locutusque/Orca-2-13b-SFT-v4
- Sao10K/Stheno-Inverted-1.2-L2-13B
- Walmart-the-bag/MysticFusion-13B
- Undi95/Amethyst-13B
Configuration
The following YAML configuration was used to produce this model:
models:
  - model: Undi95/Amethyst-13B
    parameters:
      weight: 0.3
  - model: Walmart-the-bag/MysticFusion-13B
    parameters:
      weight: 0.35
  - model: Sao10K/Stheno-Inverted-1.2-L2-13B
    parameters:
      weight: 0.15
  - model: KoboldAI/LLaMA2-13B-Erebus-v3
    parameters:
      weight: 0.1
  - model: Locutusque/Orca-2-13b-SFT-v4
    parameters:
      weight: 0.1
merge_method: linear
dtype: bfloat16
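
To illustrate what the linear method with these weights computes, the toy sketch below (not from the original card; the tensor names and shapes are made up) takes the weighted sum of matching tensors, which is what mergekit applies across every parameter of the five source models.

# Toy illustration of the linear merge method: each output tensor is the
# weighted sum of the corresponding tensors from the source models.
# Shapes and values here are made up; mergekit does this for every parameter.
import torch

weights = [0.3, 0.35, 0.15, 0.1, 0.1]     # same order as the YAML above
models = [
    {"layer.weight": torch.randn(4, 4)}   # stand-in state dict per source model
    for _ in weights
]

merged = {}
for name in models[0]:
    merged[name] = sum(w * m[name] for w, m in zip(weights, models))

print(merged["layer.weight"].shape)  # torch.Size([4, 4])

Since the weights sum to 1.0, the merge is a plain weighted average of the five models; the actual merge can be reproduced by passing the YAML above to mergekit's mergekit-yaml command.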
Prompt Format (Alpaca):
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Take the role of {{char}} in a play where you leave a lasting impression on {{user}}. Never skip or gloss over {{char}}'s actions.
### Instruction:
{prompt}
### Response:
{output}
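
A small helper for filling this template programmatically could look like the sketch below; build_prompt and the example character and user names are illustrative, not part of the card.

# Sketch of filling the Alpaca-style template above. build_prompt and the
# example values are illustrative; substitute your own character card and message.
SYSTEM = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n"
    "Take the role of {char} in a play where you leave a lasting impression on {user}. "
    "Never skip or gloss over {char}'s actions.\n\n"
)

def build_prompt(char: str, user: str, message: str) -> str:
    # The second ### Instruction: block carries the actual user turn, as in the template.
    return SYSTEM.format(char=char, user=user) + f"### Instruction:\n{message}\n\n### Response:\n"

print(build_prompt("Mystic", "Traveler", "Introduce yourself."))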