Manticore-Guanaco still holds up, so I'm trying the same thing with Llama 2: this just applies a Guanaco LoRA on MysticGem v1.3.
Thanks to mradermacher for the quants!
merge
This is a merge of pre-trained language models created using mergekit.
Merge Details
Merge Method
This model was merged using the linear merge method.
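Conceptually, a linear merge is a weighted average of matching tensors across the input checkpoints. A toy sketch of the idea, using hypothetical tiny state dicts rather than real 13B weights (mergekit additionally handles LoRA application, tokenizers, and sharding):

```python
# Toy illustration of a linear merge: element-wise weighted average
# of tensors that share the same name across checkpoints.
import torch

state_a = {"layer.weight": torch.tensor([1.0, 2.0])}
state_b = {"layer.weight": torch.tensor([3.0, 4.0])}
weights = {"a": 0.5, "b": 0.5}  # per-model weights, like `weight:` in the YAML below

merged = {
    name: weights["a"] * state_a[name] + weights["b"] * state_b[name]
    for name in state_a
}
print(merged["layer.weight"])  # tensor([2., 3.])
```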
Models Merged
The following models were included in the merge:
- v000000/MysticGem-v1.3-L2-13B + Mikael110/llama-2-13b-guanaco-qlora (LoRA)
Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: v000000/MysticGem-v1.3-L2-13B+Mikael110/llama-2-13b-guanaco-qlora
    parameters:
      weight: 1.0
merge_method: linear
dtype: bfloat16
```
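In practice this YAML is fed to mergekit itself; as a rough sketch of what the config amounts to: with a single model entry at weight 1.0, the linear merge effectively just bakes the Guanaco QLoRA adapter into MysticGem-v1.3. An approximate peft equivalent (the output directory name is a placeholder):

```python
# Rough peft-based approximation of the config above:
# load the base model, apply the LoRA adapter, merge it into the weights, save.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "v000000/MysticGem-v1.3-L2-13B", torch_dtype=torch.bfloat16
)
model = PeftModel.from_pretrained(base, "Mikael110/llama-2-13b-guanaco-qlora")
merged = model.merge_and_unload()  # fold the adapter into the base weights

tokenizer = AutoTokenizer.from_pretrained("v000000/MysticGem-v1.3-L2-13B")
merged.save_pretrained("./MysticGem-v1.3-Guanaco-L2-13B")  # placeholder path
tokenizer.save_pretrained("./MysticGem-v1.3-Guanaco-L2-13B")
```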
Prompt Format (Alpaca):
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
Take the role of {{char}} in a play where you leave a lasting impression on {{user}}. Never skip or gloss over {{char}}'s actions.

### Instruction:
{prompt}

### Response:
{output}
```
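A minimal transformers sketch that fills in this template and generates a reply; the repo id is this model's, while the character/user names and sampling settings are placeholders:

```python
# Sketch: build the Alpaca-style prompt above and generate with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "v000000/MysticGem-v1.3-Guanaco-L2-13B"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

char, user = "Aria", "Sam"  # placeholder names
system = (
    f"Take the role of {char} in a play where you leave a lasting impression "
    f"on {user}. Never skip or gloss over {char}'s actions."
)
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{system}\n\n"
    f"### Instruction:\n{user}: Tell me about yourself.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```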
Prompt Format (Metharme):
```
<|system|>Take the role of {{char}} in a play where you leave a lasting impression on {{user}}. Never skip or gloss over {{char}}'s actions.
<|user|>{{user}}: {prompt}<|model|>{{char}}: {output}
```
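The same idea for the Metharme template, built as a plain f-string (placeholder names again); feed the resulting string to model.generate() exactly as in the Alpaca example above:

```python
# Sketch: fill the Metharme template with placeholder values.
char, user = "Aria", "Sam"
system = (
    f"Take the role of {char} in a play where you leave a lasting impression "
    f"on {user}. Never skip or gloss over {char}'s actions."
)
prompt = f"<|system|>{system}<|user|>{user}: Tell me about yourself.<|model|>{char}: "
```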