---
base_model:
- TeeZee/Kyllene-34B-v1.1
- Doctor-Shotgun/Nous-Capybara-limarpv3-34B
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
language:
- en
---
# Kyllima 34B v1
![image/png](Kyllima.png)
## Model Details
This is a simple 50/50 merge of two of my favorite Yi 34B-based models for roleplay and creative writing, created using [mergekit](https://github.com/cg123/mergekit) on [Arcee.ai](https://app.arcee.ai/).
There's a good amount of Nous Capybara 34B in here, some Bagel DPO, LimaRP v3, and other goodness, and Kyllene keeps the output less sloppy. 200K context. Uncensored.
Use with the metadata prompt format, Alpaca-LimaRP, or Vicuna.
Recommended sampler settings: temperature 0.8-1, repetition penalty 1.1-1.2, top-p 1, min-p 0.05, top-k 40.
Add `</s>` to your stop strings, and `\n{{user}}` or `[INST]` if necessary.
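As a rough sketch, the recommendations above map onto a sampler configuration like the following. The key names here follow common llama.cpp-/text-generation-webui-style conventions and are an assumption; adjust them to whatever frontend you use:

```python
# Hedged sketch: key names vary by frontend; these follow common
# llama.cpp-style sampler conventions, not a specific API.
sampler_settings = {
    "temperature": 0.9,          # recommended range: 0.8-1.0
    "repetition_penalty": 1.15,  # recommended range: 1.1-1.2
    "top_p": 1.0,
    "min_p": 0.05,
    "top_k": 40,
    # Stop strings: EOS token plus optional turn markers.
    "stop": ["</s>", "\n{{user}}", "[INST]"],
}
```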
I use a slightly modified version of RisuAI's default system prompt with good results. I suggest adding a couple lines to the system prompt telling the model to write in complete sentences, and NOT to write prompts to itself.
It's sensitive to small changes in settings and to the style/format of your own writing.
The original upload had a broken tokenizer. If you downloaded before 10/2/24, please re-download.
Static GGUF available [here](https://huggingface.co/sirmyrrh/Kyllima-34B-v1-GGUF) or [here](https://huggingface.co/mradermacher/Kyllima-34B-v1-GGUF).
Imatrix GGUF available [here](https://huggingface.co/mradermacher/Kyllima-34B-v1-i1-GGUF). With thanks to [mradermacher](https://huggingface.co/mradermacher/).
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [TeeZee/Kyllene-34B-v1.1](https://huggingface.co/TeeZee/Kyllene-34B-v1.1) as the base.
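For intuition, the three TIES steps (trim each task vector to its largest-magnitude entries, elect a sign per parameter, then merge only the values that agree with the elected sign) can be sketched on toy vectors. This is an illustrative simplification, not mergekit's actual implementation:

```python
def ties_merge(base, deltas, density=0.5, weight=0.5):
    """Toy TIES merge over flat parameter lists.

    `deltas` are task vectors (model minus base). Simplified sketch:
    real implementations work on tensors and handle per-model weights.
    """
    # Trim: keep only the top `density` fraction of each delta by magnitude.
    trimmed = []
    for d in deltas:
        k = max(1, int(len(d) * density))
        thresh = sorted((abs(x) for x in d), reverse=True)[k - 1]
        trimmed.append([x if abs(x) >= thresh else 0.0 for x in d])

    merged = []
    for i, b in enumerate(base):
        vals = [t[i] for t in trimmed]
        # Elect sign: majority sign of the summed trimmed values.
        sign = 1.0 if sum(vals) >= 0 else -1.0
        # Merge: average only the values agreeing with the elected sign.
        agree = [v for v in vals if v != 0.0 and (v > 0) == (sign > 0)]
        step = sum(agree) / len(agree) if agree else 0.0
        merged.append(b + weight * step)
    return merged

base = [0.0, 0.0, 0.0, 0.0]
deltas = [[1.0, -2.0, 0.5, 0.1], [1.0, 2.0, -0.5, 0.2]]
print(ties_merge(base, deltas))  # -> [0.5, 1.0, 0.0, 0.0]
```

The trim step is what `density: 0.5` controls in the config below, and `weight: 0.5` scales each model's contribution.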
### Models Merged
The following models were included in the merge:
* [TeeZee/Kyllene-34B-v1.1](https://huggingface.co/TeeZee/Kyllene-34B-v1.1)
* [Doctor-Shotgun/Nous-Capybara-limarpv3-34B](https://huggingface.co/Doctor-Shotgun/Nous-Capybara-limarpv3-34B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: TeeZee/Kyllene-34B-v1.1
chat_template: auto
dtype: float16
merge_method: ties
models:
  - model: TeeZee/Kyllene-34B-v1.1
    parameters:
      density: 0.5
      weight: 0.5
  - model: Doctor-Shotgun/Nous-Capybara-limarpv3-34B
    parameters:
      density: 0.5
      weight: 0.5
parameters:
  embed_slerp: true
  int8_mask: true
  normalize: false
tokenizer_source: base
```
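To reproduce the merge locally (assuming mergekit is installed and you have disk space for both source models), save the YAML above to a file and run mergekit's CLI; the file and output names here are placeholders:

```shell
# Assumes mergekit is installed: pip install mergekit
# kyllima.yaml holds the config above; the output directory is a placeholder.
mergekit-yaml kyllima.yaml ./Kyllima-34B-v1 --cuda
```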