---
license: other
---
See LICENSE file for license.
This is a collection of LLaMA models that were merged with the storytelling LoRAs and then converted to 4-bit, trained on the storytelling dataset I used for those LoRAs.
UPDATE: 04/04
Cleaned the data, retrained, and re-quantized to group size 32 in safetensors format (a rough quantization sketch follows the format note below). The formatting oddities seem to have been wiped out.
Format: nothing notable, other than chapters being separated by ***, which may therefore mess some things up.
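For anyone curious what the group size 32 / safetensors conversion roughly involves, here is a minimal sketch using the AutoGPTQ library. This is an illustration under stated assumptions only; the actual conversion here used the GPTQ cuda branch listed further down, and the paths and calibration text below are placeholders.
```
# Hypothetical quantization sketch (AutoGPTQ), not the exact script used here.
# Paths and calibration text are placeholders.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

base = "path/to/merged-model"  # merged (LoRA-baked) fp16 model
quant_cfg = BaseQuantizeConfig(
    bits=4,         # 4-bit weights
    group_size=32,  # matches the "group size 32" mentioned above
    desc_act=False,
)

tokenizer = AutoTokenizer.from_pretrained(base, use_fast=True)
model = AutoGPTQForCausalLM.from_pretrained(base, quant_cfg)

# A couple of calibration samples; a real run would use excerpts from the dataset.
examples = [tokenizer("Once upon a time, a chapter of a story...", return_tensors="pt")]
model.quantize(examples)

model.save_quantized("path/to/output-4bit-32g", use_safetensors=True)
```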
UPDATE: 2024-04-18
Retrained and merged using updated LoRAs.
To merge and convert, I used:
```
transformers 4.28.1
gptq cuda branch 5731aa1
llamacpp master branch 8944a13
```
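The exact merge commands aren't recorded here, but baking a LoRA into its base model before quantization or ggml conversion generally looks something like the sketch below (using peft + transformers; the model and adapter paths are placeholders, not the actual repos used):
```
# Hypothetical merge step using peft + transformers; paths are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_path = "path/to/llama-base"         # original LLaMA weights (HF format)
lora_path = "path/to/storytelling-lora"  # the storytelling LoRA adapter

base = AutoModelForCausalLM.from_pretrained(base_path, torch_dtype=torch.float16)
merged = PeftModel.from_pretrained(base, lora_path)
merged = merged.merge_and_unload()  # bake the LoRA weights into the base model

# Save the merged fp16 model; this is what then gets converted to
# 4-bit safetensors (GPTQ) or ggml for llama.cpp.
merged.save_pretrained("path/to/merged-model", safe_serialization=True)
AutoTokenizer.from_pretrained(base_path).save_pretrained("path/to/merged-model")
```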
Notes for usage:
```
- These models are not instruct-tuned; they are designed to supplement existing story data.
- There will likely be some bleedthrough of locations and names; this is especially noticeable if you use them with very little context.
- There isn't any notable formatting beyond this: ### separated stories in the dataset, and *** separated chapters (see the splitting sketch after this block).
```
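As an illustration of that layout, here is a short sketch of how text in that format could be split back into stories and chapters. The file name and structure are assumptions for the example, not the actual dataset files.
```
# Hypothetical helper reflecting the ### story / *** chapter layout described above.
# "dataset.txt" is a placeholder name, not an actual dataset file.
def split_stories(raw_text: str) -> list[list[str]]:
    """Return a list of stories, each story being a list of chapter strings."""
    stories = []
    for story in raw_text.split("###"):
        story = story.strip()
        if not story:
            continue
        chapters = [ch.strip() for ch in story.split("***") if ch.strip()]
        stories.append(chapters)
    return stories

with open("dataset.txt", encoding="utf-8") as f:
    stories = split_stories(f.read())
print(f"{len(stories)} stories, {sum(len(c) for c in stories)} chapters")
```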
Currently transferring models over.
```
7B safetensors 4bit - UPLOADED
7B ggml 4bit - UPLOADED
13B safetensors 4bit - UPLOADED
13B ggml 4bit - WAITING ON UPLOAD
30B safetensors 4bit -
30B ggml 4bit - WAITING ON UPLOAD
```