---
license: apache-2.0
---

A collection of LoRAs for int8 LLaMA, trained on an assortment of literature (approximately 16 MB) for 2 epochs.
UPDATE: 2024-04-18

Retrained using Transformers 4.28.1, for two epochs, with a small amount of additional data.
Notes for usage:

- These models are not instruct LoRAs. They are designed to supplement existing story data.
- There will likely be some bleed-through of locations and names; this is especially noticeable when generating with very little context.
- There is no notable formatting in the dataset beyond the separators: `###` between stories and `***` between chapters, as illustrated below.
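For illustration, a dataset laid out with those separators would look like the following (the story and chapter text here is placeholder, not actual training data):

```
First story, chapter one...
***
First story, chapter two...
###
Second story, chapter one...
```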
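As a rough sketch of how one of these LoRAs might be loaded for story continuation, the following assumes a Transformers + bitsandbytes + PEFT setup; the model paths and generation settings are placeholders, not part of this repository:

```
# A minimal sketch, assuming a PEFT + bitsandbytes setup; the paths below are
# placeholders for the base LLaMA weights and a LoRA directory from this repo.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base_path = "path/to/llama-base"        # placeholder: base LLaMA weights
lora_path = "path/to/literature-lora"   # placeholder: one of these LoRAs

tokenizer = LlamaTokenizer.from_pretrained(base_path)
model = LlamaForCausalLM.from_pretrained(
    base_path,
    load_in_8bit=True,   # int8 weights via bitsandbytes, matching the int8 training setup
    device_map="auto",
)
model = PeftModel.from_pretrained(model, lora_path)
model.eval()

# Feed existing story text to continue, rather than an instruction.
prompt = "The lighthouse keeper climbed the stairs for the last time,"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Because these adapters supplement story data rather than follow instructions, prompting with a passage of prose (rather than a request) should give the most coherent continuations.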