Merge of [SuperHOT-LoRA-prototype](https://huggingface.co/kaiokendev/SuperHOT-LoRA-prototype) and [llama-30b](https://huggingface.co/huggyllama/llama-30b)
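For reference, a LoRA merge like this is usually done by applying the adapter weights to the base model and saving the combined result. Below is a minimal sketch using PEFT; the paths, dtype, and output directory are assumptions (the prototype repo stores its adapters in subfolders, so the exact adapter path may differ), not the author's actual script.
```
# Hypothetical merge sketch with PEFT; paths and settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-30b",
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
)
# The adapter path is an assumption; the prototype repo keeps multiple
# LoRA variants in subfolders, so point this at the one you want.
model = PeftModel.from_pretrained(base, "kaiokendev/SuperHOT-LoRA-prototype")
model = model.merge_and_unload()  # fold the LoRA deltas into the base weights

tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-30b")
model.save_pretrained("Llama30B-SuperHOT")
tokenizer.save_pretrained("Llama30B-SuperHOT")
```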
Quantization command for Llama30B-SuperHOT-4bit-128g.safetensors (GPTQ-for-LLaMa's `llama.py`, 4-bit, group size 128):
```
CUDA_VISIBLE_DEVICES=0 python llama.py ausboss/Llama30B-SuperHOT c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors Llama30B-SuperHOT-4bit-128g.safetensors
```
Quantization command for Llama30B-SuperHOT-4bit.safetensors (GPTQ-for-LLaMa's `llama.py`, 4-bit, no group size):
```
CUDA_VISIBLE_DEVICES=0 python llama.py ausboss/Llama30B-SuperHOT c4 --wbits 4 --true-sequential --save_safetensors Llama30B-SuperHOT-4bit.safetensors
```
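Once quantized, the resulting `.safetensors` file can be loaded for inference with any GPTQ-aware loader. The snippet below is a minimal sketch using AutoGPTQ; the local directory, `model_basename`, and generation settings are assumptions (GPTQ-for-LLaMa's own inference script or text-generation-webui also work).
```
# Minimal AutoGPTQ loading sketch; the local path and basename are assumptions
# and must match where the quantized .safetensors file was saved. The directory
# should also contain the model's config.json.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

tokenizer = AutoTokenizer.from_pretrained("ausboss/Llama30B-SuperHOT")
model = AutoGPTQForCausalLM.from_quantized(
    "./Llama30B-SuperHOT-4bit",                    # assumed local directory
    model_basename="Llama30B-SuperHOT-4bit-128g",  # matches --save_safetensors
    use_safetensors=True,
    device="cuda:0",
    # GPTQ-for-LLaMa does not write a quantize_config.json, so pass the
    # settings used above explicitly.
    quantize_config=BaseQuantizeConfig(bits=4, group_size=128),
)

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```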
# From the SuperHOT Page
## Prototypes for SuperHOT
No guarantees for output quality; I'm simply uploading what I have so others can play around with it. I'm not even sure the rank in cutoff-8192 is correct (I think it should be 10, maybe; I can't remember).
All prototypes are from extremely early epochs (sub-0.5).
## Model/Training
All were trained with Flash Attention on conversation sequence lengths ranging from 8K to 16K tokens (no ALiBi unless otherwise mentioned).
All were trained on LLaMa 13B 4-bit (no groupsize).
(*Personally, I like the 8K cutoff version better, so I would say start with that one*)
## Data
A combination of various datasets and cleaned logs converted into datasets, including but not limited to:
- Bluemoon Fanbased
- Roleplaying Guild
- Community-sourced outputs
- [Dan's PocketDoc/RUCAIBox-Story-Generation-Alpaca](https://huggingface.co/datasets/PocketDoc/RUCAIBox-Story-Generation-Alpaca)
- [IlyaGusev/gpt_roleplay_realm](https://huggingface.co/datasets/IlyaGusev/gpt_roleplay_realm)
- others
## Bias
SuperHOT is a fiction-focused model. No alignment has been performed on the training data. Be mindful that this model may output harmful, violent, or otherwise problematic content.
## Format
Any format should work with such early checkpoints. However, the training data is entirely in the following format:
```
---
mode: chat
characters:
<char1 name>: <descriptive tags for char1>
<char2 name>: <descriptive tags for char2>
summary: <summary of the story thus far or the purpose of the chat> (optional)
<any other miscellaneous data>
---
<chat history>
```
By "any other miscellaneous data", it means you should be able to put any additional metadata for the story or characters. I.e.,
```
...
locations:
location1: <tags for location1>
inventory:
item1: <tags for item1>
```
Again, the format does not carry much weight with these early checkpoints. I have found success with the following setup for an RPG-like experience; just play around with the format and see what works:
```
---
mode: rpg
characters:
You: a new player
system: The system controls the RPG; it handles character creation, world narration, and quest management, and also controls any NPCs and inventory tracking. Its first message provides a lengthy introduction for the player to the RPG world they are about to play in. After character creation is complete, the system gives a lengthy introduction to the world of ___. The first quest follows right after
rpg setting: The world of ___
rpg rules: Any rules typical of RPG games, including typical items, battle stats, etc.
---
```
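Using the hypothetical `build_prompt` helper sketched earlier, that RPG setup could be assembled like this (the "___" blanks are placeholders carried over from the example above):
```
# Reuses the hypothetical build_prompt helper; "___" stays a placeholder.
rpg_prompt = build_prompt(
    mode="rpg",
    characters={
        "You": "a new player",
        "system": "The system controls the RPG; it handles character "
                  "creation, world narration, quest management, NPCs, "
                  "and inventory tracking.",
    },
    extra={
        "rpg setting": "The world of ___",
        "rpg rules": "Any rules typical of RPG games, including typical "
                     "items, battle stats, etc.",
    },
    history="",
)
print(rpg_prompt)
```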