---
tags:
  - not-for-all-audiences
license: apache-2.0
---

# Aeonis-20b-GGUF

Based on Mistral NeMo.

Trained with Alpaca prompt formatting; the Mistral format also works.
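As a quick reference, here is a minimal sketch of the standard Alpaca prompt template mentioned above. The preamble text and the example instruction are the generic Alpaca defaults, not this model's actual system prompt:

```python
# Standard Alpaca prompt template (generic defaults, shown for
# illustration; swap in your own system prompt as needed).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def format_alpaca(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca prompt format."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

prompt = format_alpaca("Summarize the plot of Hamlet in one sentence.")
```

The model then generates its answer after the `### Response:` marker.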


## Assistant Examples - 8-bit GGUF

(basic Ooba preset, assistant character, and system prompt)


## NSFW Writing Example - 8-bit GGUF

Prompt: "Write a detailed, erotic story about a stripper sleeping with her co-worker"

(basic Ooba preset, assistant character, and system prompt)


## Companionship Chat Example - 8-bit GGUF

Using Goldie, one of the top characters on Chub.ai

(basic Ooba preset and system prompt)


## Training Methodology

The model was trained on a variation of TheSkullery/NeMoria-21b, made by finetuning two NeMo models, one for each added "core" (set of repeated layers). One model was overfit to RP data; the other was overfit to factual data and input analysis.

The base NeMo was then stitched together with the two models, so the repeated portion runs one vanilla NeMo core, then the "Virgin" core, then the "Slut" core, a series of layers I like to call the "Whore/Madonna complex."

With this structure in place, the entire model was continually pretrained on a ~1.5 GB private dataset of domain data mixed with stabilizing agents. The Virgin and Slut cores were then each instruct-trained on their respective domains, one at a time, with all other layers frozen. Finally, the entire model was SFT'd and DPO'd.
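The per-core instruct-training step can be sketched as follows. This is a toy illustration of the freeze-all-but-one-core pattern, not the actual training code: `Linear` layers stand in for transformer blocks, and the core size of 4 layers is illustrative, not the real Aeonis-20b layout:

```python
import torch.nn as nn

def build_stitched_stack(core_layers: int = 4) -> nn.ModuleList:
    """Toy stand-in for the stitched model: three consecutive 'cores'
    (base, Virgin, Slut), each core_layers deep. Real cores are
    transformer blocks; Linear layers keep the sketch small."""
    return nn.ModuleList(nn.Linear(8, 8) for _ in range(3 * core_layers))

def freeze_all_but(model: nn.ModuleList, start: int, end: int) -> None:
    """Freeze every layer except indices [start, end): the pattern of
    instruct-training one core while all other layers stay fixed."""
    for i, layer in enumerate(model):
        for p in layer.parameters():
            p.requires_grad = start <= i < end

model = build_stitched_stack()
# Train only the second core (the "Virgin" core, layers 4..7):
freeze_all_but(model, 4, 8)
trainable = sum(p.requires_grad for p in model.parameters())
```

Repeating `freeze_all_but` with the next core's index range trains the other core in turn, which matches the "one at a time" step described above.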