Curious as to how this was trained

#2 by clayshoaf

Could you give a little more information on how you trained this? What method did you use, and how did you break up the data? Did you just break it into 2048-token chunks, or was there any kind of question/answer data included in the dataset?

I used this: https://github.com/GamerUntouch/minimal-llama
I made a guide here: https://rentry.org/LLAMALORA
The latest text-generation-webui has an option to train LoRAs too.
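For anyone landing here later, this is roughly the shape of a LoRA setup using Hugging Face's peft library; it's a minimal sketch of the general approach, not the exact configuration minimal-llama or the webui uses, and the checkpoint path and hyperparameters are placeholders:

```python
# Minimal LoRA setup sketch with the peft library.
# The checkpoint path and hyperparameters are illustrative placeholders,
# not the exact settings used by minimal-llama or text-generation-webui.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("path/to/llama-7b")  # placeholder path

lora_config = LoraConfig(
    r=8,                                  # rank of the LoRA update matrices
    lora_alpha=16,                        # scaling factor for the updates
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter weights train
```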

I just took some scraped Lovecraft data and cleaned it up, separating stories with ***; it wasn't anything more than that.
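In case it helps, here's a sketch of how that prep could look in Python: split the scraped text on the *** separator, then pack each story into 2048-token chunks. This is my own illustration of the idea, not the actual script used; the file name and tokenizer path are assumptions.

```python
# Sketch of the data prep described above: split scraped stories on the
# "***" separator, then break them into 2048-token chunks for training.
# File name and tokenizer path are placeholders, not the ones actually used.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("path/to/llama-7b")  # placeholder path

with open("lovecraft_scraped.txt", encoding="utf-8") as f:  # hypothetical file
    raw = f.read()

# Drop empty fragments left over from the separators.
stories = [s.strip() for s in raw.split("***") if s.strip()]

chunks = []
for story in stories:
    ids = tokenizer(story, add_special_tokens=False)["input_ids"]
    # Break each story into fixed 2048-token blocks.
    for i in range(0, len(ids), 2048):
        chunks.append(ids[i:i + 2048])

print(f"{len(stories)} stories -> {len(chunks)} training chunks")
```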
