# Miqu PlayMaid 70B v0.1 - EXL2-8BPW
This model uses the Alpaca prompting format. Unlike the base model, this one was tuned on a 16384-token sequence length due to memory constraints.
This is a roleplay-focused finetune using MiquMaid v2 70B DPO by Undi95 and IkariDev as the base.

My goal was to push MiquMaid further into creative writing and roleplay, hopefully adding some spice to it. Ultimately I want to follow Undi's and Ikari's approach and combine it into a 2x70B alongside their Miqu 70B DPO model.
## Credits:
- Netrve
- Special thanks to everyone on TheBloke's Discord who helped me!
## Credits for MiquMaid:
- Undi
- IkariDev
## Description
This repo contains the EXL2-8BPW quantized variant of Miqu-PlayMaid-70B-v0.1.
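If you want to wire the quant up by hand, here is a minimal sketch of loading it with the exllamav2 Python package. The model directory, sampler values, and prompt below are placeholders I chose for illustration, not values shipped with this repo:

```python
# Minimal sketch: load this EXL2 quant with exllamav2.
# model_dir, sampler settings, and the prompt are placeholders.
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "./Miqu-PlayMaid-70B-v0.1-EXL2-8BPW"  # local download path (placeholder)
config.prepare()
config.max_seq_len = 16384  # the guaranteed context length (see Format below)

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split the 8bpw weights across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
generator.warmup()

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8  # placeholder sampler values
settings.top_p = 0.95

# Prompt string follows the Alpaca layout described under Format below.
prompt = "### Instruction:\nYou are a creative roleplay partner.\n\n### Response:\n"
output = generator.generate_simple(prompt, settings, num_tokens=256)
print(output)
```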
## Training data used:
## Format:
I can only guarantee that it works at a maximum context length of 16384 tokens. It might work at 32768, but no promises.
```
### Instruction:
{system prompt}

### Input:
{input}

### Response:
{reply}
```
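As a sketch of how the template above might be filled in code: the helper name is hypothetical, and the single-blank-line spacing between blocks is my assumption about the intended Alpaca layout.

```python
# Hypothetical helper that fills the Alpaca template shown above.
# Blank-line spacing between blocks is an assumption, not confirmed by the card.
def build_prompt(system_prompt: str, user_input: str) -> str:
    return (
        f"### Instruction:\n{system_prompt}\n\n"
        f"### Input:\n{user_input}\n\n"
        "### Response:\n"
    )

prompt = build_prompt(
    "You are a creative roleplay partner.",      # placeholder system prompt
    "Describe the tavern as the party enters.",  # placeholder user turn
)
```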
## Support
- Netrve: If you like what I did, feel free to support future work here.
Don't forget to send some love to the original masterminds behind MiquMaid:
- Undi: If you want to support us, you can here
- IkariDev: Visit my retro/neocities style website please kek