Tags: Not-For-All-Audiences, nsfw
EXL2 quantization of Undi95/OpenDolphinMaid-4x7b
Branch
7bh8 : 7 bpw (bits per weight), h8 (8-bit head)
Calibration dataset: The Pile, 0007.parquet
Quantization settings:
python convert.py -i models/Undi95_OpenDolphinMaid-4x7b -o OpenDolphinMaid-4x7b-temp -cf OpenDolphinMaid-4x7b-7bpw-h8-exl2 -c 0007.parquet -l 8192 -b 7 -hb 8 -ml 8192
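For reference, a minimal loading sketch using the exllamav2 Python library is below. The local path, GPU split behavior, and sampler values are assumptions for illustration, not settings shipped with this repo.

```python
# Sketch: load the 7bh8 EXL2 quant with exllamav2 (the local path is an assumption).
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "OpenDolphinMaid-4x7b-7bpw-h8-exl2"  # local download of the 7bh8 branch
config.prepare()
config.max_seq_len = 8192  # matches the -l / -ml values used during quantization

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8  # example sampler values, not recommendations
settings.top_p = 0.95

print(generator.generate_simple("Hello, my name is", settings, 64))
```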
Below this line is the original README.
A merge of OpenHermes and Dolphin with 2x Noromaid DPO, trying to add a little more brain to the model while staying smaller than an 8x7b.
It seems to work well.
Description
This repo contains fp16 files of OpenDolphinMaid-4x7b.
Models and LoRA used
- NeverSleep/Noromaid-7B-0.4-DPO x 2
- teknium/OpenHermes-2.5-Mistral-7B
- cognitivecomputations/dolphin-2.6-mistral-7b-dpo
Prompt template: ChatML
<|im_start|>system
{sysprompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
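As a small illustration, the template can be assembled as a plain string like this; the system prompt and user message below are placeholders:

```python
# Build a ChatML prompt string; sysprompt and prompt are placeholder values.
def chatml_prompt(sysprompt: str, prompt: str) -> str:
    return (
        f"<|im_start|>system\n{sysprompt}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

text = chatml_prompt(
    "You are a helpful assistant.",      # placeholder system prompt
    "Summarize the plot of Moby-Dick.",  # placeholder user message
)
```

If the resulting string is fed to the exllamav2 generator sketched above, passing encode_special_tokens=True to generate_simple lets the ChatML control tokens be tokenized as special tokens, assuming the tokenizer defines them as such.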
If you want to support me, you can here.