A merge of OpenHermes and Dolphin with two copies of Noromaid DPO, aiming to add a little more brainpower to the model while staying smaller than an 8x7b.

It seems to work well.

Description

This repo contains fp16 files of OpenDolphinMaid-4x7b.

Models and LoRA used

  • NeverSleep/Noromaid-7B-0.4-DPO x 2
  • teknium/OpenHermes-2.5-Mistral-7B
  • cognitivecomputations/dolphin-2.6-mistral-7b-dpo

Prompt template: ChatML

<|im_start|>system
{sysprompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
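
As a concrete sketch, the template above can be filled in with a small helper. This is plain Python and the function name is mine, not part of the repo; it simply reproduces the ChatML layout shown above, leaving the assistant turn open for the model to complete:

```python
def build_chatml_prompt(sysprompt: str, prompt: str) -> str:
    """Assemble a ChatML prompt string matching the template above.

    The assistant header is left open (no <|im_end|>) so the model
    generates the assistant's reply as a continuation.
    """
    return (
        f"<|im_start|>system\n{sysprompt}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# Example usage:
text = build_chatml_prompt("You are a helpful assistant.", "Hello!")
print(text)
```

Most inference frontends (e.g. ones exposing a chat template setting) can apply this format for you if you select ChatML.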

If you want to support me, you can here.
