# kuno-kunoichi-v1-DPO-v2-SLERP-7B-8.0bpw_h8_exl2

This is an 8.0bpw exl2 quant of a merge of pre-trained language models created using mergekit. Full weights are available here, and a Q8_0 GGUF quant is available here.

Light testing was performed with ChatML-format prompting, using temperature 1.0 to 1.1 and minP 0.01 to 0.03; a sketch of the prompt layout follows below. The model also natively supports Alpaca-format prompts.
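
For reference, ChatML wraps each conversation turn in `<|im_start|>` / `<|im_end|>` markers. The helper below is a minimal illustrative sketch of the prompt layout used in testing, not part of any library; feeding the prompt to an actual backend (e.g. an exl2 loader) is omitted.

```python
def chatml_prompt(system: str, user: str) -> str:
    # Each turn is wrapped in <|im_start|>role ... <|im_end|> markers,
    # and the prompt ends with an open assistant turn for the model to fill.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(chatml_prompt("You are a helpful assistant.", "Hello!"))
# Sampler settings used in light testing: temperature 1.0-1.1, minP 0.01-0.03.
```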

## Merge Details

### Merge Method

This model was merged using the SLERP merge method, with SanjiWatsuki/Kunoichi-7B as the base model.
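
For intuition, SLERP blends corresponding weight tensors along the great-circle arc between them rather than along the straight line that plain averaging follows. Below is a minimal per-tensor sketch in PyTorch; mergekit's actual implementation differs in its handling of normalization and edge cases.

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Flatten each tensor and work in float32 for numerical stability.
    a, b = v0.flatten().float(), v1.flatten().float()
    # Angle between the two weight vectors via their cosine similarity.
    cos_omega = (torch.dot(a, b) / (a.norm() * b.norm() + eps)).clamp(-1.0, 1.0)
    omega = torch.acos(cos_omega)
    # Nearly colinear vectors: fall back to plain linear interpolation,
    # since sin(omega) in the denominator would be numerically unstable.
    if torch.sin(omega).abs() < eps:
        return ((1.0 - t) * a + t * b).reshape(v0.shape).to(v0.dtype)
    # Interpolate along the arc between the two vectors.
    s0 = torch.sin((1.0 - t) * omega) / torch.sin(omega)
    s1 = torch.sin(t * omega) / torch.sin(omega)
    return (s0 * a + s1 * b).reshape(v0.shape).to(v0.dtype)

# With t = 0.5, as in the configuration below, the result sits midway
# along the arc between corresponding tensors of the two models.
merged = slerp(0.5, torch.randn(4, 4), torch.randn(4, 4))
```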

### Models Merged

The following models were included in the merge:

* SanjiWatsuki/Kunoichi-7B
* SanjiWatsuki/Kunoichi-DPO-v2-7B

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
    - model: SanjiWatsuki/Kunoichi-7B
      layer_range: [0,32]
    - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
      layer_range: [0,32]
merge_method: slerp
base_model: SanjiWatsuki/Kunoichi-7B
parameters:
  t:
    - value: 0.5
dtype: float16
```
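
Assuming a standard mergekit install, a merge like this is typically reproduced by saving the configuration to a file and passing it to the `mergekit-yaml` command along with an output directory.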