---
base_model: grimjim/kuno-kunoichi-v1-DPO-v2-SLERP-7B
license: cc-by-nc-4.0
quanted_by: grimjim
pipeline_tag: text-generation
---
# kuno-kunoichi-v1-DPO-v2-SLERP-7B-8.0bpw_h8_exl2
This is an 8.0bpw exl2 quant of a merge of pre-trained language models created using mergekit. Full weights are here. A Q8_0 GGUF quant is here.
Light testing was performed with ChatML-format prompting, using temperature 1 to 1.1 and minP 0.01 to 0.03. The model natively supports Alpaca-format prompts.
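As a rough sketch, the quant can be loaded with the exllamav2 Python API and prompted in ChatML with sampler settings in the tested range. Class and method names here reflect the exllamav2 examples and may differ across releases; the model path and prompt are placeholders, so treat this as an outline rather than a drop-in script.

```python
# Hedged sketch: load the exl2 quant with exllamav2 and sample with settings
# in the tested range (ChatML prompt, temperature ~1.05, minP ~0.02).
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "kuno-kunoichi-v1-DPO-v2-SLERP-7B-8.0bpw_h8_exl2"  # local download path (placeholder)
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 1.05  # tested range: 1 to 1.1
settings.min_p = 0.02        # tested range: 0.01 to 0.03

# ChatML-format prompt, as used in testing.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite a haiku about mountains.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
print(generator.generate_simple(prompt, settings, 200))
```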
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
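For intuition, here is a minimal NumPy sketch of spherical linear interpolation applied to a pair of weight tensors. This is an illustration of the technique, not mergekit's actual implementation; the function name, the flattening of tensors, and the fallback to linear interpolation for nearly colinear weights are assumptions made for the example.

```python
# Minimal sketch of SLERP between two weight tensors of the same shape.
import numpy as np

def slerp(t: float, w0: np.ndarray, w1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Interpolate along the great-circle arc between two flattened weight tensors."""
    v0, v1 = w0.ravel(), w1.ravel()
    # Angle between the two weight vectors.
    cos_omega = np.dot(v0, v1) / max(np.linalg.norm(v0) * np.linalg.norm(v1), eps)
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if omega < eps:
        # Nearly colinear vectors: plain linear interpolation is numerically safer.
        merged = (1.0 - t) * v0 + t * v1
    else:
        sin_omega = np.sin(omega)
        merged = (np.sin((1.0 - t) * omega) / sin_omega) * v0 \
               + (np.sin(t * omega) / sin_omega) * v1
    return merged.reshape(w0.shape)

# With t = 0.5, as in the configuration below, each merged tensor sits halfway
# along the arc between the corresponding tensors of the two source models.
a = np.random.randn(16, 16).astype(np.float32)
b = np.random.randn(16, 16).astype(np.float32)
merged = slerp(0.5, a, b)
```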
### Models Merged
The following models were included in the merge:

- SanjiWatsuki/Kunoichi-7B
- SanjiWatsuki/Kunoichi-DPO-v2-7B
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: SanjiWatsuki/Kunoichi-7B
        layer_range: [0,32]
      - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
        layer_range: [0,32]
merge_method: slerp
base_model: SanjiWatsuki/Kunoichi-7B
parameters:
  t:
    - value: 0.5
dtype: float16
```
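Assuming a recent mergekit install, a configuration like this is typically run through mergekit's `mergekit-yaml` entry point (for example, `mergekit-yaml config.yaml ./output-model-dir`) to reproduce the merged full-precision weights before quantization.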