---
base_model:
- SanjiWatsuki/Kunoichi-7B
- SanjiWatsuki/Kunoichi-DPO-v2-7B
library_name: transformers
tags:
- mergekit
- merge
---
# output-model-directory

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP (spherical linear interpolation) merge method, with [SanjiWatsuki/Kunoichi-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-7B) as the base model and an interpolation factor of t = 0.5, i.e. an even blend of the two models across all layers.

### Models Merged

The following models were included in the merge:
* [SanjiWatsuki/Kunoichi-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-7B)
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: SanjiWatsuki/Kunoichi-7B
        layer_range: [0, 32]
      - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: SanjiWatsuki/Kunoichi-7B
parameters:
  t:
    - value: 0.5
dtype: float16
```
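## Usage

The merged checkpoint can be loaded like any `transformers` causal language model. The snippet below is a minimal sketch: the path `./output-model-directory` is assumed to be the local mergekit output directory and should be replaced with the actual directory or Hub repository id of this merge.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed placeholder: the local mergekit output directory or the Hub repo id of this merge.
model_id = "./output-model-directory"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# Generate a short completion from a plain text prompt.
prompt = "Write a short haiku about spring."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```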