---
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Llama-8B
- NousResearch/Hermes-3-Llama-3.1-8B
- huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated
library_name: transformers
tags:
- mergekit
- merge
license: llama3.1
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

This is absolute gibberish. I highly suggest not using this model.

### Merge Method

This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) as a base.

### Models Merged

The following models were included in the merge:

* [deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B)
* [huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated](https://huggingface.co/huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: NousResearch/Hermes-3-Llama-3.1-8B
    # no parameters necessary for base model
  - model: huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated
    parameters:
      density: 0.4
      weight: 1
  - model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
    parameters:
      density: 0.5
      weight: 1
merge_method: dare_ties
base_model: NousResearch/Hermes-3-Llama-3.1-8B
parameters:
  normalize: false
  int8_mask: true
dtype: float16
```
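
A configuration like the one above is the kind of file you would feed to mergekit (for example via its `mergekit-yaml` entry point) to produce the merged weights locally. The snippet below is a minimal sketch of loading such a local merge output for inference with `transformers`; the `./merged-model` path, prompt, and generation settings are illustrative placeholders and are not part of the original card.

```python
# Minimal sketch: load a locally produced merge output with transformers.
# Assumption: "./merged-model" is whatever directory mergekit wrote the merge to.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./merged-model"  # placeholder output directory, not a published repo id

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # matches the dtype in the merge config
    device_map="auto",          # requires accelerate; remove to load on CPU
)

prompt = "Write a short haiku about model merging."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Given the warning above that this particular merge produces gibberish, treat any such loading code as a way to inspect the output yourself rather than as an endorsement of using the model.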