---
license: apache-2.0
base_model:
- lodrick-the-lafted/Olethros-8B
- lodrick-the-lafted/Limon-8B
- lodrick-the-lafted/Rummage-8B
- cgato/L3-TheSpice-8b-v0.8.3
- unsloth/llama-3-8b-Instruct
- Edgerunners/meta-llama-3-8b-instruct-hf-ortho-baukit-10fail-1000total
library_name: transformers
tags:
- mergekit
- merge
---

# Kudzu-8B

Fresh out of the mergekit-evolve kitchen, this is a merge of:

* [lodrick-the-lafted/Olethros-8B](https://huggingface.co/lodrick-the-lafted/Olethros-8B)
* [lodrick-the-lafted/Limon-8B](https://huggingface.co/lodrick-the-lafted/Limon-8B)
* [lodrick-the-lafted/Rummage-8B](https://huggingface.co/lodrick-the-lafted/Rummage-8B)
* [Edgerunners/meta-llama-3-8b-instruct-hf-ortho-baukit-10fail-1000total](https://huggingface.co/Edgerunners/meta-llama-3-8b-instruct-hf-ortho-baukit-10fail-1000total)
* [cgato/L3-TheSpice-8b-v0.8.3](https://huggingface.co/cgato/L3-TheSpice-8b-v0.8.3)

wmdp was used as the scoring task for the evolve run. In my limited testing, the model avoids the usual Llama-3 "Ahaha!" interjections while retaining a good portion of the intelligence. There are several ablated models in the mix, so don't be surprised if it gives you what you ask for.
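
A minimal loading sketch with 🤗 Transformers. The repo id and chat template below are assumptions, not stated on this card: the repo id `lodrick-the-lafted/Kudzu-8B` is hypothetical, and the example assumes the merge inherits the standard Llama-3 Instruct chat template from its base models.

```python
# Minimal usage sketch. The repo id is a hypothetical placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lodrick-the-lafted/Kudzu-8B"  # assumption: adjust to the actual repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # ~16 GB of weights for an 8B model in bf16
    device_map="auto",
)

# Assumes the Llama-3 Instruct chat template survived the merge.
messages = [{"role": "user", "content": "Give me a one-line summary of kudzu."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```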