---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- DreadPoor/Aurora_faustus-8B-LINEAR
- bunnycore/Llama-3.1-8B-TitanFusion-Mix
- Joseph717171/Llama-3.1-SuperNova-8B-Lite_TIES_with_Base
- nvidia/OpenMath2-Llama3.1-8B
- vicgalle/Configurable-Llama-3.1-8B-Instruct
---

# Best-Mix-Llama-3.1-8B

This is a test merge combining all of my favorite models.

Best-Mix-Llama-3.1-8B is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):

* [DreadPoor/Aurora_faustus-8B-LINEAR](https://huggingface.co/DreadPoor/Aurora_faustus-8B-LINEAR)
* [bunnycore/Llama-3.1-8B-TitanFusion-Mix](https://huggingface.co/bunnycore/Llama-3.1-8B-TitanFusion-Mix)
* [Joseph717171/Llama-3.1-SuperNova-8B-Lite_TIES_with_Base](https://huggingface.co/Joseph717171/Llama-3.1-SuperNova-8B-Lite_TIES_with_Base)
* [nvidia/OpenMath2-Llama3.1-8B](https://huggingface.co/nvidia/OpenMath2-Llama3.1-8B)
* [vicgalle/Configurable-Llama-3.1-8B-Instruct](https://huggingface.co/vicgalle/Configurable-Llama-3.1-8B-Instruct)

The models are combined with the TIES merge method, using bunnycore/Llama-3.1-8B-TitanFusion-Mix as the base model, as shown in the configuration below.

## 🧩 Configuration

```yaml
models:
  - model: DreadPoor/Aurora_faustus-8B-LINEAR
    parameters:
      density: 0.8
      weight: 0.8
  - model: bunnycore/Llama-3.1-8B-TitanFusion-Mix
    parameters:
      density: 0.8
      weight: 0.8
  - model: Joseph717171/Llama-3.1-SuperNova-8B-Lite_TIES_with_Base
    parameters:
      density: 0.8
      weight: 0.8
  - model: nvidia/OpenMath2-Llama3.1-8B
    parameters:
      density: 0.8
      weight: 0.8
  - model: vicgalle/Configurable-Llama-3.1-8B-Instruct
    parameters:
      density: 0.8
      weight: 0.8
merge_method: ties
base_model: bunnycore/Llama-3.1-8B-TitanFusion-Mix
parameters:
  normalize: false
  int8_mask: true
dtype: float16
```
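To reproduce the merge locally, here is a minimal sketch using mergekit's Python API, assuming the YAML above is saved as `config.yaml`; option names may vary between mergekit releases, so check the version you have installed:

```python
# Sketch: rebuild this merge with mergekit's Python API.
# Assumes the configuration above is saved as config.yaml.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML into a validated mergekit configuration.
with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./Best-Mix-Llama-3.1-8B",  # output directory for the merged weights
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # run tensor ops on GPU if one is available
        copy_tokenizer=True,             # copy the base model's tokenizer into the output
        lazy_unpickle=True,              # stream checkpoints to reduce peak RAM
        low_cpu_memory=True,             # avoid keeping every source model resident in RAM
    ),
)
```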
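## 💻 Usage

A minimal inference sketch with 🤗 Transformers (`pip install transformers accelerate`). The repository id below is a placeholder for wherever this merge is published; substitute the actual namespace:

```python
# Sketch: quick text generation with the merged model.
# NOTE: "your-username/Best-Mix-Llama-3.1-8B" is a placeholder repo id.
import torch
from transformers import AutoTokenizer, pipeline

model_id = "your-username/Best-Mix-Llama-3.1-8B"  # placeholder; replace with the real repo

tokenizer = AutoTokenizer.from_pretrained(model_id)
generator = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.float16,  # matches the dtype used for the merge
    device_map="auto",
)

# Llama 3.1 instruct models expect the chat template, applied here via the tokenizer.
messages = [{"role": "user", "content": "What is a large language model?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

outputs = generator(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```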