---
base_model:
- flemmingmiguel/MBX-7B-v3
- paulml/NeuralOmniWestBeaglake-7B
- FelixChao/Faraday-7B
- paulml/NeuralOmniBeagleMBX-v3-7B
tags:
- mergekit
- merge
license: apache-2.0
language:
- en
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/eDLmpTkM4vuk8HiQcUzWv.png)

# To see what will happen.

[Join our Discord!](https://discord.gg/aEGuFph9)

[GGUF FILES HERE](https://huggingface.co/Kquant03/Samlagast-7B-GGUF)

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

### Merge Method

This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, with [paulml/NeuralOmniBeagleMBX-v3-7B](https://huggingface.co/paulml/NeuralOmniBeagleMBX-v3-7B) as the base model.

### Models Merged

The following models were included in the merge:
* [flemmingmiguel/MBX-7B-v3](https://huggingface.co/flemmingmiguel/MBX-7B-v3)
* [paulml/NeuralOmniWestBeaglake-7B](https://huggingface.co/paulml/NeuralOmniWestBeaglake-7B)
* [FelixChao/Faraday-7B](https://huggingface.co/FelixChao/Faraday-7B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: paulml/NeuralOmniWestBeaglake-7B
    parameters:
      weight: 1
  - model: FelixChao/Faraday-7B
    parameters:
      weight: 1
  - model: flemmingmiguel/MBX-7B-v3
    parameters:
      weight: 1
  - model: paulml/NeuralOmniBeagleMBX-v3-7B
    parameters:
      weight: 1
merge_method: task_arithmetic
base_model: paulml/NeuralOmniBeagleMBX-v3-7B
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
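For readers curious what `task_arithmetic` actually computes, here is a minimal sketch of the idea from the linked paper: each fine-tuned model contributes a "task vector" (its weights minus the base model's weights), and the merged model is the base plus the weighted sum of those vectors. With `normalize: true` and all weights set to 1, as above, the summed vector is divided by the total weight. The function and variable names below are illustrative, not mergekit internals.

```python
import torch

def task_arithmetic_merge(base_sd, finetuned_sds, weights, normalize=True):
    """Sketch of task arithmetic over state dicts:
    merged = base + sum_i(w_i * (theta_i - base)), divided by sum(w_i) if normalized."""
    merged = {}
    total = sum(weights)
    for name, base_tensor in base_sd.items():
        delta = torch.zeros_like(base_tensor, dtype=torch.float32)
        for sd, w in zip(finetuned_sds, weights):
            # Task vector: fine-tuned weights minus base weights, scaled by its weight.
            delta += w * (sd[name].float() - base_tensor.float())
        if normalize and total != 0:
            delta /= total
        merged[name] = (base_tensor.float() + delta).to(base_tensor.dtype)
    return merged
```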
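To reproduce the merge yourself, the sketch below feeds the YAML above through mergekit's Python API (the `mergekit-yaml` CLI is the equivalent one-liner). This assumes a recent mergekit release; if the API has changed, check the mergekit README. `OUTPUT_PATH` and `config.yaml` are placeholders for your own paths.

```python
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

OUTPUT_PATH = "./Samlagast-7B"  # where the merged model will be written

# Load the YAML configuration shown above from disk.
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```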
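And a quick inference example using standard `transformers` APIs. The repo id is inferred from the GGUF link above and is an assumption; substitute a local path to your own merged output if needed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Kquant03/Samlagast-7B"  # assumed repo id; or a local path to the merge output
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("The key idea of model merging is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```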