---
base_model:
- grimjim/Magnolia-v1-Gemma2-8k-9B
- grimjim/Magot-v2-Gemma2-8k-9B
library_name: transformers
pipeline_tag: text-generation
tags:
- mergekit
- merge
license: gemma
---
# Magnolia-v2-Gemma2-8k-9B

This repo contains a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

In my testing, this model feels smart and attentive to context, balancing creative output with instruction following. Tested at temp=1 and minP=0.01.

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:
* [grimjim/Magnolia-v1-Gemma2-8k-9B](https://huggingface.co/grimjim/Magnolia-v1-Gemma2-8k-9B)
* [grimjim/Magot-v2-Gemma2-8k-9B](https://huggingface.co/grimjim/Magot-v2-Gemma2-8k-9B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: grimjim/Magot-v2-Gemma2-8k-9B
  - model: grimjim/Magnolia-v1-Gemma2-8k-9B
merge_method: slerp
base_model: grimjim/Magot-v2-Gemma2-8k-9B
parameters:
  t:
    - value: 0.2
dtype: bfloat16
```
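For intuition, SLERP interpolates between two weight tensors along the arc between them rather than along a straight line, so the interpolated result preserves magnitude better than plain averaging. The sketch below is a minimal, illustrative NumPy implementation of spherical linear interpolation on flattened weight vectors; mergekit's actual per-tensor implementation differs in details (handling of shapes, dtypes, and degenerate cases), so treat this as a conceptual model only. With `t: 0.2` as in the config above, the merged weights stay close to the base model (Magot-v2), with a modest pull toward Magnolia-v1.

```python
import numpy as np

def slerp(v0, v1, t, eps=1e-8):
    """Spherical linear interpolation between two flat weight vectors.

    Interpolates along the arc between v0 and v1; t=0 returns v0,
    t=1 returns v1. Falls back to linear interpolation when the
    vectors are nearly parallel (the arc is ill-defined there).
    """
    # Angle between the two vectors, from their normalized dot product
    v0n = v0 / np.linalg.norm(v0)
    v1n = v1 / np.linalg.norm(v1)
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:  # nearly parallel: lerp is numerically safer
        return (1.0 - t) * v0 + t * v1
    s = np.sin(theta)
    return (np.sin((1.0 - t) * theta) / s) * v0 + (np.sin(t * theta) / s) * v1

# Toy example: t=0.2 keeps the result close to the "base" vector
base = np.array([1.0, 0.0, 0.0])
other = np.array([0.0, 1.0, 0.0])
merged = slerp(base, other, 0.2)
```

For orthogonal unit vectors as above, the result stays on the unit sphere, which is the property that distinguishes SLERP from a straight linear blend (whose result would have norm less than 1).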