---
base_model: grimjim/Llama-3-Oasis-v1-OAS-8B
library_name: transformers
quanted_by: grimjim
pipeline_tag: text-generation
license: cc-by-nc-4.0
---
# Llama-3-Oasis-v1-OAS-8B-8bpw_h8_exl2

This is an 8bpw exl2 quant of [grimjim/Llama-3-Oasis-v1-OAS-8B](https://huggingface.co/grimjim/Llama-3-Oasis-v1-OAS-8B).

The source model is a merge of pre-trained language models created with [mergekit](https://github.com/cg123/mergekit).

Built with Meta Llama 3.

## Merge Details

### Merge Method

This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, with [mlabonne/NeuralDaredevil-8B-abliterated](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated) as the base model.

### Models Merged

The following models were included in the merge:

* [NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS)
* [Hastagaras/Halu-OAS-8B-Llama3](https://huggingface.co/Hastagaras/Halu-OAS-8B-Llama3)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: mlabonne/NeuralDaredevil-8B-abliterated
dtype: bfloat16
merge_method: task_arithmetic
slices:
  - sources:
      - layer_range: [0, 32]
        model: mlabonne/NeuralDaredevil-8B-abliterated
      - layer_range: [0, 32]
        model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
        parameters:
          weight: 0.3
      - layer_range: [0, 32]
        model: Hastagaras/Halu-OAS-8B-Llama3
        parameters:
          weight: 0.3
```
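For intuition, task arithmetic merges by adding weighted "task vectors" (the per-tensor difference between a fine-tuned model and the base) onto the base weights. The sketch below illustrates the arithmetic on toy NumPy tensors; it is a simplified illustration of the idea, not the mergekit implementation, and the tensor names and helper function are hypothetical.

```python
import numpy as np

def task_arithmetic_merge(base, finetuned, weights):
    """Illustrative merge: merged = base + sum_i w_i * (finetuned_i - base),
    applied per parameter tensor. `base` and each entry of `finetuned` are
    dicts mapping parameter names to arrays of the same shape."""
    merged = {}
    for name, base_w in base.items():
        delta = sum(w * (ft[name] - base_w) for ft, w in zip(finetuned, weights))
        merged[name] = base_w + delta
    return merged

# Toy single-tensor "models" standing in for full checkpoints.
base = {"w": np.array([1.0, 2.0])}
model_a = {"w": np.array([2.0, 2.0])}  # task vector: [1, 0]
model_b = {"w": np.array([1.0, 4.0])}  # task vector: [0, 2]

# Weights mirror the 0.3 / 0.3 values from the YAML config above.
merged = task_arithmetic_merge(base, [model_a, model_b], [0.3, 0.3])
# merged["w"] -> [1.3, 2.6]
```

Note that the base model's weights pass through at full strength; only the weighted deltas from the other two models are added on top.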