---
base_model:
- Nexesenex/pankajmathur_orca_mini_v9_6_1B-instruct-Abliterated-LPL
- meditsolutions/Llama-3.2-SUN-1B-chat
library_name: transformers
tags:
- mergekit
- merge
license: llama3.2
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [Arcee Fusion](https://arcee.ai) merge method, with [meditsolutions/Llama-3.2-SUN-1B-chat](https://huggingface.co/meditsolutions/Llama-3.2-SUN-1B-chat) as the base model.

### Models Merged

The following model was included in the merge:
* [Nexesenex/pankajmathur_orca_mini_v9_6_1B-instruct-Abliterated-LPL](https://huggingface.co/Nexesenex/pankajmathur_orca_mini_v9_6_1B-instruct-Abliterated-LPL)

### Configuration

The following YAML configuration was used to produce this model (it can be re-run with mergekit's `mergekit-yaml` CLI, pointing it at this file and an output directory):

```yaml
merge_method: arcee_fusion
models:
  - model: Nexesenex/pankajmathur_orca_mini_v9_6_1B-instruct-Abliterated-LPL
    parameters:
      weight: 1.0
base_model: meditsolutions/Llama-3.2-SUN-1B-chat
dtype: bfloat16
out_dtype: bfloat16
parameters:
  int8_mask: true
  normalize: true
  rescale: false
chat_template: auto
tokenizer:
  source: union
```
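### Usage

A minimal sketch for loading the merged model with `transformers`. The repository id below is a placeholder, since this card does not name the published repo; substitute the actual id. The `bfloat16` dtype matches the merge's `out_dtype`, and because the config sets `chat_template: auto`, the merged tokenizer is expected to carry a usable chat template.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repository id; replace with the actual repo for this merge.
model_id = "your-username/merge"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches out_dtype: bfloat16 in the merge config
    device_map="auto",
)

# Format a single-turn conversation with the tokenizer's chat template.
messages = [{"role": "user", "content": "Summarize model merging in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```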