---
base_model:
- sometimesanotion/Lamarck-14B-v0.7
- sometimesanotion/LoRA-256-Base-Qwenvergence
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the Passthrough merge method, with [sometimesanotion/Lamarck-14B-v0.7](https://huggingface.co/sometimesanotion/Lamarck-14B-v0.7) + [sometimesanotion/LoRA-256-Base-Qwenvergence](https://huggingface.co/sometimesanotion/LoRA-256-Base-Qwenvergence) as the base.

### Models Merged

The following models were included in the merge:

No additional models were merged; the passthrough merge only applies the LoRA-256-Base-Qwenvergence adapter to the Lamarck-14B-v0.7 base listed above.

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: sometimesanotion/Lamarck-14B-v0.7+sometimesanotion/LoRA-256-Base-Qwenvergence
dtype: float16
merge_method: passthrough
models:
  - model: sometimesanotion/Lamarck-14B-v0.7+sometimesanotion/LoRA-256-Base-Qwenvergence
```
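
The merged checkpoint can be loaded like any other `transformers` causal language model. Below is a minimal sketch; the model path is a placeholder (this repository's Hub id, or a local directory produced by running the configuration above through mergekit's `mergekit-yaml` command), and the generation settings are illustrative rather than recommended.

```python
# Minimal usage sketch, assuming a placeholder model path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/merged-model"  # placeholder: Hub repo id or local mergekit output dir

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # matches the dtype declared in the merge config
    device_map="auto",
)

# Short generation to sanity-check the merged weights.
inputs = tokenizer("Briefly explain model merging.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```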