---
base_model:
- Qwen/Qwen2.5-14B
- Qwen/Qwen2.5-14B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---

# A Fishy Model

qwen-carpmuscle-r-v0.3 was made using Rombodawg's Shared Continuous Finetuning method. First, qwen-carpmuscle-v0.3 was trained with Unsloth's continued pretraining on the ChatML format with 24k context, starting from [unsloth/Qwen2.5-14B-bnb-4bit](https://huggingface.co/unsloth/Qwen2.5-14B-bnb-4bit). Then qwen-carpmuscle-v0.3 was merged with [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) and [Qwen/Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) using [TIES](https://arxiv.org/abs/2306.01708) to create this model.

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: qwen-carpmuscle-v0.3
    parameters:
      weight: 1
      density: 1
  - model: Qwen/Qwen2.5-14B-Instruct
    parameters:
      weight: 1
      density: 1
merge_method: ties
base_model: Qwen/Qwen2.5-14B
parameters:
  weight: 1
  density: 1
  normalize: true
  int8_mask: true
tokenizer_source: qwen-carpmuscle-v0.3
dtype: bfloat16
```

- **Developed by:** TheTsar1209
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-14B-bnb-4bit

This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
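Since the model was finetuned on the ChatML format, prompts should follow that template at inference time. A minimal sketch of what that format looks like (the `to_chatml` helper is hypothetical, for illustration only; in practice the tokenizer's `apply_chat_template` in transformers handles this automatically):

```python
def to_chatml(messages):
    """Render a list of {role, content} dicts into a ChatML prompt string."""
    parts = []
    for m in messages:
        # Each turn is delimited by <|im_start|> and <|im_end|> special tokens.
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # End with an opened assistant turn so the model generates the reply.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

With transformers, the equivalent is `tokenizer.apply_chat_template(messages, add_generation_prompt=True)`.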