---
base_model:
- Qwen/Qwen2.5-14B-Instruct-1M
---
A simple unalignment fine-tune on ~900k tokens, aimed at making the model more compliant and willing to handle user requests.
This is the same unalignment training seen in [concedo/Beepo-22B](https://huggingface.co/concedo/Beepo-22B), so big thanks to concedo for the dataset.
The chat template is the same as the original model's: ChatML.
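
Below is a minimal usage sketch with 🤗 Transformers, relying on the tokenizer's built-in ChatML template via `apply_chat_template`. The repository id is a placeholder; substitute this model's actual repo name.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- replace with this model's actual repository name.
model_id = "your-username/your-unaligned-qwen2.5-14b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

# The tokenizer's chat template renders this as ChatML
# (<|im_start|>role ... <|im_end|>) before tokenization.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```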