What is the advantage of fine-tuning on top of the coder-instruct model rather than the instruct model?

#4
by sangmini - opened

Hello!

I'm curious why models are trained on top of Qwen/Qwen2.5-Coder-7B-Instruct rather than the standard Qwen/Qwen2.5-7B-Instruct. I'd like to understand whether there are references or evidence suggesting that coder-instruct models serve as better base models for fine-tuning.

Thanks
