Model name: ZYH-LLM-Qwen2.5-14B πŸŽ‰

This model's performance is absolutely phenomenal, surpassing all my previously released merged models! πŸš€

To highlight its uniqueness, I've created a brand-new series, separate from all previous releases! πŸ’«

πŸ“… Release date: February 5, 2025

🧩 Merging methods: della and sce (an illustrative merge-config sketch follows the model list below)

πŸ› οΈ Models used:

  • Qwen2.5-Coder-14B
  • Qwen2.5-Coder-14B-Instruct
  • Qwen2.5-14B-Instruct
  • Qwen2.5-14B-Instruct-1M
  • Qwen2.5-14B
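
As a rough illustration of what a della merge stage looks like in practice, here's a minimal mergekit sketch. It is not the published ZYH-LLM recipe: the model list, densities, and weights are placeholders, the sce stage is omitted, and the Python entry points are assumed from mergekit's documented usage.

```python
# Illustrative sketch only: a single-stage della merge via mergekit's Python API.
# The real ZYH-LLM-Qwen2.5-14B recipe (della + sce, exact models, densities,
# weights) is not published in this card, so the values below are placeholders.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

DELLA_CONFIG = """
merge_method: della
base_model: Qwen/Qwen2.5-14B
models:
  - model: Qwen/Qwen2.5-14B-Instruct
    parameters: {density: 0.5, weight: 0.5}   # placeholder values
  - model: Qwen/Qwen2.5-14B-Instruct-1M
    parameters: {density: 0.5, weight: 0.5}   # placeholder values
dtype: bfloat16
"""

merge_config = MergeConfiguration.model_validate(yaml.safe_load(DELLA_CONFIG))
run_merge(
    merge_config,
    "./ZYH-LLM-Qwen2.5-14B-draft",           # output directory
    options=MergeOptions(copy_tokenizer=True),
)
```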

✨ Coming soon: GGUF format version!

πŸ“₯ Don't miss out on trying it; stay tuned for the download! πŸš¨πŸ’»
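
Until the GGUF release lands, the safetensors weights can be tried directly with Transformers. A minimal sketch, assuming the repository id YOYO-AI/ZYH-LLM-Qwen2.5-14B and the default chat template; generation settings are illustrative:

```python
# Minimal sketch: run the full-precision model with Transformers.
# The repo id is assumed from this card; generation settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "YOYO-AI/ZYH-LLM-Qwen2.5-14B"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Summarize what a model merge is in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```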

πŸ“¦ GGUF details:

  • Model size: 14.8B params
  • Architecture: qwen2
  • Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit
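
Once you've downloaded one of the quantized files, a minimal llama-cpp-python sketch for running it locally looks like this. The file name and the context/offload settings are assumptions; use whichever quantization you actually grabbed from the repo's file list.

```python
# Minimal sketch: run one of the GGUF quantizations locally with llama-cpp-python.
# The file name below is an assumption; point it at the quantization you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./ZYH-LLM-Qwen2.5-14B-Q4_K_M.gguf",  # assumed local file name
    n_ctx=8192,        # context window; adjust to your RAM/VRAM budget
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
    max_tokens=256,
)
print(result["choices"][0]["message"]["content"])
```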
