---
language:
- zh
- en
tags:
- qwen
- chat
- 中文
model_name: Qwen Chat 14B
model_type: qwen
pipeline_tag: text-generation
quantized_by: about0
---

# Qwen Chat 14B - GGUF

This repository contains GGUF conversions and quantizations of [Qwen 14B Chat](https://huggingface.co/Qwen/Qwen-14B-Chat), compatible with llama.cpp.

## Explanation of quantization methods
<details>
  <summary>Click to see details</summary>

Methods:

* type-0 (Q4_0, Q5_0, Q8_0) - weights `w` are obtained from quants `q` using `w = d * q`, where `d` is the block scale.
* type-1 (Q4_1, Q5_1) - weights are given by `w = d * q + m`, where `m` is the block minimum (both formulas are illustrated in the sketch after this details block).

The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.

This is exposed via llama.cpp quantization types that define various "quantization mixes" as follows:

* LLAMA_FTYPE_MOSTLY_Q2_K - uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors
* LLAMA_FTYPE_MOSTLY_Q3_K_S - uses GGML_TYPE_Q3_K for all tensors
* LLAMA_FTYPE_MOSTLY_Q3_K_M - uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K
* LLAMA_FTYPE_MOSTLY_Q3_K_L - uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K
* LLAMA_FTYPE_MOSTLY_Q4_K_S - uses GGML_TYPE_Q4_K for all tensors
* LLAMA_FTYPE_MOSTLY_Q4_K_M - uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K
* LLAMA_FTYPE_MOSTLY_Q5_K_S - uses GGML_TYPE_Q5_K for all tensors
* LLAMA_FTYPE_MOSTLY_Q5_K_M - uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K
* LLAMA_FTYPE_MOSTLY_Q6_K - uses 6-bit quantization (GGML_TYPE_Q6_K) for all tensors

</details>
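To make the two dequantization formulas and the bpw figures concrete, here is a minimal, illustrative Python sketch. It is not llama.cpp's actual implementation (the real block layouts live in ggml's C structs); in particular, the fp16 super-block scale/min overhead in the bpw arithmetic is an assumption that happens to reproduce the numbers listed above.

```python
import numpy as np

def dequantize_type0(q: np.ndarray, d: float) -> np.ndarray:
    """type-0 (Q4_0, Q5_0, Q8_0): w = d * q."""
    return d * q

def dequantize_type1(q: np.ndarray, d: float, m: float) -> np.ndarray:
    """type-1 (Q4_1, Q5_1): w = d * q + m."""
    return d * q + m

# Back-of-the-envelope bpw check for GGML_TYPE_Q4_K ("type-1"):
# one super-block holds 8 blocks x 32 weights = 256 weights.
q4_k_bits = (
    256 * 4      # 4-bit quants for 256 weights
    + 8 * 2 * 6  # 6-bit scale + 6-bit min for each of the 8 blocks
    + 2 * 16     # fp16 super-block scale and min (assumed layout)
)
print(q4_k_bits / 256)  # -> 4.5 bpw, as listed above

# Same check for GGML_TYPE_Q6_K ("type-0", so no block mins):
# 16 blocks x 16 weights = 256 weights.
q6_k_bits = (
    256 * 6    # 6-bit quants
    + 16 * 8   # 8-bit scale for each of the 16 blocks
    + 16       # fp16 super-block scale (assumed layout)
)
print(q6_k_bits / 256)  # -> 6.5625 bpw, as listed above
```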
## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [qwen-chat-14B-Q2_K.gguf](https://huggingface.co/about0/qwen-chat-GGUF-14B/blob/main/qwen-chat-14B-Q2_K.gguf) | Q2_K | 2 | 6.2 GB | 9.1 GB | smallest, significant quality loss - not recommended for most purposes |
| [qwen-chat-14B-Q3_K_S.gguf](https://huggingface.co/about0/qwen-chat-GGUF-14B/blob/main/qwen-chat-14B-Q3_K_S.gguf) | Q3_K_S | 3 | 6.5 GB | 9.4 GB | very small, high quality loss |
| [qwen-chat-14B-Q3_K_M.gguf](https://huggingface.co/about0/qwen-chat-GGUF-14B/blob/main/qwen-chat-14B-Q3_K_M.gguf) | Q3_K_M | 3 | 7.2 GB | 10.1 GB | very small, high quality loss |
| [qwen-chat-14B-Q3_K_L.gguf](https://huggingface.co/about0/qwen-chat-GGUF-14B/blob/main/qwen-chat-14B-Q3_K_L.gguf) | Q3_K_L | 3 | 7.5 GB | 10.4 GB | small, substantial quality loss |
| [qwen-chat-14B-Q4_0.gguf](https://huggingface.co/about0/qwen-chat-GGUF-14B/blob/main/qwen-chat-14B-Q4_0.gguf) | Q4_0 | 4 | 7.7 GB | 10.6 GB | legacy; small, very high quality loss - prefer using Q3_K_L |
| [qwen-chat-14B-Q4_1.gguf](https://huggingface.co/about0/qwen-chat-GGUF-14B/blob/main/qwen-chat-14B-Q4_1.gguf) | Q4_1 | 4 | 8.4 GB | 11.3 GB | legacy; small, very high quality loss - prefer using Q4_K_S |
| [qwen-chat-14B-Q4_K_S.gguf](https://huggingface.co/about0/qwen-chat-GGUF-14B/blob/main/qwen-chat-14B-Q4_K_S.gguf) | Q4_K_S | 4 | 8.0 GB | 10.9 GB | small, greater quality loss |
| [qwen-chat-14B-Q4_K_M.gguf](https://huggingface.co/about0/qwen-chat-GGUF-14B/blob/main/qwen-chat-14B-Q4_K_M.gguf) | Q4_K_M | 4 | 8.9 GB | 11.8 GB | medium, balanced quality - recommended |
| [qwen-chat-14B-Q5_0.gguf](https://huggingface.co/about0/qwen-chat-GGUF-14B/blob/main/qwen-chat-14B-Q5_0.gguf) | Q5_0 | 5 | 9.2 GB | 12.1 GB | legacy; medium, balanced quality - prefer using Q5_K_M |
| [qwen-chat-14B-Q5_1.gguf](https://huggingface.co/about0/qwen-chat-GGUF-14B/blob/main/qwen-chat-14B-Q5_1.gguf) | Q5_1 | 5 | 10 GB | 12.9 GB | legacy; medium, balanced quality - prefer using Q5_K_M |
| [qwen-chat-14B-Q5_K_S.gguf](https://huggingface.co/about0/qwen-chat-GGUF-14B/blob/main/qwen-chat-14B-Q5_K_S.gguf) | Q5_K_S | 5 | 9.4 GB | 12.3 GB | large, low quality loss - recommended |
| [qwen-chat-14B-Q5_K_M.gguf](https://huggingface.co/about0/qwen-chat-GGUF-14B/blob/main/qwen-chat-14B-Q5_K_M.gguf) | Q5_K_M | 5 | 11 GB | 13.9 GB | large, very low quality loss - recommended |
| [qwen-chat-14B-Q6_K.gguf](https://huggingface.co/about0/qwen-chat-GGUF-14B/blob/main/qwen-chat-14B-Q6_K.gguf) | Q6_K | 6 | 12 GB | 14.9 GB | very large, extremely low quality loss |
| [qwen-chat-14B-Q8_0.gguf](https://huggingface.co/about0/qwen-chat-GGUF-14B/blob/main/qwen-chat-14B-Q8_0.gguf) | Q8_0 | 8 | 15 GB | 17.9 GB | very large, extremely low quality loss - not recommended |
| [qwen-chat-14B-f16.gguf](https://huggingface.co/about0/qwen-chat-GGUF-14B/blob/main/qwen-chat-14B-f16.gguf) | f16 | 16 | 27 GB | 29.9 GB | very large, no quality loss - not recommended |

### Model Sources

- **Repository:** [Qwen-14B-Chat](https://huggingface.co/Qwen/Qwen-14B-Chat)
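As a quick way to try one of the files above, here is a hedged sketch using the `llama-cpp-python` bindings together with `huggingface_hub`. The repo id and file name are taken from the table (the recommended Q4_K_M file); the context size, sampling parameters, and GPU offload setting are illustrative assumptions. Qwen-Chat models expect the ChatML prompt format (`<|im_start|>` / `<|im_end|>`), which is written out by hand here.

```python
# pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the recommended Q4_K_M file from this repo.
model_path = hf_hub_download(
    repo_id="about0/qwen-chat-GGUF-14B",
    filename="qwen-chat-14B-Q4_K_M.gguf",
)

llm = Llama(
    model_path=model_path,
    n_ctx=2048,      # context window; raise it if you have the RAM
    n_gpu_layers=0,  # set > 0 to offload layers with a GPU-enabled build
)

# Qwen-Chat uses the ChatML prompt format.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nGive me a one-sentence introduction to "
    "large language models.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```

Picking a different quant from the table only changes the `filename` argument; the "Max RAM required" column is a guide to which one will fit on your machine.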