CISCai committed · Commit b60ff27 · verified · 1 Parent(s): 1142221

Added links to full context YaRN-enabled GGUFs

Files changed (1):
  1. README.md +13 -11
README.md CHANGED
@@ -33,6 +33,8 @@ Quantization was done with an importance matrix that was trained for ~1M tokens
 
 Fill-in-Middle tokens are automatically detected and supported as of commit [11ac980](https://github.com/ggerganov/llama.cpp/commit/11ac9800aff532715a5bc7991062c68ba3472e6e), see [example](#simple-llama-cpp-python-example-fill-in-middle-code).
 
+**Update January 6th 2025**: Added links to full context YaRN-enabled GGUFs (using [GGUF Editor](https://huggingface.co/spaces/CISCai/gguf-editor)).
+
 <!-- description end -->
 
 
@@ -86,17 +88,17 @@ Refer to the Provided Files table below to see what files use which methods, and
 
 | Name | Quant method | Bits | Size | Max RAM required | Use case |
 | ---- | ---- | ---- | ---- | ---- | ----- |
-| [Qwen2.5-Coder-32B-Instruct.IQ1_S.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-32B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct.IQ1_S.gguf) | IQ1_S | 1 | 6.8 GB| 7.8 GB | smallest, significant quality loss |
-| [Qwen2.5-Coder-32B-Instruct.IQ1_M.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-32B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct.IQ1_M.gguf) | IQ1_M | 1 | 7.4 GB| 8.4 GB | very small, significant quality loss |
-| [Qwen2.5-Coder-32B-Instruct.IQ2_XXS.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-32B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct.IQ2_XXS.gguf) | IQ2_XXS | 2 | 8.4 GB| 9.4 GB | very small, high quality loss |
-| [Qwen2.5-Coder-32B-Instruct.IQ2_XS.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-32B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct.IQ2_XS.gguf) | IQ2_XS | 2 | 9.3 GB| 10.3 GB | very small, high quality loss |
-| [Qwen2.5-Coder-32B-Instruct.IQ2_S.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-32B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct.IQ2_S.gguf) | IQ2_S | 2 | 9.7 GB| 10.7 GB | small, substantial quality loss |
-| [Qwen2.5-Coder-32B-Instruct.IQ2_M.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-32B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct.IQ2_M.gguf) | IQ2_M | 2 | 10.5 GB| 11.5 GB | small, greater quality loss |
-| [Qwen2.5-Coder-32B-Instruct.IQ3_XXS.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-32B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct.IQ3_XXS.gguf) | IQ3_XXS | 3 | 11.9 GB| 12.9 GB | very small, high quality loss |
-| [Qwen2.5-Coder-32B-Instruct.IQ3_XS.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-32B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct.IQ3_XS.gguf) | IQ3_XS | 3 | 12.8 GB| 13.8 GB | small, substantial quality loss |
-| [Qwen2.5-Coder-32B-Instruct.IQ3_S.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-32B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct.IQ3_S.gguf) | IQ3_S | 3 | 13.4 GB| 14.4 GB | small, greater quality loss |
-| [Qwen2.5-Coder-32B-Instruct.IQ3_M.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-32B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct.IQ3_M.gguf) | IQ3_M | 3 | 13.8 GB| 14.8 GB | medium, balanced quality - recommended |
-| [Qwen2.5-Coder-32B-Instruct.IQ4_XS.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-32B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct.IQ4_XS.gguf) | IQ4_XS | 4 | 16.5 GB| 17.5 GB | small, substantial quality loss |
+| [Qwen2.5-Coder-32B-Instruct.IQ1_S.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-32B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct.IQ1_S.gguf) ([with YaRN](https://ciscai-gguf-editor.hf.space/download/CISCai/Qwen2.5-Coder-32B-Instruct-SOTA-GGUF/Qwen2.5-Coder-32B-Instruct.IQ1_S.gguf?branch=main&add=%5B%22qwen2.context_length%22,4,131072%5D&add=%5B%22qwen2.rope.scaling.type%22,8,%22yarn%22%5D&add=%5B%22qwen2.rope.scaling.factor%22,6,4%5D&add=%5B%22qwen2.rope.scaling.original_context_length%22,4,32768%5D)) | IQ1_S | 1 | 6.8 GB| 7.8 GB | smallest, significant quality loss |
+| [Qwen2.5-Coder-32B-Instruct.IQ1_M.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-32B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct.IQ1_M.gguf) ([with YaRN](https://ciscai-gguf-editor.hf.space/download/CISCai/Qwen2.5-Coder-32B-Instruct-SOTA-GGUF/Qwen2.5-Coder-32B-Instruct.IQ1_M.gguf?branch=main&add=%5B%22qwen2.context_length%22,4,131072%5D&add=%5B%22qwen2.rope.scaling.type%22,8,%22yarn%22%5D&add=%5B%22qwen2.rope.scaling.factor%22,6,4%5D&add=%5B%22qwen2.rope.scaling.original_context_length%22,4,32768%5D)) | IQ1_M | 1 | 7.4 GB| 8.4 GB | very small, significant quality loss |
+| [Qwen2.5-Coder-32B-Instruct.IQ2_XXS.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-32B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct.IQ2_XXS.gguf) ([with YaRN](https://ciscai-gguf-editor.hf.space/download/CISCai/Qwen2.5-Coder-32B-Instruct-SOTA-GGUF/Qwen2.5-Coder-32B-Instruct.IQ2_XXS.gguf?branch=main&add=%5B%22qwen2.context_length%22,4,131072%5D&add=%5B%22qwen2.rope.scaling.type%22,8,%22yarn%22%5D&add=%5B%22qwen2.rope.scaling.factor%22,6,4%5D&add=%5B%22qwen2.rope.scaling.original_context_length%22,4,32768%5D)) | IQ2_XXS | 2 | 8.4 GB| 9.4 GB | very small, high quality loss |
+| [Qwen2.5-Coder-32B-Instruct.IQ2_XS.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-32B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct.IQ2_XS.gguf) ([with YaRN](https://ciscai-gguf-editor.hf.space/download/CISCai/Qwen2.5-Coder-32B-Instruct-SOTA-GGUF/Qwen2.5-Coder-32B-Instruct.IQ2_XS.gguf?branch=main&add=%5B%22qwen2.context_length%22,4,131072%5D&add=%5B%22qwen2.rope.scaling.type%22,8,%22yarn%22%5D&add=%5B%22qwen2.rope.scaling.factor%22,6,4%5D&add=%5B%22qwen2.rope.scaling.original_context_length%22,4,32768%5D)) | IQ2_XS | 2 | 9.3 GB| 10.3 GB | very small, high quality loss |
+| [Qwen2.5-Coder-32B-Instruct.IQ2_S.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-32B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct.IQ2_S.gguf) ([with YaRN](https://ciscai-gguf-editor.hf.space/download/CISCai/Qwen2.5-Coder-32B-Instruct-SOTA-GGUF/Qwen2.5-Coder-32B-Instruct.IQ2_S.gguf?branch=main&add=%5B%22qwen2.context_length%22,4,131072%5D&add=%5B%22qwen2.rope.scaling.type%22,8,%22yarn%22%5D&add=%5B%22qwen2.rope.scaling.factor%22,6,4%5D&add=%5B%22qwen2.rope.scaling.original_context_length%22,4,32768%5D)) | IQ2_S | 2 | 9.7 GB| 10.7 GB | small, substantial quality loss |
+| [Qwen2.5-Coder-32B-Instruct.IQ2_M.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-32B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct.IQ2_M.gguf) ([with YaRN](https://ciscai-gguf-editor.hf.space/download/CISCai/Qwen2.5-Coder-32B-Instruct-SOTA-GGUF/Qwen2.5-Coder-32B-Instruct.IQ2_M.gguf?branch=main&add=%5B%22qwen2.context_length%22,4,131072%5D&add=%5B%22qwen2.rope.scaling.type%22,8,%22yarn%22%5D&add=%5B%22qwen2.rope.scaling.factor%22,6,4%5D&add=%5B%22qwen2.rope.scaling.original_context_length%22,4,32768%5D)) | IQ2_M | 2 | 10.5 GB| 11.5 GB | small, greater quality loss |
+| [Qwen2.5-Coder-32B-Instruct.IQ3_XXS.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-32B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct.IQ3_XXS.gguf) ([with YaRN](https://ciscai-gguf-editor.hf.space/download/CISCai/Qwen2.5-Coder-32B-Instruct-SOTA-GGUF/Qwen2.5-Coder-32B-Instruct.IQ3_XXS.gguf?branch=main&add=%5B%22qwen2.context_length%22,4,131072%5D&add=%5B%22qwen2.rope.scaling.type%22,8,%22yarn%22%5D&add=%5B%22qwen2.rope.scaling.factor%22,6,4%5D&add=%5B%22qwen2.rope.scaling.original_context_length%22,4,32768%5D)) | IQ3_XXS | 3 | 11.9 GB| 12.9 GB | very small, high quality loss |
+| [Qwen2.5-Coder-32B-Instruct.IQ3_XS.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-32B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct.IQ3_XS.gguf) ([with YaRN](https://ciscai-gguf-editor.hf.space/download/CISCai/Qwen2.5-Coder-32B-Instruct-SOTA-GGUF/Qwen2.5-Coder-32B-Instruct.IQ3_XS.gguf?branch=main&add=%5B%22qwen2.context_length%22,4,131072%5D&add=%5B%22qwen2.rope.scaling.type%22,8,%22yarn%22%5D&add=%5B%22qwen2.rope.scaling.factor%22,6,4%5D&add=%5B%22qwen2.rope.scaling.original_context_length%22,4,32768%5D)) | IQ3_XS | 3 | 12.8 GB| 13.8 GB | small, substantial quality loss |
+| [Qwen2.5-Coder-32B-Instruct.IQ3_S.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-32B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct.IQ3_S.gguf) ([with YaRN](https://ciscai-gguf-editor.hf.space/download/CISCai/Qwen2.5-Coder-32B-Instruct-SOTA-GGUF/Qwen2.5-Coder-32B-Instruct.IQ3_S.gguf?branch=main&add=%5B%22qwen2.context_length%22,4,131072%5D&add=%5B%22qwen2.rope.scaling.type%22,8,%22yarn%22%5D&add=%5B%22qwen2.rope.scaling.factor%22,6,4%5D&add=%5B%22qwen2.rope.scaling.original_context_length%22,4,32768%5D)) | IQ3_S | 3 | 13.4 GB| 14.4 GB | small, greater quality loss |
+| [Qwen2.5-Coder-32B-Instruct.IQ3_M.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-32B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct.IQ3_M.gguf) ([with YaRN](https://ciscai-gguf-editor.hf.space/download/CISCai/Qwen2.5-Coder-32B-Instruct-SOTA-GGUF/Qwen2.5-Coder-32B-Instruct.IQ3_M.gguf?branch=main&add=%5B%22qwen2.context_length%22,4,131072%5D&add=%5B%22qwen2.rope.scaling.type%22,8,%22yarn%22%5D&add=%5B%22qwen2.rope.scaling.factor%22,6,4%5D&add=%5B%22qwen2.rope.scaling.original_context_length%22,4,32768%5D)) | IQ3_M | 3 | 13.8 GB| 14.8 GB | medium, balanced quality - recommended |
+| [Qwen2.5-Coder-32B-Instruct.IQ4_XS.gguf](https://huggingface.co/CISCai/Qwen2.5-Coder-32B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct.IQ4_XS.gguf) ([with YaRN](https://ciscai-gguf-editor.hf.space/download/CISCai/Qwen2.5-Coder-32B-Instruct-SOTA-GGUF/Qwen2.5-Coder-32B-Instruct.IQ4_XS.gguf?branch=main&add=%5B%22qwen2.context_length%22,4,131072%5D&add=%5B%22qwen2.rope.scaling.type%22,8,%22yarn%22%5D&add=%5B%22qwen2.rope.scaling.factor%22,6,4%5D&add=%5B%22qwen2.rope.scaling.original_context_length%22,4,32768%5D)) | IQ4_XS | 4 | 16.5 GB| 17.5 GB | small, substantial quality loss |
 
 Generated importance matrix file: [Qwen2.5-Coder-32B-Instruct.imatrix.dat](https://huggingface.co/CISCai/Qwen2.5-Coder-32B-Instruct-SOTA-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct.imatrix.dat)
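
The "with YaRN" links encode the metadata edits directly in the URL: each `add=` query parameter appears to be a URL-encoded JSON triple of `[key, type, value]`, where the type codes (4 = UINT32, 6 = FLOAT32, 8 = STRING) match the GGUF value-type enumeration. As a sketch (the triple interpretation is inferred from the links above, not from GGUF Editor documentation), the parameters of one link can be decoded like this:

```python
import json
from urllib.parse import urlsplit, parse_qsl

# One of the "with YaRN" download links from the table in this commit.
url = ("https://ciscai-gguf-editor.hf.space/download/CISCai/"
       "Qwen2.5-Coder-32B-Instruct-SOTA-GGUF/Qwen2.5-Coder-32B-Instruct.IQ4_XS.gguf"
       "?branch=main"
       "&add=%5B%22qwen2.context_length%22,4,131072%5D"
       "&add=%5B%22qwen2.rope.scaling.type%22,8,%22yarn%22%5D"
       "&add=%5B%22qwen2.rope.scaling.factor%22,6,4%5D"
       "&add=%5B%22qwen2.rope.scaling.original_context_length%22,4,32768%5D")

# GGUF metadata value-type codes (subset, from the GGUF specification).
GGUF_TYPES = {4: "UINT32", 6: "FLOAT32", 8: "STRING"}

# parse_qsl percent-decodes the values, leaving plain JSON arrays.
for key, value in parse_qsl(urlsplit(url).query):
    if key != "add":
        continue
    name, type_code, val = json.loads(value)
    print(f"{name} ({GGUF_TYPES.get(type_code, type_code)}) = {val!r}")
```

Decoded, the four edits set the context length to 131072, enable YaRN RoPE scaling with factor 4, and record the original 32768-token context, which is how these links serve full-context variants without re-uploading the quantized files.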