mradermacher committed
Commit 4637ef6 · verified · 1 Parent(s): f0d31ef

auto-patch README.md

Files changed (1):
  1. README.md +19 -0
README.md CHANGED
@@ -31,10 +31,29 @@ more details, including on how to concatenate multi-part files.
 
 | Link | Type | Size/GB | Notes |
 |:-----|:-----|--------:|:------|
+| [GGUF](https://huggingface.co/mradermacher/GPT4chan-24B-i1-GGUF/resolve/main/GPT4chan-24B.i1-IQ1_S.gguf) | i1-IQ1_S | 5.4 | for the desperate |
+| [GGUF](https://huggingface.co/mradermacher/GPT4chan-24B-i1-GGUF/resolve/main/GPT4chan-24B.i1-IQ1_M.gguf) | i1-IQ1_M | 5.9 | mostly desperate |
+| [GGUF](https://huggingface.co/mradermacher/GPT4chan-24B-i1-GGUF/resolve/main/GPT4chan-24B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.6 | |
+| [GGUF](https://huggingface.co/mradermacher/GPT4chan-24B-i1-GGUF/resolve/main/GPT4chan-24B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.3 | |
+| [GGUF](https://huggingface.co/mradermacher/GPT4chan-24B-i1-GGUF/resolve/main/GPT4chan-24B.i1-IQ2_S.gguf) | i1-IQ2_S | 7.6 | |
+| [GGUF](https://huggingface.co/mradermacher/GPT4chan-24B-i1-GGUF/resolve/main/GPT4chan-24B.i1-IQ2_M.gguf) | i1-IQ2_M | 8.2 | |
+| [GGUF](https://huggingface.co/mradermacher/GPT4chan-24B-i1-GGUF/resolve/main/GPT4chan-24B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 8.4 | very low quality |
 | [GGUF](https://huggingface.co/mradermacher/GPT4chan-24B-i1-GGUF/resolve/main/GPT4chan-24B.i1-Q2_K.gguf) | i1-Q2_K | 9.0 | IQ3_XXS probably better |
 | [GGUF](https://huggingface.co/mradermacher/GPT4chan-24B-i1-GGUF/resolve/main/GPT4chan-24B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.4 | lower quality |
+| [GGUF](https://huggingface.co/mradermacher/GPT4chan-24B-i1-GGUF/resolve/main/GPT4chan-24B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.0 | |
+| [GGUF](https://huggingface.co/mradermacher/GPT4chan-24B-i1-GGUF/resolve/main/GPT4chan-24B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.5 | IQ3_XS probably better |
+| [GGUF](https://huggingface.co/mradermacher/GPT4chan-24B-i1-GGUF/resolve/main/GPT4chan-24B.i1-IQ3_S.gguf) | i1-IQ3_S | 10.5 | beats Q3_K* |
 | [GGUF](https://huggingface.co/mradermacher/GPT4chan-24B-i1-GGUF/resolve/main/GPT4chan-24B.i1-IQ3_M.gguf) | i1-IQ3_M | 10.8 | |
+| [GGUF](https://huggingface.co/mradermacher/GPT4chan-24B-i1-GGUF/resolve/main/GPT4chan-24B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.6 | IQ3_S probably better |
+| [GGUF](https://huggingface.co/mradermacher/GPT4chan-24B-i1-GGUF/resolve/main/GPT4chan-24B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.5 | IQ3_M probably better |
+| [GGUF](https://huggingface.co/mradermacher/GPT4chan-24B-i1-GGUF/resolve/main/GPT4chan-24B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.9 | |
+| [GGUF](https://huggingface.co/mradermacher/GPT4chan-24B-i1-GGUF/resolve/main/GPT4chan-24B.i1-Q4_0.gguf) | i1-Q4_0 | 13.6 | fast, low quality |
 | [GGUF](https://huggingface.co/mradermacher/GPT4chan-24B-i1-GGUF/resolve/main/GPT4chan-24B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 13.6 | optimal size/speed/quality |
+| [GGUF](https://huggingface.co/mradermacher/GPT4chan-24B-i1-GGUF/resolve/main/GPT4chan-24B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 14.4 | fast, recommended |
+| [GGUF](https://huggingface.co/mradermacher/GPT4chan-24B-i1-GGUF/resolve/main/GPT4chan-24B.i1-Q4_1.gguf) | i1-Q4_1 | 15.0 | |
+| [GGUF](https://huggingface.co/mradermacher/GPT4chan-24B-i1-GGUF/resolve/main/GPT4chan-24B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.4 | |
+| [GGUF](https://huggingface.co/mradermacher/GPT4chan-24B-i1-GGUF/resolve/main/GPT4chan-24B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 16.9 | |
+| [GGUF](https://huggingface.co/mradermacher/GPT4chan-24B-i1-GGUF/resolve/main/GPT4chan-24B.i1-Q6_K.gguf) | i1-Q6_K | 19.4 | practically like static Q6_K |
 
 Here is a handy graph by ikawrakow comparing some lower-quality quant
 types (lower is better):
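
Each row in the patched table links directly to a single GGUF file, so any of them can be fetched programmatically. As a minimal sketch (assuming the `huggingface_hub` Python package is installed; picking the Q4_K_S quant from the table is only an example choice), a download might look like:

```python
# Minimal sketch: fetch one quant from this repo with huggingface_hub.
# Assumes `pip install huggingface_hub`; the filename below is the
# Q4_K_S entry from the table above and is only an example pick.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/GPT4chan-24B-i1-GGUF",
    filename="GPT4chan-24B.i1-Q4_K_S.gguf",
)
print(path)  # local cache path of the downloaded GGUF file
```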