Upload README.md with huggingface_hub
tags:
- mergekit
- merge
---

## About
static quants of https://huggingface.co/wolfram/miquliz-120b-v2.0

weighted/imatrix quants are available at https://huggingface.co/mradermacher/miquliz-120b-v2.0-i1-GGUF

While other static and imatrix quants are already available, I wanted a wider selection of quants for this model.

<!-- provided-files -->

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
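In short, the split parts are plain byte slices of one GGUF file, so rebuilding the original is simple concatenation in part order. A minimal sketch, using tiny dummy files rather than the real parts:

```shell
# The split parts are raw byte slices; joining them is plain concatenation.
# Dummy files stand in for the real *.split-aa/-ab/-ac parts here.
printf 'AAA' > model.gguf.split-aa
printf 'BBB' > model.gguf.split-ab
printf 'CCC' > model.gguf.split-ac

# The shell glob sorts split-aa, split-ab, split-ac into the right order.
cat model.gguf.split-* > model.gguf
cat model.gguf   # prints AAABBBCCC
```

For the real files this would be e.g. `cat miquliz-120b-v2.0.Q6_K.gguf.split-* > miquliz-120b-v2.0.Q6_K.gguf`.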

## Provided Quants

| Link | Type | Size/GB | Notes |
|:-|:-|-:|:-|
| [PART 1](https://huggingface.co/mradermacher/miquliz-120b-v2.0-GGUF/resolve/main/miquliz-120b-v2.0.Q6_K.gguf.split-aa) [PART 2](https://huggingface.co/mradermacher/miquliz-120b-v2.0-GGUF/resolve/main/miquliz-120b-v2.0.Q6_K.gguf.split-ab) [PART 3](https://huggingface.co/mradermacher/miquliz-120b-v2.0-GGUF/resolve/main/miquliz-120b-v2.0.Q6_K.gguf.split-ac) | Q6_K | 99.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/miquliz-120b-v2.0-GGUF/resolve/main/miquliz-120b-v2.0.Q8_0.gguf.split-aa) [PART 2](https://huggingface.co/mradermacher/miquliz-120b-v2.0-GGUF/resolve/main/miquliz-120b-v2.0.Q8_0.gguf.split-ab) [PART 3](https://huggingface.co/mradermacher/miquliz-120b-v2.0-GGUF/resolve/main/miquliz-120b-v2.0.Q8_0.gguf.split-ac) | Q8_0 | 128.1 | fast, best quality |
<!-- end -->