auto-patch README.md
README.md
@@ -3,6 +3,7 @@ base_model: aixonlab/Zinakha-12b
 language:
 - en
 library_name: transformers
+license: apache-2.0
 quantized_by: mradermacher
 tags: []
 ---
@@ -16,6 +17,7 @@ tags: []
 weighted/imatrix quants of https://huggingface.co/aixonlab/Zinakha-12b
 
 <!-- provided-files -->
+static quants are available at https://huggingface.co/mradermacher/Zinakha-12b-GGUF
 ## Usage
 
 If you are unsure how to use GGUF files, refer to one of [TheBloke's
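The Usage section of the patched README defers to TheBloke's READMEs for GGUF handling. As a rough illustration only, the sketch below uses huggingface_hub to fetch one quant from the static-quants repo linked in the added line; the quant filename is an assumption and should be checked against the repo's actual file list.

```python
# Minimal sketch (not part of this commit): download one static GGUF quant of
# Zinakha-12b from the repo linked in the README and print the local path.
# The filename "Zinakha-12b.Q4_K_M.gguf" is an assumption -- pick the quant you
# actually want from the repo's file list.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="mradermacher/Zinakha-12b-GGUF",  # static quants repo linked above
    filename="Zinakha-12b.Q4_K_M.gguf",       # assumed filename; verify on the repo page
)
print(gguf_path)
# The downloaded file can then be loaded by any GGUF-capable runtime,
# e.g. llama.cpp:  llama-cli -m <gguf_path> -p "Hello"
```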