---
license: apache-2.0
tags:
- mistral
- conversational
- text-generation-inference
base_model: BeaverAI/mistral-doryV2-12b
library_name: transformers
---

> [!WARNING]
> **Sampling:**<br>
> Mistral-Nemo-12B is very sensitive to the temperature sampler; try values near **0.3** at first, or you may get strange results. MistralAI notes this in the [Transformers](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407#transformers) section of their model card.<br>
> Flash-Attention also appears to have odd effects with this model, though this is unconfirmed.

**Original Model:**
[BeaverAI/mistral-doryV2-12b](https://huggingface.co/BeaverAI/mistral-doryV2-12b)

**How to Use:**
Use the GGUF files below with [llama.cpp](https://github.com/ggerganov/llama.cpp) or any llama.cpp-based frontend, as in the sketch below.
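
For example, a minimal run might look like this (an illustrative sketch, not from the original card: recent llama.cpp builds name the CLI binary `llama-cli`, older ones use `main`; the file path and the Q4_K_M choice are assumptions, taken from the table below):

```bash
# Minimal sketch: run one of the quants below with llama.cpp's CLI.
# Binary name and model path are assumptions; adjust to your build and download location.
# --temp 0.3 follows the sampling warning above.
./llama-cli -m mistral-doryV2-12b_Q4_K_M.gguf --temp 0.3 -n 256 -p "Write a short story about a fox."
```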

**License:**
Apache 2.0

# Quants
| Name | Quant Type | Size |
| ---- | ---- | ---- |
| [mistral-doryV2-12b_Q2_K.gguf](https://huggingface.co/starble-dev/mistral-doryV2-12b-gguf/blob/main/mistral-doryV2-12b_Q2_K.gguf) | Q2_K | 4.79 GB |
| [mistral-doryV2-12b_Q3_K_M.gguf](https://huggingface.co/starble-dev/mistral-doryV2-12b-gguf/blob/main/mistral-doryV2-12b_Q3_K_M.gguf) | Q3_K_M | 6.08 GB |
| [mistral-doryV2-12b_Q4_K_M.gguf](https://huggingface.co/starble-dev/mistral-doryV2-12b-gguf/blob/main/mistral-doryV2-12b_Q4_K_M.gguf) | Q4_K_M | 7.48 GB |
| [mistral-doryV2-12b_Q5_K_M.gguf](https://huggingface.co/starble-dev/mistral-doryV2-12b-gguf/blob/main/mistral-doryV2-12b_Q5_K_M.gguf) | Q5_K_M | 8.73 GB |
| [mistral-doryV2-12b_Q6_K.gguf](https://huggingface.co/starble-dev/mistral-doryV2-12b-gguf/blob/main/mistral-doryV2-12b_Q6_K.gguf) | Q6_K | 10.1 GB |
| [mistral-doryV2-12b_Q8_0.gguf](https://huggingface.co/starble-dev/mistral-doryV2-12b-gguf/blob/main/mistral-doryV2-12b_Q8_0.gguf) | Q8_0 | 13.0 GB |
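
To grab a single file, one option (an assumption, not from the original card) is the `huggingface-cli` tool that ships with the `huggingface_hub` Python package:

```bash
# Sketch: download one quant from this repo with huggingface-cli
# (pip install -U huggingface_hub). File choice and --local-dir are illustrative.
huggingface-cli download starble-dev/mistral-doryV2-12b-gguf \
  mistral-doryV2-12b_Q4_K_M.gguf --local-dir .
```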