Sampling:
Mistral-Nemo-12B is very sensitive to the temperature sampler; start with values near 0.3, or you may get strange results. MistralAI mentions this in the Transformers section of their model card.
In my personal testing, Flash-Attention also seems to have some odd effects with this model, though this is unconfirmed.
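As a minimal sketch of these settings (assuming llama-cpp-python as the runtime; the model path and prompt below are placeholders, and the quant filename is one of the files listed under Quants), something like the following keeps temperature near 0.3 and leaves Flash-Attention off:

```python
from llama_cpp import Llama

# Load a local GGUF quant (path is an assumption; use whichever quant you downloaded).
llm = Llama(
    model_path="mistral-doryV2-12b-Q4_K_M.gguf",
    n_ctx=4096,        # context window; adjust to taste
    flash_attn=False,  # leave Flash-Attention disabled, per the note above
)

# Keep temperature near 0.3, as recommended for Mistral-Nemo.
out = llm(
    "Write a one-sentence summary of the GGUF format.",
    max_tokens=128,
    temperature=0.3,
)
print(out["choices"][0]["text"])
```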
Original Model: BeaverAI/mistral-doryV2-12b
How to Use: llama.cpp
Original Model License: Apache 2.0
Release Used: b3441
Quants
| Name | Quant Type | Size |
| --- | --- | --- |
| mistral-doryV2-12b-Q2_K.gguf | Q2_K | 4.79 GB |
| mistral-doryV2-12b-Q3_K_S.gguf | Q3_K_S | 5.53 GB |
| mistral-doryV2-12b-Q3_K_M.gguf | Q3_K_M | 6.08 GB |
| mistral-doryV2-12b-Q3_K_L.gguf | Q3_K_L | 6.56 GB |
| mistral-doryV2-12b-Q4_K_S.gguf | Q4_K_S | 7.12 GB |
| mistral-doryV2-12b-Q4_K_M.gguf | Q4_K_M | 7.48 GB |
| mistral-doryV2-12b-Q5_K_S.gguf | Q5_K_S | 8.52 GB |
| mistral-doryV2-12b-Q5_K_M.gguf | Q5_K_M | 8.73 GB |
| mistral-doryV2-12b-Q6_K.gguf | Q6_K | 10.1 GB |
| mistral-doryV2-12b-Q8_0.gguf | Q8_0 | 13.0 GB |
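To fetch one of these quants programmatically rather than through the web UI, here is a short sketch using huggingface_hub (the repo id is taken from this page; swap in any filename from the table above):

```python
from huggingface_hub import hf_hub_download

# Download a single quant from this repo (filename must match the table above).
path = hf_hub_download(
    repo_id="starble-dev/mistral-doryV2-12b-gguf",
    filename="mistral-doryV2-12b-Q4_K_M.gguf",
)
print(path)  # local path to the cached GGUF file
```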
Base Model: mistralai/Mistral-Nemo-Base-2407 (finetuned as BeaverAI/mistral-doryV2-12b)