Sampling:
Mistral-Nemo-12B is very sensitive to the temperature sampler; start with values near 0.3, otherwise you may get strange results. MistralAI notes this in the Transformers section of their model card.
In my personal testing, Flash-Attention also seems to have odd effects with this model, though this is unconfirmed. A hedged usage sketch follows below.
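
As a rough illustration of these settings, here is a minimal sketch using the llama-cpp-python bindings (an assumption; the card itself only points to llama.cpp). The GGUF filename, prompt, and every parameter other than temperature are placeholders, not values taken from this repo:

```python
from llama_cpp import Llama

# Load a local GGUF quant of this model (filename is a placeholder).
# flash_attn=False sidesteps the unconfirmed Flash-Attention oddities noted above.
llm = Llama(
    model_path="mistral-doryV2-12b-Q4_K_M.gguf",
    n_ctx=8192,
    n_gpu_layers=-1,
    flash_attn=False,
)

# Keep temperature near 0.3, as recommended for Mistral-Nemo-based models.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short haiku about rivers."}],
    temperature=0.3,
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```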

Original Model: BeaverAI/mistral-doryV2-12b

How to Use: llama.cpp

Original Model License: Apache 2.0

Release Used: b3441

Quants:

Available in 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit quantizations.
