This model was converted to GGUF format from nvidia/Mistral-NeMo-Minitron-8B-Instruct and quantized to Q2_K using the llama.cpp library.
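Below is a minimal sketch of loading and running this quantized model with the llama-cpp-python bindings. The glob pattern passed as `filename` is an assumption; check the repository's file list for the exact Q2_K filename.

```python
# Requires: pip install llama-cpp-python
from llama_cpp import Llama

# Download the GGUF file from the Hub and load it.
# The filename glob below is an assumption; verify the exact
# Q2_K filename in the repo's "Files and versions" tab.
llm = Llama.from_pretrained(
    repo_id="Manel/Mistral-NeMo-Minitron-8B-Instruct-Q2_K-GGUF",
    filename="*q2_k.gguf",  # glob matching the Q2_K quant file
    n_ctx=4096,             # context window; adjust to your memory budget
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what GGUF is in one sentence."}]
)
print(response["choices"][0]["message"]["content"])
```

The same file can also be served from the llama.cpp command line (e.g. `llama-cli -m <model>.gguf`) if you prefer not to use the Python bindings.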

Model details:

- Format: GGUF
- Model size: 8.41B params
- Architecture: llama
- Quantization: 2-bit (Q2_K)