This is a reconversion / quantization of https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO

There was a breaking change in llama.cpp's GGUF file format in https://github.com/ggerganov/llama.cpp/pull/6387, and the https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF repo hasn't been updated since. Files in the old format prevent llama.cpp from memory-mapping the model, so it takes much longer to load than necessary even when the file is already in the IO cache.
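With the reconverted file, memory-mapped loading works again. Below is a minimal sketch using llama-cpp-python; the .gguf filename is hypothetical, so substitute the actual 4-bit file from this repo:

```python
# Minimal sketch: load the reconverted GGUF with llama-cpp-python.
# use_mmap=True (the default) memory-maps the weights, so loading is
# near-instant when the file is already in the OS page cache.
from llama_cpp import Llama

llm = Llama(
    model_path="Nous-Hermes-2-Mixtral-8x7B-DPO.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,      # context window
    use_mmap=True,   # explicit here for clarity; True is the default
)

out = llm("Write one sentence about mixtures of experts.", max_tokens=32)
print(out["choices"][0]["text"])
```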

GGUF model: 46.7B params, llama architecture, 4-bit quantization.
