# QuantFactory/HolyNemo-12B-GGUF
This is a quantized version of nbeerbower/HolyNemo-12B, created using llama.cpp.
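A GGUF quant like this one can be loaded with llama.cpp or its Python bindings. The snippet below is a minimal sketch using llama-cpp-python; the quant filename, context size, and prompt are illustrative assumptions, not part of this repository's documentation.

```python
# Minimal sketch: running the GGUF with llama-cpp-python.
# The filename is a hypothetical quant choice; adjust to the file you download.
from llama_cpp import Llama

llm = Llama(
    model_path="HolyNemo-12B.Q4_K_M.gguf",  # hypothetical quantization level
    n_ctx=4096,          # context window to allocate
    n_gpu_layers=-1,     # offload all layers to GPU if one is available
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize Psalm 23 in one sentence."}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```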
## Original Model Card

### HolyNemo-12B
mistralai/Mistral-Nemo-Instruct-2407 finetuned on nbeerbower/bible-dpo.
#### Method
Finetuned using an A100 on Google Colab for 1 epoch.
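For reference, a DPO fine-tune along these lines can be expressed with Hugging Face's trl library. This is only a hedged sketch: the trainer, hyperparameters, and dataset column format are assumptions, not the author's actual Colab script.

```python
# Illustrative one-epoch DPO fine-tune; not the author's actual training script.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "mistralai/Mistral-Nemo-Instruct-2407"
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base)

# Assumes the dataset exposes prompt/chosen/rejected columns, as DPOTrainer expects.
dataset = load_dataset("nbeerbower/bible-dpo", split="train")

args = DPOConfig(
    output_dir="HolyNemo-12B",
    num_train_epochs=1,             # single epoch, as reported above
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,  # illustrative; chosen to fit one A100
    bf16=True,
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,  # named `tokenizer=` in older trl releases
)
trainer.train()
```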
## Model tree for QuantFactory/HolyNemo-12B-GGUF

- Base model: mistralai/Mistral-Nemo-Base-2407
- Finetuned: mistralai/Mistral-Nemo-Instruct-2407