---
library_name: transformers
base_model:
  - mistralai/Mistral-Nemo-Instruct-2407
datasets:
  - nbeerbower/bible-dpo
license: apache-2.0
---

# QuantFactory/HolyNemo-12B-GGUF

This is a quantized version of nbeerbower/HolyNemo-12B, created using llama.cpp.
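
For reference, here is a minimal sketch of loading one of the GGUF quants with the `llama-cpp-python` bindings. The quantization filename used below is an assumption; check the file list in this repo and substitute the quant you actually downloaded.

```python
# Minimal sketch: run a GGUF quant of HolyNemo-12B via llama-cpp-python.
# The GGUF filename is an assumption; pick one from this repo's file list.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/HolyNemo-12B-GGUF",
    filename="HolyNemo-12B.Q4_K_M.gguf",  # assumed filename, verify against the repo
    n_ctx=4096,  # context window; Mistral Nemo supports longer contexts if needed
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize Psalm 23 in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```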

## Original Model Card

### HolyNemo-12B

mistralai/Mistral-Nemo-Instruct-2407 fine-tuned on nbeerbower/bible-dpo.

#### Method

Fine-tuned for 1 epoch on a single A100 in Google Colab.

Reference: *Fine-tune Llama 3 with ORPO*.
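
Below is an illustrative sketch of a preference-tuning run with TRL's `ORPOTrainer`, following the referenced recipe. This is not the exact training script: the hyperparameters are placeholders, and it assumes the nbeerbower/bible-dpo dataset exposes `prompt`/`chosen`/`rejected` columns.

```python
# Illustrative ORPO fine-tuning sketch (placeholder hyperparameters, not the
# values actually used for HolyNemo-12B). Assumes the dataset provides the
# standard "prompt"/"chosen"/"rejected" preference columns expected by TRL.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "mistralai/Mistral-Nemo-Instruct-2407"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

dataset = load_dataset("nbeerbower/bible-dpo", split="train")

config = ORPOConfig(
    output_dir="HolyNemo-12B",
    num_train_epochs=1,              # the card states 1 epoch
    per_device_train_batch_size=1,   # placeholder; sized for a single A100
    gradient_accumulation_steps=8,   # placeholder
    learning_rate=5e-6,              # placeholder
    max_length=1024,
    max_prompt_length=512,
    beta=0.1,                        # ORPO preference weight; placeholder
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,  # `processing_class=` on newer TRL releases
)
trainer.train()
```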