Kartoffel-Deepfry-12B

mistral-nemo-kartoffel-12B fine-tuned on Schule-DPO.

Method

ORPO-tuned with QLoRA on 1x RTX A6000 for 5 epochs. LoRA rank 16, alpha 32, learning rate 2e-4 with a cosine schedule.
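A minimal sketch of how such an ORPO + QLoRA run could be configured with TRL. The dataset path (nbeerbower/Schule-DPO), its prompt/chosen/rejected column layout, the LoRA target modules, dropout, and batch sizes are assumptions, not stated in this card.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig
from trl import ORPOConfig, ORPOTrainer

base = "nbeerbower/mistral-nemo-kartoffel-12B"

# 4-bit NF4 quantization of the base model (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base)

# Rank-16 LoRA with alpha 32, as stated above; target modules are a guess
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,                  # assumed
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Assumed dataset location; expects prompt/chosen/rejected preference pairs
dataset = load_dataset("nbeerbower/Schule-DPO", split="train")

args = ORPOConfig(
    output_dir="kartoffel-deepfry-12b",
    num_train_epochs=5,
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    per_device_train_batch_size=2,      # assumed; not stated in the card
    gradient_accumulation_steps=8,      # assumed
    bf16=True,
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    tokenizer=tokenizer,                # named processing_class in newer TRL releases
    peft_config=peft_config,
)
trainer.train()
```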

Model size: 12.2B params
Tensor type: BF16 (Safetensors)
