
Model Card for mayacinka/OrpoLlama-3-8B

A quick Llama 3 8B fine-tune with ORPO, demonstrating that the model can be fine-tuned in only 2 hours. Thanks to Maxime Labonne's notebook:

https://colab.research.google.com/drive/1eHNWg9gnaXErdAa8_mcvjMupbSS6rDvi?usp=sharing

  • Number of training samples from the dataset: 1500 out of 40K
  • Hardware Type: L4
  • Hours of training: 2
  • Cloud Provider: Google Colab
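For context on the training objective: ORPO (Odds Ratio Preference Optimization) adds an odds-ratio penalty to the standard SFT loss, rewarding the model when it assigns higher odds to the chosen response than to the rejected one. Below is a minimal numeric sketch of that penalty term; the per-sequence probabilities `p_chosen` and `p_rejected` are illustrative placeholders, not values from this model.

```python
import math

def odds(p: float) -> float:
    # odds(p) = p / (1 - p), as used in the ORPO objective
    return p / (1.0 - p)

def orpo_or_loss(p_chosen: float, p_rejected: float) -> float:
    # Odds-ratio term: -log sigmoid(log(odds(p_chosen) / odds(p_rejected)))
    log_odds_ratio = math.log(odds(p_chosen) / odds(p_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-log_odds_ratio)))

# When the model already strongly prefers the chosen response,
# the penalty is small; when it is indifferent, the penalty is log(2).
print(orpo_or_loss(0.9, 0.1))
print(orpo_or_loss(0.5, 0.5))
```

The full ORPO loss combines this term (scaled by a weight λ) with the usual cross-entropy loss on the chosen response, which is why a single short training run can both fine-tune and align the model.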
  • Format: Safetensors
  • Model size: 8.03B params
  • Tensor type: FP16

Dataset used to train mayacinka/OrpoLlama-3-8B