llama-3.2-3B-lora-dummy-32 / train_results.json
{
"epoch": 0.992,
"total_flos": 5.244736447827149e+16,
"train_loss": 0.033818339808813984,
"train_runtime": 413.5784,
"train_samples_per_second": 2.418,
"train_steps_per_second": 0.075
}