---
license: gemma
base_model: google/gemma-2-2b
tags:
- easylm
- alignment-handbook
- trl
- sft
- generated_from_trainer
datasets:
- ultrafeedback-sft
model-index:
- name: easylm-ultrafeedback-sft-gemma-2-2b
  results: []
---

# easylm-ultrafeedback-sft-gemma-2-2b

This model is a fine-tuned version of [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b) on the ultrafeedback-sft dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2897

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 2
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1

### Training results

| Training Loss | Epoch  | Step  | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 1.5578        | 0.0371 | 500   | 1.4651          |
| 1.4645        | 0.0742 | 1000  | 1.4362          |
| 1.4198        | 0.1113 | 1500  | 1.4196          |
| 1.3469        | 0.1484 | 2000  | 1.4051          |
| 1.3816        | 0.1855 | 2500  | 1.3920          |
| 1.3653        | 0.2226 | 3000  | 1.3809          |
| 1.4087        | 0.2596 | 3500  | 1.3715          |
| 1.2973        | 0.2967 | 4000  | 1.3615          |
| 1.348         | 0.3338 | 4500  | 1.3545          |
| 1.4639        | 0.3709 | 5000  | 1.3480          |
| 1.4405        | 0.4080 | 5500  | 1.3408          |
| 1.2926        | 0.4451 | 6000  | 1.3349          |
| 1.3452        | 0.4822 | 6500  | 1.3268          |
| 1.3076        | 0.5193 | 7000  | 1.3202          |
| 1.2696        | 0.5564 | 7500  | 1.3154          |
| 1.3833        | 0.5935 | 8000  | 1.3104          |
| 1.3217        | 0.6306 | 8500  | 1.3060          |
| 1.2351        | 0.6677 | 9000  | 1.3026          |
| 1.5295        | 0.7047 | 9500  | 1.2990          |
| 1.293         | 0.7418 | 10000 | 1.2967          |
| 1.2231        | 0.7789 | 10500 | 1.2942          |
| 1.2721        | 0.8160 | 11000 | 1.2926          |
| 1.3877        | 0.8531 | 11500 | 1.2913          |
| 1.2929        | 0.8902 | 12000 | 1.2903          |
| 1.4017        | 0.9273 | 12500 | 1.2900          |
| 1.2126        | 0.9644 | 13000 | 1.2897          |

### Framework versions

- Transformers 4.43.3
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
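
### Reproducing the run (sketch)

The training script itself is not included in this card. The snippet below is a minimal sketch of how the hyperparameters above could map onto `trl`'s `SFTTrainer` (the card's `trl`/`sft`/`generated_from_trainer` tags suggest a TRL-based run); the dataset id `ultrafeedback-sft`, its split names, and its column format are assumptions, not confirmed details.

```python
# Hedged reproduction sketch: hyperparameters are taken from this card;
# the dataset id, split names, and column format are assumptions.
# Written against trl 0.9.x / transformers 4.43.x.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("ultrafeedback-sft")  # assumed Hub dataset id

config = SFTConfig(
    output_dir="easylm-ultrafeedback-sft-gemma-2-2b",
    learning_rate=3e-6,
    per_device_train_batch_size=1,  # 2 GPUs -> total train batch size 2
    per_device_eval_batch_size=1,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    seed=42,
    bf16=True,
    eval_strategy="steps",
    eval_steps=500,  # matches the 500-step eval cadence in the table above
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults.
)

trainer = SFTTrainer(
    model="google/gemma-2-2b",       # SFTTrainer loads model and tokenizer
    args=config,
    train_dataset=dataset["train"],  # split names are assumptions
    eval_dataset=dataset["test"],
)
trainer.train()
```

Launching the script with `accelerate launch --num_processes 2 train.py` would give the 2-GPU data-parallel setup implied by `num_devices: 2`.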
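
## How to use

A minimal inference sketch with `transformers`. The repository id below is a placeholder; substitute the id this checkpoint is actually published under.

```python
# Minimal inference sketch; "your-org/easylm-ultrafeedback-sft-gemma-2-2b"
# is a hypothetical repository id, not a confirmed one.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/easylm-ultrafeedback-sft-gemma-2-2b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # Gemma-2 checkpoints ship in bfloat16
    device_map="auto",           # spread layers across available devices
)

prompt = "Explain supervised fine-tuning in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```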