# slack-emoji-generator
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.3188
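The card does not document the expected prompt or output format, so the following is a minimal inference sketch under the assumption of a plain text-to-text setup; the example input is hypothetical.

```python
# Minimal inference sketch. The example prompt is hypothetical: the card
# does not document how inputs were formatted during training.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "smjung8710/slack-emoji-generator"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("ship it on friday", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```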
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a code sketch mapping them to `Seq2SeqTrainingArguments` follows the list):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 50
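For reference, here is a sketch of how the values above map onto `Seq2SeqTrainingArguments`. The output directory is a hypothetical placeholder, and the dataset and preprocessing are not documented, so this reproduces only the configuration, not the full training run.

```python
# Sketch: the hyperparameters listed above expressed as
# Seq2SeqTrainingArguments. Only the listed values come from this card;
# output_dir is a hypothetical placeholder.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="slack-emoji-generator",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=50,
)
```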
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 18.1521 | 2.5 | 10 | 17.1681 |
| 15.4149 | 5.0 | 20 | 11.3836 |
| 10.3577 | 7.5 | 30 | 6.3989 |
| 4.866 | 10.0 | 40 | 1.3543 |
| 2.276 | 12.5 | 50 | 0.6950 |
| 1.4199 | 15.0 | 60 | 0.6021 |
| 1.0376 | 17.5 | 70 | 0.5347 |
| 0.7419 | 20.0 | 80 | 0.5178 |
| 0.7222 | 22.5 | 90 | 0.4806 |
| 0.6657 | 25.0 | 100 | 0.4621 |
| 0.6824 | 27.5 | 110 | 0.4357 |
| 0.6017 | 30.0 | 120 | 0.4310 |
| 0.5891 | 32.5 | 130 | 0.4146 |
| 0.5646 | 35.0 | 140 | 0.3816 |
| 0.5527 | 37.5 | 150 | 0.3708 |
| 0.5031 | 40.0 | 160 | 0.3625 |
| 0.4712 | 42.5 | 170 | 0.3439 |
| 0.4247 | 45.0 | 180 | 0.3290 |
| 0.4768 | 47.5 | 190 | 0.3219 |
| 0.4949 | 50.0 | 200 | 0.3188 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.0+cu121
- Tokenizers 0.20.3