# Baghdad99/saad-opus-en-to-ha
This model is a fine-tuned version of dammyogt/damilola-finetuned-NLP-opus-mt-en-ha on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 2.0684
- Validation Loss: 3.9404
- Epoch: 24
## Model description
More information needed
## Intended uses & limitations
More information needed
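Although the card gives no usage details, the model follows the standard Marian translation setup of its `opus-mt-en-ha` ancestry, so it should load through the `transformers` translation pipeline. A minimal sketch, assuming the `transformers` library is installed and the Hub checkpoint is reachable (the repo id is taken from this card's title; the input sentence is illustrative):

```python
# Hedged sketch: English-to-Hausa translation with this checkpoint.
# Assumes `transformers` is installed and the model can be downloaded from the Hub.
from transformers import pipeline

# The repo id comes from this card's title.
translator = pipeline("translation", model="Baghdad99/saad-opus-en-to-ha")

# The pipeline returns a list of dicts with a "translation_text" key.
result = translator("How are you today?")
print(result[0]["translation_text"])
```

The output quality should be judged against the limitations implied by the training curve below (validation loss plateaus near 3.94).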
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
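The optimizer dict and precision setting above can be reconstructed in code. A minimal sketch, assuming TensorFlow and `transformers` are installed; every value is copied from the hyperparameter list (the legacy `decay: 0.0` entry is a no-op and is omitted):

```python
# Config sketch: re-creating the training setup described in this card.
import tensorflow as tf
from transformers import AdamWeightDecay

# training_precision: mixed_float16
tf.keras.mixed_precision.set_global_policy("mixed_float16")

# Optimizer values copied from the hyperparameter list above.
optimizer = AdamWeightDecay(
    learning_rate=2e-5,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-8,
    amsgrad=False,
)
```

`AdamWeightDecay` applies decoupled weight decay on top of Adam, which is why `weight_decay_rate` is listed separately from the Adam betas.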
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 5.2122 | 4.9730 | 0 |
| 4.6878 | 4.6863 | 1 |
| 4.3583 | 4.5021 | 2 |
| 4.0971 | 4.3766 | 3 |
| 3.9068 | 4.2787 | 4 |
| 3.7374 | 4.2102 | 5 |
| 3.5895 | 4.1610 | 6 |
| 3.4658 | 4.1177 | 7 |
| 3.3482 | 4.0878 | 8 |
| 3.2306 | 4.0586 | 9 |
| 3.1316 | 4.0345 | 10 |
| 3.0379 | 4.0138 | 11 |
| 2.9368 | 3.9958 | 12 |
| 2.8479 | 3.9840 | 13 |
| 2.7622 | 3.9701 | 14 |
| 2.6861 | 3.9627 | 15 |
| 2.6094 | 3.9544 | 16 |
| 2.5217 | 3.9510 | 17 |
| 2.4538 | 3.9470 | 18 |
| 2.3906 | 3.9395 | 19 |
| 2.3162 | 3.9393 | 20 |
| 2.2632 | 3.9368 | 21 |
| 2.1933 | 3.9382 | 22 |
| 2.1325 | 3.9424 | 23 |
| 2.0684 | 3.9404 | 24 |
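The table shows train loss falling steadily while validation loss flattens near 3.94 and ticks back up after epoch 21, a typical overfitting signature, so the checkpoint worth keeping is the one with the lowest validation loss rather than the final epoch. A small sketch, with values copied straight from the table:

```python
# Validation losses per epoch, copied from the table above (epochs 0-24).
val_losses = [
    4.9730, 4.6863, 4.5021, 4.3766, 4.2787, 4.2102, 4.1610, 4.1177,
    4.0878, 4.0586, 4.0345, 4.0138, 3.9958, 3.9840, 3.9701, 3.9627,
    3.9544, 3.9510, 3.9470, 3.9395, 3.9393, 3.9368, 3.9382, 3.9424,
    3.9404,
]

# Pick the epoch with the lowest validation loss, not the last one.
best_epoch = min(range(len(val_losses)), key=val_losses.__getitem__)
print(best_epoch, val_losses[best_epoch])  # → 21 3.9368
```

In a Keras training loop this selection is usually automated with a `ModelCheckpoint(save_best_only=True)` or `EarlyStopping` callback monitoring `val_loss`.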
### Framework versions
- Transformers 4.35.0
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
## Model tree for Baghdad99/saad-english-text-to-hausa-text

- Base model: Helsinki-NLP/opus-mt-en-ha