---
license: apache-2.0
base_model: bedus-creation/eng-limbu-t5-base-all-001
tags:
- generated_from_keras_callback
model-index:
- name: bedus-creation/eng-limbu-t5-base-all-001
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bedus-creation/eng-limbu-t5-base-all-001
This model is a fine-tuned version of [bedus-creation/eng-limbu-t5-base-all-001](https://huggingface.co/bedus-creation/eng-limbu-t5-base-all-001) on an unknown dataset.
It achieves the following results after the final training epoch:
- Train Loss: 1.7242
- Validation Loss: 2.6274
- Epoch: 64
## Model description
More information needed
## Intended uses & limitations
More information needed
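
The checkpoint can be loaded with the TensorFlow classes in `transformers`. The snippet below is a minimal inference sketch; it assumes the model is intended for English-to-Limbu translation (inferred from the model name, not documented in this card) and that the Hub repository includes TensorFlow weights and a tokenizer.

```python
# Minimal inference sketch. Assumptions: the Hub repository contains TF weights
# and a tokenizer, and the task is English-to-Limbu translation (inferred from
# the model name; not confirmed in this card).
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "bedus-creation/eng-limbu-t5-base-all-001"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

# Encode an English sentence and generate the target-language output.
inputs = tokenizer("Hello, how are you?", return_tensors="tf")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```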
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: AdamWeightDecay (learning_rate: 2e-05, decay: 0.0, beta_1: 0.9, beta_2: 0.999, epsilon: 1e-07, amsgrad: False, weight_decay_rate: 0.01)
- training_precision: float32
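
For reference, this logged configuration matches the `AdamWeightDecay` optimizer shipped with the TensorFlow utilities in `transformers`. The sketch below shows how such an optimizer could be instantiated from these settings; it is an illustration, not the original training script.

```python
# Sketch: rebuilding the logged optimizer settings with transformers' Keras
# AdamWeightDecay. This mirrors the hyperparameters listed above; the actual
# training script is not part of this model card.
from transformers import AdamWeightDecay

optimizer = AdamWeightDecay(
    learning_rate=2e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
    weight_decay_rate=0.01,  # decay=0.0 in the log is the legacy Keras LR-decay field
)

# The model would then be compiled for Keras training, e.g.:
# model.compile(optimizer=optimizer)  # transformers TF models can compute loss internally
```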
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 7.0062 | 6.1115 | 0 |
| 6.0720 | 5.8817 | 1 |
| 5.8833 | 5.7515 | 2 |
| 5.7643 | 5.6312 | 3 |
| 5.6159 | 5.5281 | 4 |
| 5.5133 | 5.4337 | 5 |
| 5.4239 | 5.3227 | 6 |
| 5.3002 | 5.2327 | 7 |
| 5.1915 | 5.1267 | 8 |
| 5.1029 | 5.0370 | 9 |
| 4.9916 | 4.9413 | 10 |
| 4.8633 | 4.8633 | 11 |
| 4.7651 | 4.7806 | 12 |
| 4.6682 | 4.7019 | 13 |
| 4.5570 | 4.6346 | 14 |
| 4.4718 | 4.5772 | 15 |
| 4.3830 | 4.5084 | 16 |
| 4.2749 | 4.4127 | 17 |
| 4.1922 | 4.3616 | 18 |
| 4.1207 | 4.3160 | 19 |
| 4.0164 | 4.2560 | 20 |
| 3.9700 | 4.1961 | 21 |
| 3.8745 | 4.1515 | 22 |
| 3.8068 | 4.0910 | 23 |
| 3.7149 | 4.0444 | 24 |
| 3.6474 | 3.9920 | 25 |
| 3.5522 | 3.9630 | 26 |
| 3.5127 | 3.8822 | 27 |
| 3.4414 | 3.8390 | 28 |
| 3.3722 | 3.7892 | 29 |
| 3.2981 | 3.7517 | 30 |
| 3.2240 | 3.7112 | 31 |
| 3.1878 | 3.6488 | 32 |
| 3.1070 | 3.6168 | 33 |
| 3.0528 | 3.5680 | 34 |
| 2.9806 | 3.5328 | 35 |
| 2.9294 | 3.4970 | 36 |
| 2.8907 | 3.4519 | 37 |
| 2.8304 | 3.4270 | 38 |
| 2.7737 | 3.3785 | 39 |
| 2.7023 | 3.3517 | 40 |
| 2.6705 | 3.3207 | 41 |
| 2.6218 | 3.2700 | 42 |
| 2.5651 | 3.2356 | 43 |
| 2.5065 | 3.2072 | 44 |
| 2.4517 | 3.1826 | 45 |
| 2.4043 | 3.1395 | 46 |
| 2.3662 | 3.0882 | 47 |
| 2.3240 | 3.0693 | 48 |
| 2.2801 | 3.0547 | 49 |
| 2.2304 | 3.0123 | 50 |
| 2.1928 | 2.9626 | 51 |
| 2.1492 | 2.9453 | 52 |
| 2.1062 | 2.9063 | 53 |
| 2.0650 | 2.8974 | 54 |
| 2.0331 | 2.8556 | 55 |
| 1.9951 | 2.8444 | 56 |
| 1.9559 | 2.7950 | 57 |
| 1.9095 | 2.7815 | 58 |
| 1.8837 | 2.7437 | 59 |
| 1.8460 | 2.7277 | 60 |
| 1.8221 | 2.6958 | 61 |
| 1.7798 | 2.6683 | 62 |
| 1.7431 | 2.6418 | 63 |
| 1.7242 | 2.6274 | 64 |
### Framework versions
- Transformers 4.33.2
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3