# fine-tuned-DatasetQAS-TYDI-QA-ID-with-xlm-roberta-large-with-ITTL-without-freeze-LR-1e-05
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the Indonesian subset of the TyDi QA dataset (DatasetQAS-TYDI-QA-ID).
It achieves the following results on the evaluation set:
- Loss: 0.9402
- Exact Match: 69.3662
- F1: 82.0036
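The card does not include a usage snippet, so here is a minimal inference sketch with the `transformers` question-answering pipeline. The repo id and the Indonesian question/context pair are illustrative assumptions, not taken from the card; substitute the actual Hub path of this checkpoint.

```python
from transformers import pipeline

# Assumed repo id -- replace with the real Hub path of this checkpoint.
MODEL_ID = "fine-tuned-DatasetQAS-TYDI-QA-ID-with-xlm-roberta-large-with-ITTL-without-freeze-LR-1e-05"

# Extractive QA: the model predicts an answer span inside `context`.
qa = pipeline("question-answering", model=MODEL_ID)

result = qa(
    question="Siapa presiden pertama Indonesia?",  # "Who was Indonesia's first president?"
    context="Soekarno adalah presiden pertama Republik Indonesia.",
)
print(result)  # e.g. {'score': ..., 'start': 0, 'end': 8, 'answer': 'Soekarno'}
```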
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
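These settings map onto Hugging Face `TrainingArguments` roughly as sketched below. This is a hedged reconstruction, not the original training script (the output path is a placeholder, and the ITTL setup referenced in the model name is not shown in the card); note that the effective batch size is 8 × 16 = 128.

```python
from transformers import TrainingArguments

# Sketch reconstructing the listed hyperparameters; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="./results",            # assumption: not stated in the card
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,    # effective train batch: 8 * 16 = 128
    lr_scheduler_type="linear",
    num_train_epochs=10,
    seed=42,
    # Adam with betas=(0.9, 0.999) and eps=1e-8 is the Trainer's default optimizer.
)
```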
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact Match | F1      |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:-------:|
| 6.2837        | 0.5   | 19   | 3.6986          | 8.4507      | 17.7536 |
| 6.2837        | 0.99  | 38   | 2.5899          | 18.4859     | 29.7766 |
| 3.6833        | 1.5   | 57   | 1.7044          | 42.6056     | 56.8157 |
| 3.6833        | 1.99  | 76   | 1.2711          | 53.3451     | 70.2979 |
| 3.6833        | 2.5   | 95   | 1.1063          | 62.3239     | 75.7765 |
| 1.5024        | 2.99  | 114  | 1.0275          | 64.2606     | 78.0460 |
| 1.5024        | 3.5   | 133  | 0.9941          | 65.8451     | 79.1313 |
| 1.0028        | 3.99  | 152  | 0.9642          | 67.4296     | 80.6196 |
| 1.0028        | 4.5   | 171  | 0.9682          | 69.0141     | 82.4975 |
| 1.0028        | 4.99  | 190  | 0.9455          | 67.9577     | 81.0386 |
| 0.7765        | 5.5   | 209  | 0.9802          | 67.7817     | 81.0844 |
| 0.7765        | 5.99  | 228  | 0.9402          | 69.3662     | 82.0036 |
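Exact Match and F1 above are SQuAD-style span metrics. Assuming the standard `squad` metric from the `datasets` library (the card does not name the evaluation code), they can be computed as sketched below; ids and answer texts are illustrative.

```python
from datasets import load_metric

# SQuAD-style EM/F1 over predicted answer strings (illustrative data).
metric = load_metric("squad")

predictions = [{"id": "q1", "prediction_text": "Soekarno"}]
references = [
    {"id": "q1", "answers": {"text": ["Soekarno"], "answer_start": [0]}},
]

print(metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```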
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2