---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-DoniaTrials514true
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# distilroberta-base-DoniaTrials514true

This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6702
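
The card does not state the training objective or data; assuming the checkpoint keeps the masked language modeling head of `distilroberta-base`, it could be tried with the `fill-mask` pipeline. A minimal sketch (the model id below is inferred from the name above and is likely missing a namespace):

```python
from transformers import pipeline

# Hypothetical repository id, inferred from the model name above; replace with the actual path.
fill_mask = pipeline("fill-mask", model="distilroberta-base-DoniaTrials514true")

# RoBERTa-style tokenizers use <mask> as the mask token.
for prediction in fill_mask("Paris is the <mask> of France."):
    print(prediction["token_str"], round(prediction["score"], 4))
```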

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
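
These values map onto a `transformers` `TrainingArguments` configuration roughly as follows. This is a sketch under assumptions, not the original training script: the dataset, evaluation strategy, and output path are placeholders, and the objective is assumed to be masked language modeling.

```python
from datasets import Dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")
model = AutoModelForMaskedLM.from_pretrained("distilroberta-base")

# Placeholder corpus: the actual training/evaluation data is not described in this card.
corpus = Dataset.from_dict({"text": ["An example sentence.", "Another example sentence."]})
tokenized = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

# Values taken from the hyperparameter list above; the Adam betas/epsilon match the defaults.
args = TrainingArguments(
    output_dir="distilroberta-base-DoniaTrials514true",  # placeholder output path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumption, based on the per-epoch validation losses below
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    eval_dataset=tokenized,  # placeholder: reuses the toy corpus for evaluation
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15),
)
trainer.train()
```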

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 39 | 7.8456 |
| No log | 2.0 | 78 | 6.4265 |
| No log | 3.0 | 117 | 5.3856 |
| No log | 4.0 | 156 | 4.5975 |
| No log | 5.0 | 195 | 4.0243 |
| No log | 6.0 | 234 | 3.6660 |
| No log | 7.0 | 273 | 3.4572 |
| No log | 8.0 | 312 | 3.3306 |
| No log | 9.0 | 351 | 3.2438 |
| No log | 10.0 | 390 | 3.1784 |
| No log | 11.0 | 429 | 3.1267 |
| No log | 12.0 | 468 | 3.0829 |
| 4.5187 | 13.0 | 507 | 3.0498 |
| 4.5187 | 14.0 | 546 | 3.0194 |
| 4.5187 | 15.0 | 585 | 2.9938 |
| 4.5187 | 16.0 | 624 | 2.9654 |
| 4.5187 | 17.0 | 663 | 2.9417 |
| 4.5187 | 18.0 | 702 | 2.9198 |
| 4.5187 | 19.0 | 741 | 2.8977 |
| 4.5187 | 20.0 | 780 | 2.8808 |
| 4.5187 | 21.0 | 819 | 2.8610 |
| 4.5187 | 22.0 | 858 | 2.8478 |
| 4.5187 | 23.0 | 897 | 2.8297 |
| 4.5187 | 24.0 | 936 | 2.8192 |
| 4.5187 | 25.0 | 975 | 2.8051 |
| 2.9028 | 26.0 | 1014 | 2.7953 |
| 2.9028 | 27.0 | 1053 | 2.7847 |
| 2.9028 | 28.0 | 1092 | 2.7739 |
| 2.9028 | 29.0 | 1131 | 2.7643 |
| 2.9028 | 30.0 | 1170 | 2.7559 |
| 2.9028 | 31.0 | 1209 | 2.7454 |
| 2.9028 | 32.0 | 1248 | 2.7378 |
| 2.9028 | 33.0 | 1287 | 2.7312 |
| 2.9028 | 34.0 | 1326 | 2.7223 |
| 2.9028 | 35.0 | 1365 | 2.7172 |
| 2.9028 | 36.0 | 1404 | 2.7099 |
| 2.9028 | 37.0 | 1443 | 2.7061 |
| 2.9028 | 38.0 | 1482 | 2.6997 |
| 2.7389 | 39.0 | 1521 | 2.6964 |
| 2.7389 | 40.0 | 1560 | 2.6912 |
| 2.7389 | 41.0 | 1599 | 2.6867 |
| 2.7389 | 42.0 | 1638 | 2.6833 |
| 2.7389 | 43.0 | 1677 | 2.6801 |
| 2.7389 | 44.0 | 1716 | 2.6778 |
| 2.7389 | 45.0 | 1755 | 2.6761 |
| 2.7389 | 46.0 | 1794 | 2.6737 |
| 2.7389 | 47.0 | 1833 | 2.6717 |
| 2.7389 | 48.0 | 1872 | 2.6712 |
| 2.7389 | 49.0 | 1911 | 2.6703 |
| 2.7389 | 50.0 | 1950 | 2.6702 |
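
The card only reports a loss; if it is the mean per-token cross-entropy that `Trainer` logs for masked language modeling (an assumption, since the objective is not stated here), the final validation loss corresponds to a perplexity of about exp(2.6702) ≈ 14.4:

```python
import math

# Assuming the reported loss is the mean per-token masked-LM cross-entropy,
# the final validation loss corresponds to a perplexity of roughly:
print(math.exp(2.6702))  # ~14.44
```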

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2