---
license: mit
base_model: FacebookAI/roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: absa-train-service-roberta-large
  results: []
---

[Visualize in Weights & Biases](https://wandb.ai/cunho2803032003/absa-1721959498.2993438/runs/tad25dun)

[Visualize in Weights & Biases](https://wandb.ai/cunho2803032003/absa-1721959940.7872202/runs/bsprskdy)

# absa-train-service-roberta-large

This model is a fine-tuned version of [FacebookAI/roberta-large](https://huggingface.co/FacebookAI/roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8683
- Accuracy: 0.7424
- Precision: 0.7345
- Recall: 0.7367
- F1: 0.7302

## Model description

More information needed

## Intended uses & limitations

More information needed. A hedged usage sketch, with its assumptions called out, appears at the end of this card.

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch that reproduces them appears at the end of this card):
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 2.2255        | 1.0   | 469  | 2.0677          | 0.3296   | 0.1937    | 0.3250 | 0.2297 |
| 1.8236        | 2.0   | 938  | 1.7061          | 0.5040   | 0.5413    | 0.4914 | 0.4567 |
| 1.5384        | 3.0   | 1407 | 1.4381          | 0.5520   | 0.5944    | 0.5549 | 0.5196 |
| 1.4301        | 4.0   | 1876 | 1.3316          | 0.5984   | 0.6000    | 0.5990 | 0.5618 |
| 1.3776        | 5.0   | 2345 | 1.1645          | 0.6576   | 0.6817    | 0.6491 | 0.6332 |
| 1.2078        | 6.0   | 2814 | 1.0967          | 0.6448   | 0.7035    | 0.6348 | 0.6110 |
| 1.2535        | 7.0   | 3283 | 1.0565          | 0.7008   | 0.7467    | 0.6967 | 0.7066 |
| 1.2921        | 8.0   | 3752 | 1.0049          | 0.6976   | 0.7013    | 0.6884 | 0.6813 |
| 1.1780        | 9.0   | 4221 | 1.0438          | 0.6480   | 0.7746    | 0.6423 | 0.6387 |
| 1.2324        | 10.0  | 4690 | 1.0203          | 0.6896   | 0.7096    | 0.6831 | 0.6704 |
| 1.1899        | 11.0  | 5159 | 1.0193          | 0.6864   | 0.7391    | 0.6819 | 0.6834 |
| 1.1515        | 12.0  | 5628 | 0.9722          | 0.6944   | 0.7164    | 0.6924 | 0.6860 |
| 1.1604        | 13.0  | 6097 | 0.9372          | 0.7312   | 0.7543    | 0.7311 | 0.7259 |
| 1.1229        | 14.0  | 6566 | 0.9265          | 0.7200   | 0.7278    | 0.7139 | 0.7147 |
| 1.1459        | 15.0  | 7035 | 0.8896          | 0.7376   | 0.7264    | 0.7323 | 0.7183 |
| 1.1281        | 16.0  | 7504 | 0.9074          | 0.7152   | 0.7107    | 0.7087 | 0.7012 |
| 1.1794        | 17.0  | 7973 | 0.8914          | 0.7424   | 0.7293    | 0.7354 | 0.7266 |
| 1.1101        | 18.0  | 8442 | 0.8707          | 0.7216   | 0.7161    | 0.7141 | 0.7059 |
| 1.1215        | 19.0  | 8911 | 0.8656          | 0.7408   | 0.7322    | 0.7348 | 0.7274 |
| 1.0483        | 20.0  | 9380 | 0.8683          | 0.7424   | 0.7345    | 0.7367 | 0.7302 |

### Framework versions

- Transformers 4.43.2
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
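
## Usage sketch

The card does not document the task format or label set, so the following is a minimal loading sketch, not a confirmed recipe. It assumes a sequence-classification head (consistent with the reported accuracy/precision/recall/F1) and that, as is common for ABSA checkpoints, the sentence and aspect term are fed as a pair; the hub id, example text, and aspect are all hypothetical.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Hypothetical hub id -- replace with the actual repository path for this model.
model_id = "absa-train-service-roberta-large"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Assumption: the checkpoint was trained on (sentence, aspect) pairs.
text = "The battery lasts all day, but the screen scratches easily."
aspect = "battery"
inputs = tokenizer(text, aspect, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

pred_id = int(logits.argmax(dim=-1))
# id2label comes from the checkpoint config; the card does not document the labels.
print(model.config.id2label.get(pred_id, str(pred_id)))
```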
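
## Training reproduction sketch

The hyperparameters listed above map directly onto `transformers.TrainingArguments` (the Trainer's default AdamW already uses betas=(0.9, 0.999) and epsilon=1e-08). The sketch below is an assumption-laden reconstruction, not the author's script: the real dataset is undocumented, so a two-example dummy dataset stands in, the label count is a guess, and macro averaging in `compute_metrics` is assumed since the card does not state how precision/recall/F1 were aggregated.

```python
from datasets import Dataset
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("FacebookAI/roberta-large")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64)

# Dummy stand-in so the sketch runs end to end; the real data is undocumented.
dummy = Dataset.from_dict(
    {"text": ["the battery is great", "the screen is awful"], "label": [1, 0]}
).map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "FacebookAI/roberta-large",
    num_labels=2,  # assumption: the true label count is not documented
)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = logits.argmax(axis=-1)
    # Macro averaging is an assumption, not documented on the card.
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="macro", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }

args = TrainingArguments(
    output_dir="absa-train-service-roberta-large",
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=20,
    eval_strategy="epoch",  # the card reports one validation pass per epoch
    report_to="none",  # the original runs logged to W&B (links above); disabled here
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dummy,
    eval_dataset=dummy,
    compute_metrics=compute_metrics,
)
trainer.train()
```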