---
base_model: cardiffnlp/twitter-xlm-roberta-base-sentiment
metrics:
- precision
- recall
- f1
- accuracy
tags:
- generated_from_trainer
model-index:
- name: fineTuningXLMRoberta-TokenClassification-Spacy
  results: []
---

# fineTuningXLMRoberta-TokenClassification-Spacy

This model is a fine-tuned version of [cardiffnlp/twitter-xlm-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base-sentiment) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8479
- Precision: 0.2076
- Recall: 0.2102
- F1: 0.2089
- Accuracy: 0.6718

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 31   | 0.7433          | 0.2164    | 0.1421 | 0.1716 | 0.6557   |
| No log        | 2.0   | 62   | 0.7177          | 0.2275    | 0.1848 | 0.2039 | 0.6727   |
| No log        | 3.0   | 93   | 0.7054          | 0.1719    | 0.1949 | 0.1827 | 0.6637   |
| No log        | 4.0   | 124  | 0.7148          | 0.1823    | 0.1919 | 0.1869 | 0.6628   |
| No log        | 5.0   | 155  | 0.7018          | 0.2063    | 0.2061 | 0.2062 | 0.6853   |
| No log        | 6.0   | 186  | 0.7310          | 0.1866    | 0.1919 | 0.1892 | 0.6711   |
| No log        | 7.0   | 217  | 0.7272          | 0.2150    | 0.2071 | 0.2110 | 0.6897   |
| No log        | 8.0   | 248  | 0.7878          | 0.1758    | 0.1848 | 0.1802 | 0.6582   |
| No log        | 9.0   | 279  | 0.7727          | 0.2080    | 0.2071 | 0.2075 | 0.6814   |
| No log        | 10.0  | 310  | 0.8099          | 0.1969    | 0.1959 | 0.1964 | 0.6688   |
| No log        | 11.0  | 341  | 0.8119          | 0.2062    | 0.2030 | 0.2046 | 0.6766   |
| No log        | 12.0  | 372  | 0.8227          | 0.2105    | 0.2112 | 0.2108 | 0.6770   |
| No log        | 13.0  | 403  | 0.8300          | 0.2008    | 0.2051 | 0.2029 | 0.6744   |
| No log        | 14.0  | 434  | 0.8409          | 0.2064    | 0.2081 | 0.2073 | 0.6739   |
| No log        | 15.0  | 465  | 0.8479          | 0.2076    | 0.2102 | 0.2089 | 0.6718   |

### Framework versions

- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
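
## How to use

A minimal inference sketch with 🤗 Transformers. The repository ID below is a placeholder (the publishing namespace is not stated in this card), and the entity labels returned depend on the undocumented training data.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

# Placeholder repo ID: replace with the namespace this model is actually published under.
model_id = "your-username/fineTuningXLMRoberta-TokenClassification-Spacy"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# aggregation_strategy="simple" merges sub-word predictions into word-level spans.
token_classifier = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
)

print(token_classifier("Replace this with a sentence in a language the model was trained on."))
```

The hyperparameters listed above map onto a standard `Trainer` configuration; the following is a hedged sketch of the equivalent `TrainingArguments`, where the output directory and the per-epoch evaluation schedule are assumptions inferred from this card rather than documented settings.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="fineTuningXLMRoberta-TokenClassification-Spacy",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08, as listed above (the Trainer defaults).
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=15,
    eval_strategy="epoch",  # inferred: the results table reports one validation pass per epoch
)
```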