# camembert-sentiment-allocine
This model is a fine-tuned version of camembert-base on the allocine dataset.
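A minimal usage sketch with the `transformers` pipeline is shown below. The Hub model id is an assumption (replace it with the actual repository name), and the returned labels may appear as `LABEL_0`/`LABEL_1` if `id2label` was not customized in the config:

```python
from transformers import pipeline

# Hypothetical Hub id for this checkpoint; adjust to the actual repository name.
sentiment = pipeline(
    "text-classification",
    model="camembert-sentiment-allocine",
    framework="tf",
)

print(sentiment("Un film magnifique, je le recommande !"))
# Labels may be LABEL_0 / LABEL_1 if id2label was not set in the config.
```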
## Intended uses & limitations
This model has been trained for a single epoch for testing purposes.
## Training procedure
This model was created by fine-tuning the TensorFlow version of camembert-base with the encoder frozen:

`model.roberta.trainable = False`

As a result, only the parameters of the classification head were updated during training; a minimal sketch of this setup follows.
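The sketch below assumes the standard `transformers` TensorFlow classes and a binary sentiment head; the loss and the plain Adam learning rate are placeholders (the actual schedule is listed under the hyperparameters), so treat it as an illustration rather than the exact training script:

```python
import tensorflow as tf
from transformers import CamembertTokenizer, TFCamembertForSequenceClassification

# Load the base checkpoint with a fresh 2-label classification head.
tokenizer = CamembertTokenizer.from_pretrained("camembert-base")
model = TFCamembertForSequenceClassification.from_pretrained(
    "camembert-base", num_labels=2
)

# Freeze the encoder so that only the classification head is trainable.
model.roberta.trainable = False

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.summary()  # trainable parameters are now limited to the classifier head
```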
### Training hyperparameters
The following hyperparameters were used during training (see the code sketch after this list):
- optimizer: Adam
  - learning_rate: PolynomialDecay (initial_learning_rate: 5e-05, decay_steps: 15000, end_learning_rate: 0.0, power: 1.0, cycle: False, name: None)
  - decay: 0.0
  - beta_1: 0.9
  - beta_2: 0.999
  - epsilon: 1e-07
  - amsgrad: False
- training_precision: float32
- epochs: 1
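For reference, the optimizer configuration above corresponds to the following Keras objects. This is a reconstruction of the listed settings, not the original training script:

```python
import tensorflow as tf

# Linear (power=1.0) decay from 5e-5 to 0 over 15,000 steps, as listed above.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=5e-05,
    decay_steps=15000,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
```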
### Training results
The model achieves the following results on the test set:
| Accuracy |
|:--------:|
| 0.918    |
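A sketch of how this figure could be reproduced on the allocine test split is given below. The checkpoint name, batch size, and maximum sequence length are assumptions:

```python
import numpy as np
from datasets import load_dataset
from transformers import CamembertTokenizer, TFCamembertForSequenceClassification

# Hypothetical checkpoint name; replace with the actual repository id.
checkpoint = "camembert-sentiment-allocine"
tokenizer = CamembertTokenizer.from_pretrained(checkpoint)
model = TFCamembertForSequenceClassification.from_pretrained(checkpoint)

# allocine exposes "review" (text) and "label" (0 = negative, 1 = positive).
test = load_dataset("allocine", split="test")
encodings = tokenizer(
    test["review"], truncation=True, padding=True, max_length=128, return_tensors="np"
)

logits = model.predict(dict(encodings), batch_size=32)["logits"]
accuracy = float((np.argmax(logits, axis=-1) == np.array(test["label"])).mean())
print(f"Test accuracy: {accuracy:.3f}")
```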
### Framework versions
- Transformers 4.22.2
- TensorFlow 2.8.2
- Datasets 2.5.2
- Tokenizers 0.12.1