This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on an unspecified dataset. It achieves the following results on the evaluation set (values from the final training epoch):

- Loss: 0.5524
- Accuracy: 0.8427
## Model description

More information needed

## Intended uses & limitations

More information needed
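The card does not document intended uses, but the accuracy metric suggests a sequence-classification head. Below is a minimal inference sketch; the repo id `your-username/bert-base-multilingual-cased-finetuned` is hypothetical, since the card does not give the actual model id.

```python
# A minimal inference sketch, assuming the fine-tuned checkpoint is a
# sequence-classification model published under a hypothetical repo id.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "your-username/bert-base-multilingual-cased-finetuned"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Ceci est un exemple multilingue.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the predicted class index to a label name if the config provides one.
predicted_class = logits.argmax(dim=-1).item()
print(model.config.id2label.get(predicted_class, predicted_class))
```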
## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9438        | 1.0   | 51   | 0.7768          | 0.7833   |
| 0.6247        | 2.0   | 102  | 0.5979          | 0.8271   |
| 0.5404        | 3.0   | 153  | 0.5524          | 0.8427   |
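The card does not list the hyperparameter values, so the sketch below is illustrative only: a minimal `Trainer` fine-tuning setup that would produce a results table like the one above, assuming a text-classification dataset with `text` and `label` columns (the files `train.csv` and `valid.csv` are hypothetical stand-ins for the undocumented training data).

```python
# A minimal fine-tuning sketch, not the exact recipe behind this card: the real
# hyperparameters are not documented and the dataset files here are hypothetical.
import numpy as np
import evaluate
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    DataCollatorWithPadding,
    Trainer,
    TrainingArguments,
)

base_model = "google-bert/bert-base-multilingual-cased"

# Hypothetical CSV files with "text" and "label" columns.
dataset = load_dataset("csv", data_files={"train": "train.csv", "validation": "valid.csv"})

tokenizer = AutoTokenizer.from_pretrained(base_model)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)

num_labels = len(set(tokenized["train"]["label"]))
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=num_labels)

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return accuracy.compute(predictions=np.argmax(logits, axis=-1), references=labels)

args = TrainingArguments(
    output_dir="mbert-finetuned",
    eval_strategy="epoch",           # evaluate once per epoch, as in the table above
                                     # (named evaluation_strategy in older transformers)
    num_train_epochs=3,              # the table shows 3 epochs
    per_device_train_batch_size=16,  # illustrative; the actual batch size is not documented
    learning_rate=2e-5,              # illustrative; the actual learning rate is not documented
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorWithPadding(tokenizer=tokenizer),
    compute_metrics=compute_metrics,
)
trainer.train()
```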
Base model: [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased)