
scenario-kd-pre-ner-full-xlmr_data-univner_en55

This model is a fine-tuned version of FacebookAI/xlm-roberta-base. It achieves the following results on the evaluation set:

  • Loss: 54.7950
  • Precision: 0.7503
  • Recall: 0.7526
  • F1: 0.7514
  • Accuracy: 0.9804
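
The model name and the precision/recall/F1 metrics above suggest a token-classification (NER) head, although the pipeline type is not recorded in this card. A minimal inference sketch under that assumption, using the standard Transformers API (labels come from whatever is stored in the checkpoint config):

```python
# Assumed usage: load the checkpoint as a token-classification model.
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model_id = "haryoaw/scenario-kd-pre-ner-full-xlmr_data-univner_en55"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",  # merge word-piece predictions into entity spans
)

print(ner("Barack Obama visited Jakarta in 2010."))
```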

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 3e-05
  • train_batch_size: 8
  • eval_batch_size: 32
  • seed: 55
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
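
A sketch of a TrainingArguments configuration that mirrors the hyperparameters above, assuming the run used the Hugging Face Trainer; this is not the original training script, and the model, datasets, and collator below are placeholders:

```python
from transformers import TrainingArguments, Trainer

training_args = TrainingArguments(
    output_dir="scenario-kd-pre-ner-full-xlmr_data-univner_en55",
    learning_rate=3e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,   # effective train batch size: 8 * 4 = 32
    num_train_epochs=10,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=55,
)

# trainer = Trainer(
#     model=model,                  # token-classification model (placeholder)
#     args=training_args,
#     train_dataset=train_dataset,  # placeholder
#     eval_dataset=eval_dataset,    # placeholder
#     data_collator=data_collator,  # placeholder
# )
# trainer.train()
```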

Training results

| Training Loss | Epoch  | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 118.0158      | 1.2755 | 500  | 81.6998         | 0.7203    | 0.6398 | 0.6776 | 0.9757   |
| 69.7238       | 2.5510 | 1000 | 66.5568         | 0.7159    | 0.7277 | 0.7218 | 0.9783   |
| 60.1647       | 3.8265 | 1500 | 61.3497         | 0.7533    | 0.7112 | 0.7316 | 0.9795   |
| 55.862        | 5.1020 | 2000 | 58.5174         | 0.7655    | 0.7298 | 0.7472 | 0.9810   |
| 53.2325       | 6.3776 | 2500 | 56.8604         | 0.7608    | 0.7277 | 0.7439 | 0.9802   |
| 51.644        | 7.6531 | 3000 | 55.4373         | 0.7553    | 0.7443 | 0.7497 | 0.9808   |
| 50.5439       | 8.9286 | 3500 | 54.7950         | 0.7503    | 0.7526 | 0.7514 | 0.9804   |
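
For NER model cards like this one, precision, recall, and F1 are typically entity-level scores and accuracy is token-level. A sketch of how such metrics are commonly computed with seqeval (an assumption; the original evaluation code is not included in this card):

```python
# Entity-level precision/recall/F1 and token-level accuracy with seqeval.
from seqeval.metrics import precision_score, recall_score, f1_score, accuracy_score

# Gold and predicted tag sequences, one list per sentence (placeholder data).
y_true = [["B-PER", "I-PER", "O", "B-LOC"]]
y_pred = [["B-PER", "I-PER", "O", "B-LOC"]]

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("f1:       ", f1_score(y_true, y_pred))
print("accuracy: ", accuracy_score(y_true, y_pred))  # token-level accuracy
```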

Framework versions

  • Transformers 4.44.2
  • PyTorch 2.1.1+cu121
  • Datasets 2.14.5
  • Tokenizers 0.19.1