arabert_no_augmentation_organization_task3_fold0

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 1.1570
  • QWK: 0.0530
  • MSE: 1.1570
  • RMSE: 1.0757
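The evaluation metrics can be reproduced from a set of integer labels and predictions. Below is a minimal sketch (not the card's original evaluation code) of quadratic weighted kappa (QWK), MSE, and RMSE using NumPy; the toy labels `y_true`/`y_pred` are illustrative only.

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """QWK: agreement between labels and predictions, penalising
    disagreements by the squared distance between classes."""
    O = np.zeros((n_classes, n_classes))  # observed confusion matrix
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    # quadratic weight matrix: (i - j)^2 / (N - 1)^2
    w = np.array([[(i - j) ** 2 for j in range(n_classes)]
                  for i in range(n_classes)], dtype=float)
    w /= (n_classes - 1) ** 2
    # expected confusion matrix under chance agreement
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    return 1.0 - (w * O).sum() / (w * E).sum()

# Toy example (illustrative values, not the model's outputs)
y_true = np.array([0, 1, 2, 2, 1])
y_pred = np.array([0, 1, 1, 2, 0])

qwk = quadratic_weighted_kappa(y_true, y_pred, n_classes=3)
mse = float(np.mean((y_true - y_pred) ** 2))
rmse = float(np.sqrt(mse))
```

Note that for this model MSE equals the reported loss, which suggests the task was trained as a regression over score labels.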

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
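With `lr_scheduler_type: linear` and no warmup specified, the learning rate decays linearly from 2e-05 to 0 over training. A minimal pure-Python sketch of that schedule (assuming zero warmup steps and the 60 total optimizer steps visible in the results table):

```python
def linear_lr(step, total_steps, base_lr=2e-05):
    """Linearly decay the learning rate from base_lr at step 0 to 0
    at the final step (no warmup assumed)."""
    return base_lr * max(0.0, (total_steps - step) / total_steps)

total_steps = 60  # 10 epochs x 6 optimizer steps per epoch
lrs = [linear_lr(s, total_steps) for s in range(total_steps + 1)]
```

This mirrors the behaviour of `get_linear_schedule_with_warmup` in Transformers when `num_warmup_steps=0`.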

Training results

| Training Loss | Epoch | Step | Validation Loss | QWK | MSE | RMSE |
|:-------------:|:-----:|:----:|:---------------:|:---:|:---:|:----:|
| No log | 0.3333 | 2 | 5.0689 | -0.0476 | 5.0689 | 2.2514 |
| No log | 0.6667 | 4 | 1.7860 | -0.0722 | 1.7860 | 1.3364 |
| No log | 1.0 | 6 | 1.1691 | -0.0732 | 1.1691 | 1.0813 |
| No log | 1.3333 | 8 | 1.1079 | -0.1579 | 1.1079 | 1.0526 |
| No log | 1.6667 | 10 | 1.1020 | -0.0565 | 1.1020 | 1.0498 |
| No log | 2.0 | 12 | 1.5769 | -0.0942 | 1.5769 | 1.2558 |
| No log | 2.3333 | 14 | 1.4104 | 0.0 | 1.4104 | 1.1876 |
| No log | 2.6667 | 16 | 1.0770 | -0.0565 | 1.0770 | 1.0378 |
| No log | 3.0 | 18 | 0.9699 | -0.1159 | 0.9699 | 0.9848 |
| No log | 3.3333 | 20 | 0.9699 | -0.3200 | 0.9699 | 0.9848 |
| No log | 3.6667 | 22 | 1.0176 | 0.2080 | 1.0176 | 1.0087 |
| No log | 4.0 | 24 | 1.0147 | 0.2080 | 1.0147 | 1.0073 |
| No log | 4.3333 | 26 | 0.9831 | -0.1440 | 0.9831 | 0.9915 |
| No log | 4.6667 | 28 | 0.9729 | -0.1440 | 0.9729 | 0.9864 |
| No log | 5.0 | 30 | 0.9957 | 0.0610 | 0.9957 | 0.9979 |
| No log | 5.3333 | 32 | 1.0175 | 0.0610 | 1.0175 | 1.0087 |
| No log | 5.6667 | 34 | 1.0829 | -0.1440 | 1.0829 | 1.0406 |
| No log | 6.0 | 36 | 1.1247 | -0.1440 | 1.1247 | 1.0605 |
| No log | 6.3333 | 38 | 1.1230 | -0.1440 | 1.1230 | 1.0597 |
| No log | 6.6667 | 40 | 1.1027 | 0.0435 | 1.1027 | 1.0501 |
| No log | 7.0 | 42 | 1.1089 | 0.0435 | 1.1089 | 1.0530 |
| No log | 7.3333 | 44 | 1.1313 | 0.0435 | 1.1313 | 1.0636 |
| No log | 7.6667 | 46 | 1.1576 | -0.2384 | 1.1576 | 1.0759 |
| No log | 8.0 | 48 | 1.2109 | -0.1440 | 1.2109 | 1.1004 |
| No log | 8.3333 | 50 | 1.2319 | -0.1440 | 1.2319 | 1.1099 |
| No log | 8.6667 | 52 | 1.2094 | -0.1440 | 1.2094 | 1.0997 |
| No log | 9.0 | 54 | 1.1764 | 0.0530 | 1.1764 | 1.0846 |
| No log | 9.3333 | 56 | 1.1550 | 0.0530 | 1.1550 | 1.0747 |
| No log | 9.6667 | 58 | 1.1531 | 0.0530 | 1.1531 | 1.0738 |
| No log | 10.0 | 60 | 1.1570 | 0.0530 | 1.1570 | 1.0757 |

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.4.0+cu118
  • Datasets 2.21.0
  • Tokenizers 0.19.1