Whisper tiny AR - BH

This model is a fine-tuned version of openai/whisper-tiny on the quran-ayat-speech-to-text dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1727
  • Wer: 399.9312
  • Cer: 210.6474
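The reported WER and CER are on a percentage scale; values above 100 are possible whenever the hypothesis contains many more words (or characters) than the reference, since errors are normalized by the reference length. A minimal word-error-rate sketch in plain Python (an illustration only, not the evaluation code used for this card) shows how that happens:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance / reference word count, as a percentage."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming table for the Levenshtein distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

# A hypothesis with many inserted words pushes WER above 100%:
print(wer("one two", "one two three four five six"))  # 200.0
```

Here four insertions against a two-word reference give 4/2 = 200% WER, which is why a model that hallucinates long transcripts can report WER and CER in the hundreds.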

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 64
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 14
  • mixed_precision_training: Native AMP
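As a sketch of how these hyperparameters relate (the dict below mirrors common Hugging Face `Seq2SeqTrainingArguments` parameter names, but is an illustrative plain dict, not the actual training script), the listed total_train_batch_size of 64 is the per-device batch size multiplied by the gradient-accumulation steps:

```python
# Hypothetical config mirroring the hyperparameters listed above
training_config = {
    "learning_rate": 1e-05,
    "per_device_train_batch_size": 16,
    "per_device_eval_batch_size": 16,
    "seed": 42,
    "gradient_accumulation_steps": 4,
    "lr_scheduler_type": "cosine",
    "warmup_steps": 500,
    "num_train_epochs": 14,
    "fp16": True,  # Native AMP mixed precision
}

# Effective (total) train batch size = per-device batch size x accumulation steps
effective_batch = (training_config["per_device_train_batch_size"]
                   * training_config["gradient_accumulation_steps"])
print(effective_batch)  # 64
```

Gradient accumulation lets a single GPU emulate the larger batch of 64 by summing gradients over 4 forward/backward passes before each optimizer step.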

Training results

| Training Loss | Epoch   | Step | Cer      | Validation Loss | Wer      |
|:-------------:|:-------:|:----:|:--------:|:---------------:|:--------:|
| No log        | 0.6349  | 10   | 186.5657 | 9.8260          | 332.2476 |
| No log        | 1.3175  | 20   | 186.5463 | 8.9427          | 332.5733 |
| No log        | 1.9524  | 30   | 179.3050 | 7.4165          | 329.6417 |
| No log        | 2.5714  | 40   | 173.6168 | 4.7760          | 313.5179 |
| 6.244         | 3.1905  | 50   | 173.0538 | 1.3689          | 295.1140 |
| 6.244         | 3.8254  | 60   | 119.1807 | 0.5697          | 184.8534 |
| 6.244         | 4.4444  | 70   | 113.7061 | 0.4699          | 172.4756 |
| 6.244         | 5.3175  | 80   | 126.2085 | 0.4148          | 175.2443 |
| 6.244         | 5.9524  | 90   | 127.4316 | 0.3749          | 230.9446 |
| 0.3668        | 6.5714  | 100  | 146.0299 | 0.3359          | 242.0195 |
| 0.3668        | 7.3175  | 110  | 217.5306 | 0.3022          | 297.0684 |
| 0.3668        | 7.9524  | 120  | 280.9163 | 0.2764          | 312.5407 |
| 0.3668        | 8.6349  | 130  | 232.8286 | 0.2581          | 364.9837 |
| 0.3668        | 9.3175  | 140  | 247.0200 | 0.2431          | 348.0456 |
| 0.2195        | 9.9524  | 150  | 282.1588 | 0.2310          | 503.4202 |
| 0.2195        | 10.6349 | 160  | 274.8204 | 0.2206          | 514.1694 |
| 0.2195        | 11.3175 | 170  | 289.0118 | 0.2122          | 520.3583 |
| 0.2195        | 11.9524 | 180  | 230.2466 | 0.2032          | 469.2182 |
| 0.2195        | 12.6349 | 190  | 193.6711 | 0.1966          | 419.8697 |
| 0.166         | 13.3175 | 200  | 277.9654 | 0.1915          | 570.1954 |
| 0.166         | 13.9524 | 210  | 296.8162 | 0.1847          | 619.8697 |

Framework versions

  • Transformers 4.47.0
  • Pytorch 2.5.1+cu121
  • Datasets 3.2.0
  • Tokenizers 0.21.0