---
license: apache-2.0
base_model: openai/whisper-small
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: openai/whisper-small
    results: []
---

# openai/whisper-small

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Hanhpt23/MultiMed dataset. It achieves the following results on the evaluation set (a usage sketch follows the metrics list):

- Loss: 1.2205
- Wer: 20.9109
- Cer: 14.5195
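
For quick evaluation, the checkpoint can be loaded through the `transformers` ASR pipeline. This is a minimal sketch; the repository id below is a hypothetical placeholder, since the card does not state the final checkpoint path:

```python
# Minimal transcription sketch using the transformers ASR pipeline.
# The repo id is a hypothetical placeholder; substitute the actual path.
from transformers import pipeline

MODEL_ID = "Hanhpt23/whisper-small-multimed"  # assumed id, replace as needed

asr = pipeline(
    "automatic-speech-recognition",
    model=MODEL_ID,
    chunk_length_s=30,  # Whisper processes audio in 30-second windows
)

result = asr("sample.wav")
print(result["text"])
```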

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (mirrored in the configuration sketch after this list):

- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
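
As a rough illustration, the hyperparameters above map onto `transformers`' `Seq2SeqTrainingArguments` as sketched below. This is not the exact training script; only the values listed above come from the card, and anything else (such as `output_dir`) is an assumption:

```python
# Rough reconstruction of the training configuration from the listed
# hyperparameters; output_dir and omitted arguments are assumptions.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-multimed",  # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=20,
)
```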

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer     | Cer     |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.6311        | 1.0   | 4626  | 0.7523          | 29.1582 | 21.4782 |
| 0.4966        | 2.0   | 9252  | 0.7386          | 30.4497 | 22.0004 |
| 0.2482        | 3.0   | 13878 | 0.7919          | 24.2525 | 17.3460 |
| 0.1989        | 4.0   | 18504 | 0.8427          | 25.2392 | 17.5838 |
| 0.1419        | 5.0   | 23130 | 0.9057          | 23.5514 | 16.6085 |
| 0.0828        | 6.0   | 27756 | 0.9730          | 22.8122 | 15.8105 |
| 0.0609        | 7.0   | 32382 | 1.0176          | 22.9715 | 16.0236 |
| 0.0452        | 8.0   | 37008 | 1.0531          | 23.1268 | 16.1591 |
| 0.0354        | 9.0   | 41634 | 1.1008          | 23.0799 | 16.0754 |
| 0.0228        | 10.0  | 46260 | 1.1070          | 22.0164 | 15.3787 |
| 0.02          | 11.0  | 50886 | 1.1342          | 22.4321 | 15.5082 |
| 0.0126        | 12.0  | 55512 | 1.1520          | 22.0949 | 15.3324 |
| 0.0062        | 13.0  | 60138 | 1.1820          | 22.0076 | 15.3419 |
| 0.0045        | 14.0  | 64764 | 1.1860          | 21.7019 | 15.1221 |
| 0.0025        | 15.0  | 69390 | 1.2004          | 21.2894 | 14.7874 |
| 0.0058        | 16.0  | 74016 | 1.1949          | 21.6711 | 15.1412 |
| 0.0005        | 17.0  | 78642 | 1.1967          | 21.4148 | 14.8903 |
| 0.0001        | 18.0  | 83268 | 1.2102          | 21.1438 | 14.6915 |
| 0.0001        | 19.0  | 87894 | 1.2192          | 21.0654 | 14.6546 |
| 0.0001        | 20.0  | 92520 | 1.2205          | 20.9109 | 14.5195 |
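
The Wer and Cer columns are percentages; scores like these are commonly computed with the Hugging Face `evaluate` library. A small sketch with placeholder strings:

```python
# Sketch of how WER/CER scores like those in the table are typically
# computed with the `evaluate` library; the strings are placeholders.
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["the patient reports mild chest pain"]  # model output (placeholder)
references = ["the patient reported mild chest pain"]  # ground truth (placeholder)

# Both metrics return a fraction; multiply by 100 to match the table above.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
cer = 100 * cer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}  CER: {cer:.4f}")
```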

### Framework versions

- Transformers 4.41.1
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1