---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: openai/whisper-tiny
    results: []
---

# openai/whisper-tiny

This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Hanhpt23/ChineseMed dataset. It achieves the following results on the evaluation set:

- Loss: 4.7716
- WER: 115.5556 (see the note below on interpreting WER)
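
WER here is reported as a percentage, and values above 100 are possible: the metric counts substitutions, insertions, and deletions against the number of reference words, so heavily garbled hypotheses can exceed 100. A minimal sketch of reproducing the metric with the `evaluate` library (the transcripts below are placeholders, not outputs of this model):

```python
import evaluate

# Load the word error rate metric from the Hugging Face evaluate library.
wer_metric = evaluate.load("wer")

# Placeholder transcripts; substitute real model outputs and references.
predictions = ["the quick brown fox"]
references = ["the quick brown fox jumps"]

# compute() returns WER as a fraction; multiply by 100 to match the
# percentage figures reported in this card.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```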

## Model description

More information needed

## Intended uses & limitations

More information needed
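
The upstream card leaves this section blank. As a minimal sketch, the checkpoint can be loaded for speech transcription with the `transformers` ASR pipeline; the repo id below is hypothetical and should be replaced with the actual path of this checkpoint:

```python
from transformers import pipeline

# Hypothetical repo id; replace with the actual path of this checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="Hanhpt23/whisper-tiny-chinesemed",
)

# Transcribe a local audio file (Whisper expects 16 kHz mono input).
result = asr("sample.wav")
print(result["text"])
```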

## Training and evaluation data

More information needed
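
The card names Hanhpt23/ChineseMed as the fine-tuning data. Assuming the dataset is hosted on the Hugging Face Hub under that id (an assumption; its splits and features are not documented here), it could be inspected with the `datasets` library:

```python
from datasets import load_dataset

# Assumption: the dataset id matches the name given in this card.
ds = load_dataset("Hanhpt23/ChineseMed")
print(ds)  # shows the available splits and features
```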

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (the sketch after this list shows them as the equivalent `Seq2SeqTrainingArguments`):

- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
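
A minimal sketch of these settings expressed as `transformers.Seq2SeqTrainingArguments`; the `output_dir` is a placeholder, and any option not listed above is left at the library default (the Adam betas and epsilon shown are those defaults, written out to match the list):

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the reported hyperparameters; output_dir is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-chinesemed",  # assumption: placeholder path
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,        # library default, matches betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,     # library default, matches epsilon=1e-08
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=20,
)
```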

### Training results

| Training Loss | Epoch | Step | Validation Loss | WER      |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2347        | 1.0   | 371  | 3.0537          | 111.1111 |
| 1.7551        | 2.0   | 742  | 3.0133          | 105.5556 |
| 1.3384        | 3.0   | 1113 | 3.2446          | 107.7778 |
| 0.7899        | 4.0   | 1484 | 3.5971          | 112.2222 |
| 0.424         | 5.0   | 1855 | 3.8711          | 117.7778 |
| 0.179         | 6.0   | 2226 | 4.0705          | 137.7778 |
| 0.0953        | 7.0   | 2597 | 4.2723          | 112.2222 |
| 0.0628        | 8.0   | 2968 | 4.4901          | 116.6667 |
| 0.0386        | 9.0   | 3339 | 4.3978          | 113.3333 |
| 0.0299        | 10.0  | 3710 | 4.5975          | 113.3333 |
| 0.0198        | 11.0  | 4081 | 4.6376          | 108.8889 |
| 0.0074        | 12.0  | 4452 | 4.6874          | 112.2222 |
| 0.0046        | 13.0  | 4823 | 4.6807          | 110.0000 |
| 0.0006        | 14.0  | 5194 | 4.7271          | 117.7778 |
| 0.0052        | 15.0  | 5565 | 4.7211          | 111.1111 |
| 0.0017        | 16.0  | 5936 | 4.7438          | 112.2222 |
| 0.0003        | 17.0  | 6307 | 4.7391          | 120.0    |
| 0.0002        | 18.0  | 6678 | 4.7585          | 120.0    |
| 0.0002        | 19.0  | 7049 | 4.7621          | 114.4444 |
| 0.0002        | 20.0  | 7420 | 4.7716          | 115.5556 |

### Framework versions

- Transformers 4.41.1
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1