Whisper Small - Greek (el)
This model is a fine-tuned version of openai/whisper-small on the Greek (el) split of the mozilla-foundation/common_voice_11_0 dataset, for speech transcription in Greek. It achieves the following results:
- Train loss: 0.0615
- Eval WER: 20.2080
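For reference, a minimal transcription sketch using the Transformers pipeline API. The model id `farsipal/whisper-small-el` is this repository; the audio filename is a placeholder, and exact pipeline arguments may vary with the Transformers version:

```python
from transformers import pipeline

# Load this fine-tuned checkpoint as an automatic-speech-recognition pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="farsipal/whisper-small-el",
)

# Transcribe a Greek audio clip (placeholder path). Chunking lets the
# pipeline handle audio longer than Whisper's 30-second window.
result = asr("sample_el.wav", chunk_length_s=30)
print(result["text"])
```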
Training results
Upon completion of training, the best model was reloaded and evaluated; the results below are extracted from the stdout log:
Loading best model from ./whisper-small-el/checkpoint-5000 (score: 20.208023774145616).
TrainOutput(global_step=5000,
            training_loss=0.06146362095708027,
            metrics={'train_runtime': 73232.697,
                     'train_samples_per_second': 4.37,
                     'train_steps_per_second': 0.068,
                     'train_loss': 0.06146362095708027,
                     'epoch': 94.34})
Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0
- Datasets 2.7.1.dev0
- Tokenizers 0.12.1
Evaluation results
- WER on the mozilla-foundation/common_voice_11_0 el test set (self-reported): 25.697
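The WER figures above are word error rate: word-level edit distance between hypothesis and reference, divided by the number of reference words. A minimal sketch of the metric (not the evaluation script used for this card, which presumably relied on standard tooling such as the `evaluate` library):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(substitution, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution in a three-word reference -> WER of 1/3 (~33.3)
print(wer("the cat sat", "the cat sit"))
```

A reported WER of 20.2080 therefore means roughly one word error per five reference words.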