---
language:
- ga
- en
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- ymoslem/IWSLT2023-GA-EN
- ymoslem/FLEURS-GA-EN
- ymoslem/BitesizeIrish-GA-EN
- ymoslem/SpokenWords-GA-EN-MTed
metrics:
- bleu
- wer
model-index:
- name: Whisper Medium GA-EN Speech Translation
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: IWSLT-2023, FLEURS, BiteSize, and SpokenWords
      type: ymoslem/IWSLT2023-GA-EN
    metrics:
    - name: Bleu
      type: bleu
      value: 27.06
    - name: Wer
      type: wer
      value: 73.4804142278253
---

# Whisper Medium GA-EN Speech Translation

This model is a fine-tuned version of openai/whisper-small on the IWSLT-2023, FLEURS, BiteSize, and SpokenWords datasets. It achieves the following results on the evaluation set:
- Loss: 1.2998
- Bleu: 27.06
- Chrf: 47.61
- Wer: 73.4804
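
A minimal usage sketch (not part of the original card) is shown below, assuming the checkpoint is loaded through the `transformers` speech-recognition pipeline; the checkpoint path and audio file name are placeholders:

```python
# Hedged usage sketch: load the fine-tuned checkpoint with the transformers ASR
# pipeline. "path/to/this-checkpoint" and "sample_ga.wav" are placeholders, not
# identifiers taken from this card.
from transformers import pipeline

translator = pipeline(
    task="automatic-speech-recognition",  # task type declared in the model-index
    model="path/to/this-checkpoint",      # replace with this repo id or a local path
    device=-1,                            # -1 = CPU; use 0 for the first GPU
)

# The model is fine-tuned to produce English text for Irish (ga) speech, so the
# returned "transcription" is expected to be a translation.
result = translator("sample_ga.wav")      # Whisper expects 16 kHz mono audio
print(result["text"])
```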

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
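
Pending a fuller description, the four corpora listed in the metadata can be inspected with the `datasets` library. How they were filtered, split, and combined for training is not documented here, so the sketch below only downloads them and prints their splits:

```python
# Hedged sketch: download the corpora named in this card's metadata and list
# their splits and columns. No training-specific preprocessing is implied.
from datasets import load_dataset

dataset_ids = [
    "ymoslem/IWSLT2023-GA-EN",
    "ymoslem/FLEURS-GA-EN",
    "ymoslem/BitesizeIrish-GA-EN",
    "ymoslem/SpokenWords-GA-EN-MTed",
]

for ds_id in dataset_ids:
    ds = load_dataset(ds_id)  # returns a DatasetDict with whatever splits exist
    print(ds_id)
    print(ds)
```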

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.03
- training_steps: 2000
- mixed_precision_training: Native AMP
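
These settings roughly correspond to the `Seq2SeqTrainingArguments` sketch below (Transformers 4.39). The output directory, evaluation cadence, and the interpretation of the fractional warmup value are assumptions rather than values taken from the original training script:

```python
# Hedged reconstruction of the hyperparameters above as Seq2SeqTrainingArguments.
# output_dir, evaluation_strategy/eval_steps, and warmup_ratio are assumptions;
# the card lists "lr_scheduler_warmup_steps: 0.03", which is treated here as a ratio.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-ga-en",    # hypothetical output directory
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # total train batch size: 8 * 2 = 16
    adam_beta1=0.9,                  # Adam with betas=(0.9, 0.999), epsilon=1e-08
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.03,               # assumption, see note above
    max_steps=2000,
    fp16=True,                       # "Native AMP" mixed-precision training
    evaluation_strategy="steps",     # assumption: the results table reports
    eval_steps=100,                  # metrics every 100 steps
    predict_with_generate=True,      # needed to compute BLEU/ChrF/WER during eval
)
```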

### Training results

| Training Loss | Epoch | Step | Bleu  | Chrf  | Validation Loss | Wer      |
|:-------------:|:-----:|:----:|:-----:|:-----:|:---------------:|:--------:|
| 2.5227        | 0.05  | 100  | 1.05  | 12.82 | 2.4253          | 343.2238 |
| 2.4775        | 0.11  | 200  | 10.04 | 24.39 | 2.0665          | 95.2724  |
| 2.114         | 0.16  | 300  | 8.79  | 28.6  | 1.9792          | 141.9181 |
| 1.9813        | 0.22  | 400  | 17.5  | 33.84 | 1.7596          | 82.8906  |
| 1.6979        | 0.27  | 500  | 13.89 | 33.51 | 1.6820          | 115.0383 |
| 1.7157        | 0.32  | 600  | 18.54 | 36.44 | 1.5795          | 91.4003  |
| 1.3845        | 0.38  | 700  | 19.51 | 39.03 | 1.4989          | 88.7888  |
| 1.3803        | 0.43  | 800  | 25.18 | 40.96 | 1.4176          | 69.5182  |
| 1.1           | 0.49  | 900  | 28.98 | 44.78 | 1.3666          | 65.9613  |
| 1.1843        | 0.54  | 1000 | 27.59 | 45.91 | 1.3298          | 70.4638  |
| 1.1317        | 0.59  | 1100 | 20.22 | 41.14 | 1.5018          | 86.9878  |
| 1.071         | 0.65  | 1200 | 20.67 | 40.43 | 1.4600          | 85.6371  |
| 1.1542        | 0.7   | 1300 | 26.84 | 43.76 | 1.4114          | 69.5182  |
| 1.0729        | 0.76  | 1400 | 22.98 | 42.65 | 1.4056          | 78.0729  |
| 0.8747        | 0.81  | 1500 | 24.65 | 44.89 | 1.3537          | 73.4804  |
| 0.8626        | 0.86  | 1600 | 28.0  | 46.03 | 1.3391          | 68.7978  |
| 0.7643        | 0.92  | 1700 | 27.23 | 45.31 | 1.3250          | 70.3287  |
| 0.6971        | 0.97  | 1800 | 30.05 | 48.28 | 1.2795          | 65.5110  |
| 0.3055        | 1.02  | 1900 | 27.41 | 47.91 | 1.2994          | 71.1842  |
| 0.2801        | 1.08  | 2000 | 27.06 | 47.61 | 1.2998          | 73.4804  |
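
The Bleu, Chrf, and Wer columns can be computed with the `evaluate` library roughly as follows; the sentences in the sketch are invented placeholders, not items from the evaluation set:

```python
# Hedged sketch of the metric computation; predictions/references are made up.
import evaluate

predictions = ["the weather is fine today"]   # model output (English)
references = ["the weather is nice today"]    # reference translation

sacrebleu = evaluate.load("sacrebleu")
chrf = evaluate.load("chrf")
wer = evaluate.load("wer")

print("BLEU:", sacrebleu.compute(predictions=predictions,
                                 references=[[r] for r in references])["score"])
print("ChrF:", chrf.compute(predictions=predictions,
                            references=[[r] for r in references])["score"])
# evaluate's wer returns a fraction; scale by 100 to match the card's values
print("WER :", 100 * wer.compute(predictions=predictions, references=references))
```

Note that WER is reported as a percentage and can exceed 100 when hypotheses are much longer than the references, which accounts for the early-training values in the table above.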

### Framework versions

- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2