This model belongs to the Bemba collection: experimental automatic speech recognition models developed for the Bemba language.
This model is a fine-tuned version of openai/whisper-small on the BEMBA dataset. Per-epoch results on the evaluation set are reported in the training-results table below.

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
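The training-results table below reports word error rate (WER) and character error rate (CER). Both are the Levenshtein edit distance between the reference and the hypothesis, normalized by the reference length, computed over words for WER and over characters for CER. A minimal pure-Python sketch (real evaluations typically use a library such as `jiwer`, and may normalize casing and punctuation first):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (strings or lists)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution
        prev = cur
    return prev[-1]

def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / reference word count."""
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    """Character error rate: char-level edit distance / reference length."""
    return edit_distance(reference, hypothesis) / len(reference)
```

A WER of 0.3124, as in the best epoch below, means roughly 31 word-level errors per 100 reference words.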
The following hyperparameters were used during training: more information needed.

Training results

Training Loss | Epoch | Step | Validation Loss | WER | CER |
---|---|---|---|---|---|
0.9062 | 1.0 | 5914 | 0.4964 | 0.4258 | 0.1059 |
0.5025 | 2.0 | 11828 | 0.4104 | 0.3567 | 0.0887 |
0.4079 | 3.0 | 17742 | 0.3767 | 0.3252 | 0.0827 |
0.3239 | 4.0 | 23656 | 0.3676 | 0.3133 | 0.0804 |
0.2438 | 5.0 | 29570 | 0.3798 | 0.3219 | 0.0846 |
0.1655 | 6.0 | 35484 | 0.4092 | 0.3124 | 0.0787 |
0.0986 | 7.0 | 41398 | 0.4579 | 0.3251 | 0.0845 |
0.0554 | 8.0 | 47312 | 0.4980 | 0.3231 | 0.0844 |
0.0342 | 9.0 | 53226 | 0.5362 | 0.3174 | 0.0820 |
0.0255 | 10.0 | 59140 | 0.5647 | 0.3150 | 0.0810 |
0.021 | 11.0 | 65054 | 0.5882 | 0.3153 | 0.0797 |
0.0184 | 12.0 | 70968 | 0.6067 | 0.3162 | 0.0805 |
0.0161 | 13.0 | 76882 | 0.6337 | 0.3192 | 0.0842 |
0.0146 | 14.0 | 82796 | 0.6493 | 0.3138 | 0.0819 |
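Validation loss rises steadily after epoch 4 even as recognition quality holds, a classic overfitting signature, while validation WER bottoms out at epoch 6. The card does not say which checkpoint was kept, but if the best model were selected by validation WER (e.g. via `load_best_model_at_end` in the Trainer), epoch 6 would be chosen. Picking it from the table's own figures:

```python
# Validation WER by epoch, copied from the training-results table above.
wer_by_epoch = {
    1: 0.4258, 2: 0.3567, 3: 0.3252, 4: 0.3133, 5: 0.3219,
    6: 0.3124, 7: 0.3251, 8: 0.3231, 9: 0.3174, 10: 0.3150,
    11: 0.3153, 12: 0.3162, 13: 0.3192, 14: 0.3138,
}

# Epoch with the lowest validation WER.
best_epoch = min(wer_by_epoch, key=wer_by_epoch.get)
print(best_epoch, wer_by_epoch[best_epoch])  # → 6 0.3124
```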
Base model: openai/whisper-small