This model is a finetuned version of openai/whisper-large-v3, trained in two stages:
- It was first trained on a weakly annotated dataset: djelia/bambara-audio (multi-combined config).
- It was then further trained on a relatively high-quality dataset: djelia/bambara-asr (multi-combined config). The model achieved a WER of 24% and a CER of 11.08% on the test split of djelia/bambara-asr.
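The WER and CER figures above are standard edit-distance metrics: the Levenshtein distance between the reference and the hypothesis, normalized by reference length, computed over words (WER) or characters (CER). A minimal sketch, using illustrative Bambara strings rather than the card's actual evaluation script:

```python
def edit_distance(ref, hyp):
    # Levenshtein distance via a single-row dynamic-programming table.
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(
                dp[j] + 1,                       # deletion
                dp[j - 1] + 1,                   # insertion
                prev + (ref[i - 1] != hyp[j - 1])  # substitution / match
            )
            prev = cur
    return dp[n]

def wer(ref: str, hyp: str) -> float:
    # Word error rate: edit distance over word tokens.
    ref_words = ref.split()
    return edit_distance(ref_words, hyp.split()) / len(ref_words)

def cer(ref: str, hyp: str) -> float:
    # Character error rate: edit distance over characters.
    return edit_distance(ref, hyp) / len(ref)

# Dropping one of three reference words gives a WER of 1/3.
print(wer("i ni ce", "i ni"))
# One substituted character out of four gives a CER of 0.25.
print(cer("kura", "kuru"))
```

In practice, corpus-level WER/CER are computed by summing edit distances and reference lengths over the whole test split rather than averaging per-utterance rates.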
A demo of this model is available here: DEMO
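For transcription, the model can be loaded with the Hugging Face `transformers` ASR pipeline; the model id below matches this card, but this snippet is an illustrative sketch rather than an official usage example:

```python
from transformers import pipeline

def transcribe(audio_path: str) -> str:
    """Transcribe a Bambara audio file (sketch; downloads ~3 GB of weights)."""
    asr = pipeline(
        "automatic-speech-recognition",
        model="djelia/bm-whisper-large-v3-tuned",
    )
    # The pipeline returns a dict with the decoded text under "text".
    return asr(audio_path)["text"]
```

Long recordings can be handled by passing `chunk_length_s` to `pipeline`, which enables chunked inference.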
Model tree for djelia/bm-whisper-large-v3-tuned
- Base model: openai/whisper-large-v3