---
library_name: transformers
datasets:
- djelia/bambara-audio
- djelia/bambara-asr
language:
- bm
metrics:
- wer
- cer
base_model:
- openai/whisper-large-v3
---
This model is a fine-tuned version of openai/whisper-large-v3, trained in two stages:
- It was first trained on the weakly annotated dataset djelia/bambara-audio (multi-combined config).
- It was then fine-tuned on the relatively high-quality dataset djelia/bambara-asr (multi-combined config). The model achieves a WER of 24% and a CER of 11.08% on the test split of djelia/bambara-asr.
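
Below is a minimal transcription sketch using the transformers pipeline. The model identifier is a hypothetical placeholder and should be replaced with this repository's actual Hub id; the audio path is illustrative.

```python
from transformers import pipeline

# Hypothetical Hub id for this checkpoint; replace with the actual repository name.
MODEL_ID = "djelia/whisper-bambara-asr"

# Whisper processes audio in 30-second windows, so long recordings are chunked.
asr = pipeline(
    "automatic-speech-recognition",
    model=MODEL_ID,
    chunk_length_s=30,
)

# Transcribe a local Bambara recording (path is illustrative).
result = asr("bambara_sample.wav")
print(result["text"])
```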
A demo of this model is available here: DEMO
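
The sketch below shows one way the reported WER and CER could be recomputed on the test split with the datasets and evaluate libraries. The model id is a placeholder, and the column names ("audio", "text") are assumptions about the dataset schema; adjust them to match the actual repository.

```python
import evaluate
from datasets import load_dataset
from transformers import pipeline

# Test split of the higher-quality dataset used in the second training stage.
test = load_dataset("djelia/bambara-asr", "multi-combined", split="test")

# Hypothetical Hub id; replace with this repository's actual name.
asr = pipeline(
    "automatic-speech-recognition",
    model="djelia/whisper-bambara-asr",
    chunk_length_s=30,
)

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions, references = [], []
for sample in test:
    # The pipeline accepts the datasets-style audio dict (array + sampling_rate).
    predictions.append(asr(sample["audio"])["text"])
    references.append(sample["text"])  # assumed transcription column name

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```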