
This model is a fine-tuned version of openai/whisper-large-v3, trained in two stages:

  • It was first trained on a weakly annotated dataset, djelia/bambara-audio (multi-combined config).
  • It was then further fine-tuned on a relatively high-quality dataset, djelia/bambara-asr (multi-combined config). The model achieves a WER of 24% and a CER of 11.08% on the test split of djelia/bambara-asr.

A live demo of this model is available here: DEMO
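Below is a minimal sketch of how this checkpoint could be loaded for transcription with the 🤗 Transformers ASR pipeline. The audio file path is a placeholder, and the chunking and device settings are assumptions to adjust for your own setup:

```python
# Minimal sketch: load the fine-tuned checkpoint with the Transformers ASR pipeline.
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="djelia/bm-whisper-large-v3-tuned",
    torch_dtype=torch.float16,  # checkpoint weights are stored in FP16
    device="cuda:0" if torch.cuda.is_available() else "cpu",
)

# Transcribe a local audio file (placeholder path); chunking helps with long audio.
result = asr("sample_bambara.wav", chunk_length_s=30)
print(result["text"])
```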
