Whisper Base Acholi

This model is a fine-tuned version of openai/whisper-small on the Sunbird_salt dataset. It achieves the following results on the evaluation set:

  • Loss: 6.3204
  • WER: 125.9121

Model description

A Whisper checkpoint fine-tuned for automatic speech recognition of Acholi on the Sunbird_salt dataset. The exported safetensors checkpoint has roughly 72.6M parameters stored as F32.

Intended uses & limitations

Intended for transcribing Acholi speech. Note that the final evaluation WER is above 100%, so this checkpoint should be treated as experimental rather than production-ready. A minimal loading sketch follows below.
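As a hedged sketch only (this card does not include an official usage snippet), a checkpoint like this can be loaded for inference with transformers. The repo id Tobius/acholi_model_whisper comes from this card's model tree; the audio file path is a placeholder you supply:

```python
# Minimal inference sketch. Assumes the checkpoint is published as
# "Tobius/acholi_model_whisper" (the repo id shown in this card's model tree)
# and that "sample.wav" is a recording you provide.
import torch
import librosa
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("Tobius/acholi_model_whisper")
model = WhisperForConditionalGeneration.from_pretrained("Tobius/acholi_model_whisper")

# Whisper expects 16 kHz mono audio; librosa resamples on load.
audio, sr = librosa.load("sample.wav", sr=16000)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    predicted_ids = model.generate(inputs.input_features)

transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
print(transcription)
```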

Training and evaluation data

Training and evaluation both use the Sunbird_salt dataset; further details (splits, hours of audio) are not provided.

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding Seq2SeqTrainingArguments configuration follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 32
  • eval_batch_size: 8
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 1000
  • training_steps: 20000
  • mixed_precision_training: Native AMP
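As a hedged illustration only (the exact training script is not included in this card), these values map onto transformers' Seq2SeqTrainingArguments roughly as follows; the output directory is a placeholder:

```python
# Sketch: the hyperparameters above expressed as Seq2SeqTrainingArguments.
# Only the values listed in this card are taken from the source; the
# output_dir is a placeholder.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-acholi",  # placeholder path
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    max_steps=20000,
    fp16=True,  # "Native AMP" mixed-precision training
)
```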

Training results

| Training Loss | Epoch    | Step  | Validation Loss | WER      |
|:-------------:|:--------:|:-----:|:---------------:|:--------:|
| 3.9219        | 6.6225   | 1000  | 2.8840          | 196.1646 |
| 2.2828        | 13.2450  | 2000  | 2.8298          | 129.9345 |
| 1.399         | 19.8675  | 3000  | 3.3370          | 135.6408 |
| 0.5689        | 26.4901  | 4000  | 3.9490          | 141.4406 |
| 0.1519        | 33.1126  | 5000  | 4.4924          | 117.0253 |
| 0.0408        | 39.7351  | 6000  | 4.8503          | 130.4958 |
| 0.0176        | 46.3576  | 7000  | 5.1254          | 123.5734 |
| 0.0101        | 52.9801  | 8000  | 5.2911          | 128.7184 |
| 0.0049        | 59.6026  | 9000  | 5.5606          | 145.7437 |
| 0.004         | 66.2252  | 10000 | 5.6918          | 131.7119 |
| 0.003         | 72.8477  | 11000 | 5.8036          | 130.5893 |
| 0.0021        | 79.4702  | 12000 | 5.9199          | 127.5023 |
| 0.0008        | 86.0927  | 13000 | 6.0288          | 134.5182 |
| 0.0021        | 92.7152  | 14000 | 6.0003          | 133.4892 |
| 0.0006        | 99.3377  | 15000 | 6.1112          | 123.0122 |
| 0.0003        | 105.9603 | 16000 | 6.1775          | 122.1703 |
| 0.0002        | 112.5828 | 17000 | 6.2225          | 125.6314 |
| 0.0002        | 119.2053 | 18000 | 6.2691          | 126.3798 |
| 0.0002        | 125.8278 | 19000 | 6.3077          | 125.7250 |
| 0.0002        | 132.4503 | 20000 | 6.3204          | 125.9121 |
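The validation loss climbs steadily after step 2000 while the training loss approaches zero, the usual signature of overfitting. WER here is a percentage, and values above 100% are possible when insertions and substitutions outnumber the reference words. As a hedged sketch (not the exact evaluation script used for this card), WER of this kind is typically computed with the evaluate library; the strings below are invented placeholders:

```python
# Minimal WER sketch using the evaluate library (pip install evaluate jiwer).
# The reference/hypothesis strings are invented placeholders.
import evaluate

wer_metric = evaluate.load("wer")
references = ["an example reference transcript"]
predictions = ["an example hypothesis transcript with extra words"]

# evaluate's "wer" returns a fraction; multiply by 100 for the percentage
# convention used in the table above.
wer = 100 * wer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.2f}%")
```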

Framework versions

  • Transformers 4.47.0.dev0
  • Pytorch 2.4.0+cu121
  • Datasets 3.0.2
  • Tokenizers 0.20.1