Whisper large fa - marziye-A

This model is a fine-tuned version of openai/whisper-large on the Persian (fa) subset of the Common Voice 15.0 dataset. It achieves the following results on the evaluation set (a minimal usage sketch follows the results):

  • Loss: 0.1571
  • WER: 19.7418
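
A quick way to try the checkpoint is the transformers automatic-speech-recognition pipeline. The repo ID below is the one this card is published under; the audio file path is a placeholder:

```python
import torch
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub.
asr = pipeline(
    "automatic-speech-recognition",
    model="marziye-A/whisper-large-v3-full-youtube_80hour_7",
    device=0 if torch.cuda.is_available() else -1,
)

# Transcribe a local Persian audio file (placeholder path).
print(asr("sample_fa.wav")["text"])
```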

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after the list):

  • learning_rate: 1e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 2
  • mixed_precision_training: Native AMP
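
The exact training script is not published with this card, but the values above map directly onto transformers' Seq2SeqTrainingArguments. A minimal sketch, with the output directory as a placeholder:

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-fa",   # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,     # 4 x 4 = total train batch size 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=2,
    fp16=True,                         # Native AMP mixed precision
)
```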

Training results

Training Loss   Epoch    Step    Validation Loss   WER
0.2189          0.1567   2000    0.2248            29.0575
0.1972          0.3134   4000    0.2035            25.1376
0.1906          0.4701   6000    0.1923            25.7159
0.1595          0.6268   8000    0.1806            22.4166
0.1747          0.7835   10000   0.1753            23.0041
0.1744          0.9402   12000   0.1709            22.4932
0.1357          1.0969   14000   0.1687            20.7782
0.1345          1.2536   16000   0.1646            21.3221
0.1362          1.4103   18000   0.1619            21.1082
0.1210          1.5670   20000   0.1601            20.3781
0.1354          1.7237   22000   0.1587            19.8157
0.1220          1.8804   24000   0.1571            19.7418
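
The WER column is in percent. The card does not state how it was computed, but a typical computation with Hugging Face's evaluate library looks like this (the strings below are illustrative; the real evaluation uses Common Voice 15.0 transcripts):

```python
import evaluate

# Load the word error rate metric.
wer_metric = evaluate.load("wer")

# Illustrative strings standing in for model outputs and references.
predictions = ["the transcribed hypothesis"]
references = ["the reference transcript"]

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```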

Framework versions

  • Transformers 4.45.2
  • PyTorch 2.4.1+cu121
  • Datasets 3.0.1
  • Tokenizers 0.20.1