Whisper Medium Chinese

This model is a fine-tuned version of openai/whisper-medium on the mozilla-foundation/common_voice_11_0 zh-CN dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3226
  • CER: 10.9782

Model description

This checkpoint is openai/whisper-medium fine-tuned for Mandarin Chinese (zh-CN) automatic speech recognition on the Common Voice 11.0 corpus.

Intended uses & limitations

The model is intended for transcribing Mandarin Chinese (zh-CN) speech. Since it was fine-tuned only on Common Voice 11.0, accuracy may degrade on accents, noisy recordings, or domains not represented in that dataset; a minimal inference sketch follows.
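A minimal transcription sketch, assuming the `transformers` ASR pipeline (the audio filename is a placeholder):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an automatic-speech-recognition pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="lorenzoncina/whisper-medium-zh",
)

# Force Chinese transcription; chunk_length_s lets the pipeline handle
# audio longer than Whisper's 30-second window.
result = asr(
    "sample_zh.wav",  # hypothetical local audio file
    chunk_length_s=30,
    generate_kwargs={"language": "chinese", "task": "transcribe"},
)
print(result["text"])
```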

Training and evaluation data

Both training and evaluation use the zh-CN configuration of mozilla-foundation/common_voice_11_0; the exact split setup is not documented in this card. The dataset can be loaded as sketched below.
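A minimal loading sketch, assuming the `datasets` library (the split shown is an assumption; the dataset is gated on the Hub):

```python
from datasets import Audio, load_dataset

# Common Voice 11.0 is gated: accept the terms on the Hub and log in first
# (e.g. via `huggingface-cli login`).
common_voice = load_dataset(
    "mozilla-foundation/common_voice_11_0",
    "zh-CN",
    split="test",  # assumption; the card does not state which split was used
    use_auth_token=True,
)

# Whisper's feature extractor expects 16 kHz audio.
common_voice = common_voice.cast_column("audio", Audio(sampling_rate=16_000))
print(common_voice[0]["sentence"])
```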

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 10000
  • mixed_precision_training: Native AMP
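As a sketch, these settings map onto `transformers` `Seq2SeqTrainingArguments` roughly as follows (the output directory, evaluation cadence, and `predict_with_generate` are assumptions, not taken from this card):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-zh",  # placeholder
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    learning_rate=1e-5,
    warmup_steps=500,
    max_steps=10000,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                    # Native AMP mixed precision
    # Adam with betas=(0.9, 0.999) and epsilon=1e-8 is the default optimizer.
    evaluation_strategy="steps",  # assumption: eval every 1000 steps, per the table below
    eval_steps=1000,
    predict_with_generate=True,   # assumption: needed to compute CER during eval
)
```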

Training results

| Training Loss | Epoch | Step  | Validation Loss | CER     |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.3998        | 0.1   | 1000  | 0.2898          | 19.1261 |
| 0.2414        | 1.07  | 2000  | 0.2826          | 12.7761 |
| 0.1197        | 2.04  | 3000  | 0.2952          | 12.4320 |
| 0.2034        | 3.0   | 4000  | 0.2962          | 13.1970 |
| 0.0344        | 3.1   | 5000  | 0.3039          | 11.5122 |
| 0.0226        | 4.07  | 6000  | 0.3083          | 11.3549 |
| 0.0097        | 5.04  | 7000  | 0.3187          | 11.4440 |
| 0.0121        | 6.01  | 8000  | 0.3173          | 11.2258 |
| 0.0015        | 6.11  | 9000  | 0.3219          | 11.1410 |
| 0.0019        | 7.07  | 10000 | 0.3226          | 10.9782 |
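CER above is the character error rate expressed as a percentage. A minimal sketch of computing it with the `evaluate` library (the example strings are illustrative, not drawn from the eval set):

```python
import evaluate

cer_metric = evaluate.load("cer")

predictions = ["今天天气很好"]  # illustrative hypothesis, not from the eval set
references  = ["今天天气真好"]  # illustrative reference

# evaluate's CER returns a fraction; multiply by 100 to match the
# percentages reported in the table above.
cer = 100 * cer_metric.compute(predictions=predictions, references=references)
print(f"CER: {cer:.4f}")  # 1 substitution over 6 characters -> ~16.6667
```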

Framework versions

  • Transformers 4.28.0.dev0
  • Pytorch 2.0.0+cu117
  • Datasets 2.11.1.dev0
  • Tokenizers 0.13.2