DistilFT-English-10m

This model is a fine-tuned version of distil-small.en on the LibriSpeech dataset. It achieves the following results on the evaluation set (a usage sketch follows the results):

  • Loss: 0.5012
  • WER: 3.5814
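
For reference, a minimal transcription sketch using the Transformers automatic-speech-recognition pipeline; the Hub id `Pageee/DistilFT-English-10ma` and the sample audio path are assumptions for illustration, not part of the training setup described here:

```python
# Minimal inference sketch (assumes the checkpoint is published as
# Pageee/DistilFT-English-10ma on the Hugging Face Hub).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Pageee/DistilFT-English-10ma",  # assumed Hub id
)

# Placeholder path to a local 16 kHz mono WAV file.
result = asr("sample.wav")
print(result["text"])
```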

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

  • learning_rate: 5e-07
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 300
  • training_steps: 1000
  • mixed_precision_training: Native AMP
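
These settings map onto a `Seq2SeqTrainingArguments` configuration roughly as sketched below; this is an assumed reconstruction (including the placeholder output directory), not the exact training script used for this run:

```python
# Rough reconstruction of the training configuration (assumption:
# the run used the 🤗 Transformers Seq2SeqTrainer with these arguments).
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./distilft-english-10m",   # placeholder output path
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,         # effective train batch size of 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=300,
    max_steps=1000,
    fp16=True,                             # Native AMP mixed precision
)
```

The Adam betas (0.9, 0.999) and epsilon (1e-08) listed above are the optimizer defaults, so they are not set explicitly in this sketch.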

Training results

| Training Loss | Epoch    | Step | Validation Loss | WER    |
|:-------------:|:--------:|:----:|:---------------:|:------:|
| 0.5641        | 33.3333  | 100  | 0.9641          | 3.4754 |
| 0.3271        | 66.6667  | 200  | 0.7822          | 3.4652 |
| 0.0871        | 100.0    | 300  | 0.5731          | 3.4530 |
| 0.0149        | 133.3333 | 400  | 0.5142          | 3.4774 |
| 0.0043        | 166.6667 | 500  | 0.5051          | 3.5345 |
| 0.0026        | 200.0    | 600  | 0.5030          | 3.5569 |
| 0.0020        | 233.3333 | 700  | 0.5020          | 3.5671 |
| 0.0016        | 266.6667 | 800  | 0.5015          | 3.5773 |
| 0.0014        | 300.0    | 900  | 0.5013          | 3.5936 |
| 0.0014        | 333.3333 | 1000 | 0.5012          | 3.5814 |
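
WER figures like those in the table can be reproduced with the 🤗 Evaluate library; a minimal sketch with placeholder strings (not actual LibriSpeech transcripts):

```python
# WER computation sketch using the `evaluate` library.
import evaluate

wer_metric = evaluate.load("wer")

# Placeholder hypothesis/reference pair for illustration only.
predictions = ["the cat sat on the mat"]
references = ["the cat sat on a mat"]

# compute() returns the word error rate as a fraction; multiply by 100
# to match the percentage-style figures reported above.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```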

Framework versions

  • Transformers 4.41.0.dev0
  • Pytorch 2.3.0+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1