
whisper-tinyfinacialYT

This model is a fine-tuned version of openai/whisper-tiny on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.8105
  • Wer: 64.6067
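The Wer figure is word error rate, reported in percent. As a minimal sketch of how such a score is computed (word-level edit distance divided by reference length; the example sentences below are illustrative, not taken from the evaluation set):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate in percent:
    (substitutions + deletions + insertions) / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Word-level Levenshtein distance via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

print(wer("revenue grew ten percent", "revenue grew ten percent"))  # 0.0
print(wer("revenue grew ten percent", "revenue fell ten percent"))  # 25.0
```

A Wer of 64.6067 therefore means roughly two word errors for every three reference words.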

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1.35e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • training_steps: 1200
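Under these settings the learning rate ramps linearly from 0 to 1.35e-05 over the first 100 steps, then decays linearly back to 0 at step 1200. A minimal sketch of that schedule (mirroring the behavior of a linear scheduler with warmup, not the exact library code):

```python
PEAK_LR = 1.35e-05       # learning_rate
WARMUP_STEPS = 100       # lr_scheduler_warmup_steps
TRAINING_STEPS = 1200    # training_steps

def linear_warmup_decay(step: int) -> float:
    """Learning rate at a given optimizer step: linear warmup, then linear decay to zero."""
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS
    # Linear decay from the peak (end of warmup) down to 0 at the final step.
    remaining = (TRAINING_STEPS - step) / (TRAINING_STEPS - WARMUP_STEPS)
    return PEAK_LR * max(0.0, remaining)

print(linear_warmup_decay(50))    # halfway through warmup: 6.75e-06
print(linear_warmup_decay(100))   # peak: 1.35e-05
print(linear_warmup_decay(1200))  # end of training: 0.0
```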

Training results

  Training Loss   Epoch     Step   Validation Loss   Wer
  No log          1.0870     100   0.7181            65.1685
  No log          2.1739     200   0.6369            60.6742
  No log          3.2609     300   0.6620            60.6742
  No log          4.3478     400   0.6909            61.7978
  0.3822          5.4348     500   0.7271            73.0337
  0.3822          6.5217     600   0.7496            71.3483
  0.3822          7.6087     700   0.7742            64.0449
  0.3822          8.6957     800   0.7860            64.0449
  0.3822          9.7826     900   0.7975            64.0449
  0.0087         10.8696    1000   0.8065            64.6067
  0.0087         11.9565    1100   0.8098            64.6067
  0.0087         13.0435    1200   0.8105            64.6067

Framework versions

  • Transformers 4.41.2
  • PyTorch 2.3.0+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1
Model size: 37.8M parameters (F32, safetensors)

Model tree for controngo/whisper-tinyfinacialYT

  • Base model: openai/whisper-tiny