---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: torgo_tiny_finetune_F04_frozen_encoder
results: []
---
# torgo_tiny_finetune_F04_frozen_encoder
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the TORGO dataset (speaker F04).
It achieves the following results on the evaluation set:
- Loss: 0.2948
- Wer: 46.1800
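For quick use, a minimal transcription sketch with the 🤗 Transformers `pipeline` API; the repo id and audio path below are placeholders, not taken from this card:

```python
# A sketch of transcribing audio with this checkpoint via the pipeline API.
# "<user>/torgo_tiny_finetune_F04_frozen_encoder" and "sample.wav" are
# placeholders: substitute the full Hub repo id and a real 16 kHz audio file.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="<user>/torgo_tiny_finetune_F04_frozen_encoder",
)
print(asr("sample.wav")["text"])
```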
## Model description
This is [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) fine-tuned for automatic speech recognition of dysarthric speech from the TORGO corpus, targeting speaker F04. As the model name indicates, the encoder was kept frozen during fine-tuning, so only the decoder weights were updated.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a reproduction sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
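As a sketch of how these settings map onto `Seq2SeqTrainingArguments`, with the encoder frozen as the model name implies; the dataset and collator objects are assumptions, not part of this card:

```python
from transformers import (
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    WhisperForConditionalGeneration,
)

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

# Freeze the encoder so only decoder parameters receive gradient updates.
for param in model.model.encoder.parameters():
    param.requires_grad = False

training_args = Seq2SeqTrainingArguments(
    output_dir="torgo_tiny_finetune_F04_frozen_encoder",
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=1,
    seed=42,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the optimizer defaults.
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=20,
    evaluation_strategy="steps",  # assumption: matches the 500-step cadence below
    eval_steps=500,
)

# train_dataset, eval_dataset, and data_collator are assumed to be prepared
# TORGO features and a speech seq2seq collator; they are not part of this card.
trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    data_collator=data_collator,
)
trainer.train()
```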
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.7886 | 0.85 | 500 | 0.2527 | 38.2003 |
| 0.0987 | 1.69 | 1000 | 0.2771 | 51.7827 |
| 0.0695 | 2.54 | 1500 | 0.2463 | 38.6248 |
| 0.0479 | 3.39 | 2000 | 0.2699 | 26.8251 |
| 0.0314 | 4.24 | 2500 | 0.2857 | 23.2598 |
| 0.0239 | 5.08 | 3000 | 0.2698 | 23.6842 |
| 0.0173 | 5.93 | 3500 | 0.2771 | 25.2122 |
| 0.0122 | 6.78 | 4000 | 0.2733 | 26.7402 |
| 0.0099 | 7.63 | 4500 | 0.2812 | 26.5705 |
| 0.0091 | 8.47 | 5000 | 0.2773 | 23.4295 |
| 0.0077 | 9.32 | 5500 | 0.2839 | 30.5603 |
| 0.0057 | 10.17 | 6000 | 0.2722 | 23.7691 |
| 0.0043 | 11.02 | 6500 | 0.2959 | 34.3803 |
| 0.0028 | 11.86 | 7000 | 0.2783 | 33.0221 |
| 0.0026 | 12.71 | 7500 | 0.3000 | 32.7674 |
| 0.0025 | 13.56 | 8000 | 0.2865 | 32.6825 |
| 0.0022 | 14.41 | 8500 | 0.2946 | 38.8795 |
| 0.0014 | 15.25 | 9000 | 0.2858 | 38.3701 |
| 0.0012 | 16.1 | 9500 | 0.2953 | 63.8370 |
| 0.0006 | 16.95 | 10000 | 0.2928 | 42.9542 |
| 0.0004 | 17.8 | 10500 | 0.2910 | 43.7182 |
| 0.0004 | 18.64 | 11000 | 0.2947 | 44.8217 |
| 0.0002 | 19.49 | 11500 | 0.2948 | 46.1800 |
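The Wer column is in percent. As a sketch, comparable values can be computed from predicted and reference transcripts with the 🤗 `evaluate` library (the transcripts below are placeholders):

```python
import evaluate

wer_metric = evaluate.load("wer")

# Placeholder transcripts; in practice, predictions come from model.generate()
# on the evaluation set and references from the ground-truth labels.
predictions = ["the quick brown fox"]
references = ["the quick brown fox jumps over the lazy dog"]

# evaluate returns a fraction; multiply by 100 to match the table's scale.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```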
### Framework versions
- Transformers 4.32.0
- PyTorch 2.1.0+cu121
- Datasets 2.14.7
- Tokenizers 0.13.3