|
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-large-v2-atco2-asr
  results: []
---
|
|
|
|
|
|
# whisper-large-v2-atco2-asr |
|
|
|
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the ATCO2-ASR air traffic control communications dataset.
|
It achieves the following results on the evaluation set: |
|
- Loss: 0.7915 |
|
- WER: 18.7722
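
The WER values in this card are percentages. For reference, a minimal sketch of computing WER with the `evaluate` library; the transcripts below are made-up examples, not drawn from the actual evaluation set:

```python
import evaluate

# Load the word-error-rate metric (requires the `evaluate` and `jiwer` packages).
wer_metric = evaluate.load("wer")

# Made-up example transcripts -- not from the actual evaluation set.
predictions = ["cleared for takeoff runway two seven"]
references = ["cleared for takeoff runway two seven right"]

# evaluate returns a fraction; this card reports WER as a percentage.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}")
```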
|
|
|
## Model description |
|
|
|
This model is [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2), an encoder-decoder (sequence-to-sequence) Transformer for speech recognition, fine-tuned on air traffic control (ATC) radio communications. It reaches a word error rate (WER) of 18.77% on the held-out evaluation set.
|
|
|
## Intended uses & limitations |
|
|
|
The model is intended for transcribing English air traffic control voice communications (pilot/controller radio exchanges). As with other Whisper fine-tunes, it may hallucinate text on silence or heavy noise, and accuracy is likely to degrade on accents, callsigns, and phraseology not represented in the fine-tuning data. The training results below (training loss near zero while validation loss keeps rising) suggest the model has overfit a small fine-tuning set, so transcripts should be reviewed by a human before any operational use.
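
A minimal usage sketch with the `transformers` pipeline; the repository id and audio path are placeholders, not the actual hosted checkpoint:

```python
from transformers import pipeline

# Placeholder repo id and audio file -- point these at the actual hosted checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="your-org/whisper-large-v2-atco2-asr",
    chunk_length_s=30,  # Whisper operates on 30-second audio windows
)

print(asr("atc_recording.wav")["text"])
```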
|
|
|
## Training and evaluation data |
|
|
|
The fine-tuning and evaluation data are ATCO2-style air traffic control speech recordings paired with verbatim transcripts. Whisper operates on 16 kHz mono audio, so recordings must be resampled accordingly; a preprocessing sketch follows.
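
A minimal preprocessing sketch with `datasets` and `WhisperProcessor`, assuming a hypothetical Hub dataset with `audio` and `text` columns; the dataset id is a placeholder:

```python
from datasets import Audio, load_dataset
from transformers import WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")

# Hypothetical dataset id and column names -- substitute the actual ATC corpus.
ds = load_dataset("your-org/atco2-asr", split="train")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))  # resample to 16 kHz

def prepare(batch):
    audio = batch["audio"]
    # Log-Mel input features for the encoder.
    batch["input_features"] = processor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    # Transcript token ids as decoder labels.
    batch["labels"] = processor.tokenizer(batch["text"]).input_ids
    return batch

ds = ds.map(prepare, remove_columns=ds.column_names)
```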
|
|
|
## Training procedure |
|
|
|
### Training hyperparameters |
|
|
|
The following hyperparameters were used during training: |
|
- learning_rate: 1e-05 |
|
- train_batch_size: 16 |
|
- eval_batch_size: 8 |
|
- seed: 42 |
|
- distributed_type: multi-GPU |
|
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 |
|
- lr_scheduler_type: linear |
|
- lr_scheduler_warmup_steps: 100 |
|
- training_steps: 2800 |
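
A minimal sketch of how the hyperparameters above map onto `Seq2SeqTrainingArguments` in `transformers`; the output directory and eval cadence are assumptions, though the results table below does log evaluation every 100 steps:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-large-v2-atco2-asr",  # assumed output directory
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=2800,
    evaluation_strategy="steps",  # assumed; the results table logs eval every 100 steps
    eval_steps=100,
    predict_with_generate=True,   # generate text at eval time so WER can be computed
)
```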
|
|
|
### Training results |
|
|
|
| Training Loss | Epoch | Step | Validation Loss | WER     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1333        | 3.57  | 100  | 0.5298          | 21.8861 |
| 0.0338        | 7.14  | 200  | 0.5430          | 18.8167 |
| 0.0132        | 10.71 | 300  | 0.5830          | 17.9270 |
| 0.0067        | 14.29 | 400  | 0.6011          | 17.6157 |
| 0.0009        | 17.86 | 500  | 0.6582          | 18.8167 |
| 0.0004        | 21.43 | 600  | 0.6743          | 18.7722 |
| 0.0003        | 25.0  | 700  | 0.6919          | 18.4609 |
| 0.0004        | 28.57 | 800  | 0.6943          | 26.6459 |
| 0.0004        | 32.14 | 900  | 0.7090          | 18.5053 |
| 0.0002        | 35.71 | 1000 | 0.7212          | 18.8167 |
| 0.0001        | 39.29 | 1100 | 0.7305          | 18.8612 |
| 0.0001        | 42.86 | 1200 | 0.7383          | 18.6388 |
| 0.0001        | 46.43 | 1300 | 0.7451          | 18.5498 |
| 0.0001        | 50.0  | 1400 | 0.7515          | 18.5498 |
| 0.0001        | 53.57 | 1500 | 0.7573          | 18.5498 |
| 0.0001        | 57.14 | 1600 | 0.7622          | 18.5943 |
| 0.0001        | 60.71 | 1700 | 0.7666          | 18.5943 |
| 0.0001        | 64.29 | 1800 | 0.7705          | 18.5498 |
| 0.0001        | 67.86 | 1900 | 0.7744          | 18.6833 |
| 0.0001        | 71.43 | 2000 | 0.7778          | 18.6833 |
| 0.0001        | 75.0  | 2100 | 0.7808          | 18.7278 |
| 0.0001        | 78.57 | 2200 | 0.7837          | 18.6833 |
| 0.0001        | 82.14 | 2300 | 0.7856          | 18.6388 |
| 0.0001        | 85.71 | 2400 | 0.7881          | 18.6833 |
| 0.0001        | 89.29 | 2500 | 0.7896          | 18.6388 |
| 0.0001        | 92.86 | 2600 | 0.7905          | 18.7278 |
| 0.0001        | 96.43 | 2700 | 0.7915          | 18.8167 |
| 0.0001        | 100.0 | 2800 | 0.7915          | 18.7722 |
|
|
|
|
|
### Framework versions |
|
|
|
- Transformers 4.30.0.dev0 |
|
- Pytorch 2.0.1+cu117 |
|
- Datasets 2.12.0 |
|
- Tokenizers 0.13.3 |
|
|