---
language:
- hy
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Large-v2 Armenian
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: mozilla-foundation/common_voice_11_0 hy-AM
      type: mozilla-foundation/common_voice_11_0
      config: hy-AM
      split: test
      args: hy-AM
    metrics:
    - name: Wer
      type: wer
      value: 40.23026315789473
---

# Whisper Large-v2 Armenian

This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the mozilla-foundation/common_voice_11_0 hy-AM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4429
- Wer: 40.2303

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1000

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0113        | 8.02  | 200  | 0.3501          | 43.7171 |
| 0.0003        | 17.01 | 400  | 0.3989          | 40.7895 |
| 0.0001        | 26.0  | 600  | 0.4282          | 40.4605 |
| 0.0001        | 34.02 | 800  | 0.4392          | 40.2632 |
| 0.0001        | 43.01 | 1000 | 0.4429          | 40.2303 |

### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
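
### Training arguments sketch

The hyperparameters listed above map onto `Seq2SeqTrainingArguments` roughly as follows. This is a sketch, not the exact training script used for this run; `output_dir` is a placeholder, and multi-GPU distribution comes from the launcher (e.g. `torchrun`) rather than these arguments:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the hyperparameters reported in this card; output_dir is a
# placeholder, and any argument not listed in the card keeps its default.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v2-hy",  # hypothetical path
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=1000,
)
```

## Usage

A minimal transcription sketch using the `transformers` ASR pipeline. The repository ID below is a placeholder; substitute the actual Hub ID under which this checkpoint is published:

```python
from transformers import pipeline

# "<user>/whisper-large-v2-hy" is a placeholder repo ID -- replace it with
# the actual ID of this fine-tuned checkpoint on the Hub.
asr = pipeline(
    "automatic-speech-recognition",
    model="<user>/whisper-large-v2-hy",
    chunk_length_s=30,  # chunk long audio into 30 s windows
)

# Pin decoding to Armenian transcription instead of letting Whisper
# auto-detect the language.
asr.model.config.forced_decoder_ids = asr.tokenizer.get_decoder_prompt_ids(
    language="hy", task="transcribe"
)

# The pipeline accepts a local audio file path and resamples it as needed.
print(asr("sample.wav")["text"])
```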