---
language:
- zh
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- formospeech/hat_asr_aligned
model-index:
- name: Whisper Tiny Hakka Simulated Webcam
  results: []
---

# Whisper Tiny Hakka Simulated Webcam

This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the HAT ASR Aligned dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1402
- CER: 8.5108

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged reconstruction of these settings is sketched below under "Training configuration sketch"):
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 488
- training_steps: 4880
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | CER     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.2096        | 0.9980 | 488  | 0.3010          | 27.0939 |
| 0.1035        | 1.9959 | 976  | 0.2198          | 18.4063 |
| 0.0491        | 2.9939 | 1464 | 0.1966          | 12.8661 |
| 0.0261        | 3.9918 | 1952 | 0.1766          | 14.3364 |
| 0.0117        | 4.9898 | 2440 | 0.1576          | 10.6133 |
| 0.0045        | 5.9877 | 2928 | 0.1425          | 11.8732 |
| 0.0014        | 6.9857 | 3416 | 0.1471          | 9.7591  |
| 0.0006        | 7.9836 | 3904 | 0.1413          | 8.8356  |
| 0.0005        | 8.9816 | 4392 | 0.1413          | 8.6079  |
| 0.0003        | 9.9796 | 4880 | 0.1402          | 8.5108  |

### Framework versions

- Transformers 4.42.3
- PyTorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
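
### Training configuration sketch

The hyperparameters listed above map onto `transformers`' `Seq2SeqTrainingArguments`. The following is a minimal, hedged reconstruction rather than the exact training script: the output directory, evaluation cadence, and `predict_with_generate` flag are assumptions; only the values listed under "Training hyperparameters" come from this card.

```python
from transformers import Seq2SeqTrainingArguments

# Reconstruction of the listed hyperparameters; anything marked "assumed"
# is illustrative and not taken from this card.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-hakka-simulated-webcam",  # assumed path
    learning_rate=1e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=488,
    max_steps=4880,              # training_steps
    fp16=True,                   # "Native AMP" mixed-precision training
    eval_strategy="steps",       # assumed: evaluate every 488 steps (once per epoch)
    eval_steps=488,
    predict_with_generate=True,  # assumed; needed to compute CER at eval time
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer
# defaults, so no explicit optimizer arguments are required.
```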
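
## Example inference

A minimal inference sketch using the `transformers` automatic-speech-recognition pipeline. The repo id below is hypothetical (substitute the id this card is actually published under), and the audio file name is a placeholder; Whisper expects 16 kHz audio, which the pipeline resamples automatically.

```python
import torch
from transformers import pipeline

device = 0 if torch.cuda.is_available() else -1  # GPU index, or CPU

asr = pipeline(
    "automatic-speech-recognition",
    model="formospeech/whisper-tiny-hakka-simulated-webcam",  # hypothetical repo id
    device=device,
)

# Transcribe a local audio file (decoding requires ffmpeg to be installed).
result = asr("hakka_sample.wav")
print(result["text"])
```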