Whisper Fine-tuning - Validation Loss is Increasing but WER is Decreasing

#107
by anahar - opened

Hello,

I've been fine-tuning the Whisper model for my specific use case, but the results aren't meeting expectations. While reviewing the logs from the Whisper fine-tuning event, I noticed an intriguing pattern: the Word Error Rate (WER) decreases while the validation loss increases. This goes against the expected behavior, where both the validation loss and the WER typically decrease during training (correct me if I'm wrong).

Here are the links to the training logs showcasing this behavior:

Whisper fine-tuned model by @razhan : https://huggingface.co/razhan/whisper-small-ckb
Whisper fine-tuned model by @BlueRaccoon : https://huggingface.co/BlueRaccoon/whisper-small-en

I'm curious to understand why there's a discrepancy between the rising validation loss and the decreasing WER during fine-tuning. Any insights or guidance would be greatly appreciated. Thank you!
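For reference, my evaluation follows the setup from the Whisper fine-tuning event. Below is a minimal sketch of the kind of `compute_metrics` used there (assuming a `Seq2SeqTrainer` with `predict_with_generate=True`, the `evaluate` library, and the illustrative checkpoint `openai/whisper-small`; your tokenizer/language will differ). My understanding is that `eval_loss` is the teacher-forced token-level cross-entropy, while WER is computed on the decoded generations, so the two are not measuring the same thing (again, correct me if I'm wrong):

```python
import evaluate
from transformers import WhisperTokenizer

# Illustrative setup: adapt the checkpoint and language to your own run.
tokenizer = WhisperTokenizer.from_pretrained(
    "openai/whisper-small", language="English", task="transcribe"
)
wer_metric = evaluate.load("wer")


def compute_metrics(pred):
    # With predict_with_generate=True, pred.predictions are freely generated token IDs.
    pred_ids = pred.predictions
    label_ids = pred.label_ids

    # Replace the -100 masking used for the loss so the tokenizer can decode the labels.
    label_ids[label_ids == -100] = tokenizer.pad_token_id

    pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
    label_str = tokenizer.batch_decode(label_ids, skip_special_tokens=True)

    # WER compares decoded transcripts, whereas eval_loss is token-level cross-entropy,
    # so the two metrics can move in different directions.
    wer = 100 * wer_metric.compute(predictions=pred_str, references=label_str)
    return {"wer": wer}
```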

I ran into a similar strange issue: the training loss decreases normally, while the WER climbs to around 90%. I'm very confused.
