---
title: WER
emoji: 🖩
colorFrom: blue
colorTo: red
sdk: gradio
sdk_version: 3.19.1
app_file: app.py
pinned: false
tags:
- evaluate
- wer
- neuralspace
- STT
---
## About this Demo
This demo was built as part of [NeuralSpace](https://neuralspace.ai/)'s [VoiceAI](https://voice.neuralspace.ai/) blog on [Word Error Rate 101: Your Guide to STT Vendor Evaluation](voice.neuralspace.ai).
## What is WER?
WER, or Word Error Rate, is a metric used primarily in the field of speech recognition to measure the performance of an automatic speech recognition (ASR) system. WER counts the minimum number of word-level operations (substitutions, deletions, and insertions) required to change the system's transcription (prediction) into the reference transcription (truth), and divides that count by the number of words in the reference.
```
WER = (Substitutions + Insertions + Deletions) / Total number of words in the reference (truth)
```
WER ranges from 0 upwards and is unbounded above: it can exceed 1 when the prediction contains more errors (for example, many insertions) than there are words in the reference. The closer the WER is to 0, the better. WER is also often expressed as a percentage by multiplying the value by 100; for example, a WER of 0.15 corresponds to 15%.
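To make the calculation concrete, here is a minimal sketch of a word-level WER computation using dynamic programming (Levenshtein distance over words). The function name, variable names, and example sentences are illustrative and not part of this demo's code:
```python
# Minimal sketch: word-level WER via dynamic programming over words.

def word_error_rate(truth: str, prediction: str) -> float:
    ref = truth.split()
    hyp = prediction.split()
    # dp[i][j] = minimum edits to turn the first i reference words
    # into the first j predicted words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            if ref[i - 1] == hyp[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]  # words match, no edit needed
            else:
                dp[i][j] = 1 + min(
                    dp[i - 1][j - 1],  # substitution
                    dp[i - 1][j],      # deletion
                    dp[i][j - 1],      # insertion
                )
    # total edits = substitutions + deletions + insertions
    return dp[len(ref)][len(hyp)] / len(ref)


if __name__ == "__main__":
    truth = "the quick brown fox jumps over the lazy dog"
    prediction = "the quick brown fox jumped over lazy dog"
    wer = word_error_rate(truth, prediction)
    # 1 substitution + 1 deletion over 9 reference words ≈ 0.22 (22%)
    print(f"WER: {wer:.2f} ({wer * 100:.0f}%)")
```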
WER is important because it provides:
* **Performance Measure**: It gives an objective measure of how well an ASR system is transcribing speech into text.
* **Comparison**: It allows for comparison between different ASR systems or versions of a system.
## References
* Python package to calculate WER: [jiwer](https://jitsi.github.io/jiwer/)
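As a quick illustration, jiwer can be used like this (a short sketch, assuming the package is installed via `pip install jiwer`; the example strings are illustrative):
```python
import jiwer

reference = "the quick brown fox jumps over the lazy dog"
hypothesis = "the quick brown fox jumped over lazy dog"

# Returns the word error rate as a float (e.g. ~0.22 here)
error_rate = jiwer.wer(reference, hypothesis)
print(f"WER: {error_rate:.2f}")
```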
## FAQ
Have any questions or comments? Reach out to NeuralSpace at [email protected]. Interested in trying out [NeuralSpace VoiceAI](https://voice.neuralspace.ai/) for your enterprise? Book time with our expert [here]().
Space authored by: [Aditya Dalmia](https://www.linkedin.com/in/dalmeow/)