Update constants.py
constants.py +1 -1
@@ -110,7 +110,7 @@ Columns `Model`, `RTF`, and `Average WER` were sourced from [hf-audio/open_asr_l
 Models are sorted by the consistency of their results across test sets (in increasing order of the absolute delta between average WER and CommonVoice WER).
 
 ### Results
-
+The CommonVoice test set provides a Word Error Rate (WER) within a 20-point margin of the average WER. While not perfect, this indicates that CommonVoice can be a useful tool for quickly and programmatically identifying a suitable ASR model for a wide range of languages. It is not sufficient as the sole criterion for choosing the most appropriate architecture, however; further considerations may be needed depending on the specific requirements of your ASR application.
 
 Moreover, selecting the model with the lowest WER on CommonVoice aligns with choosing the model with the lowest average WER, so this approach ranks the best-performing models with precision. However, as the average WER increases, the spread of results becomes more pronounced, which makes it harder to reliably identify the worst-performing models; the size of the CommonVoice test split for a given language is a crucial factor here. This highlights the need for a nuanced approach to ASR model selection that weighs dataset characteristics alongside raw scores to ensure a comprehensive evaluation.
 
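As a concrete illustration of the heuristic in the added paragraph, here is a minimal Python sketch that ranks candidate models by their WER on the CommonVoice test split and picks the lowest. The model names and transcript pairs are hypothetical placeholders, and the sketch assumes the `jiwer` library for WER computation; it is not part of this Space's actual code.

```python
import jiwer

# Hypothetical (reference, hypothesis) transcript pairs per candidate model;
# in practice these would come from running each model on the CommonVoice
# test split for the target language.
transcripts = {
    "model-a": [("hello world", "hello world"), ("good morning", "good mourning")],
    "model-b": [("hello world", "hello word"), ("good morning", "god morning")],
}

def commonvoice_wer(pairs):
    """Corpus-level WER over (reference, hypothesis) pairs."""
    refs = [ref for ref, _ in pairs]
    hyps = [hyp for _, hyp in pairs]
    return jiwer.wer(refs, hyps)

# Rank models by ascending CommonVoice WER; per the analysis above, the
# top-ranked model is a reasonable proxy for the model with the lowest
# average WER across test sets.
ranking = sorted(transcripts, key=lambda name: commonvoice_wer(transcripts[name]))
print("Best model by CommonVoice WER:", ranking[0])
```

Ranking on a single, widely available test split like this is cheap, but per the caveats above it should be validated against additional test sets before committing to an architecture.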