Commit 6e9a65d · 1 Parent(s): 3b9ec1a
Update README.md

README.md CHANGED
@@ -78,6 +78,17 @@ model-index:
     - name: Test WER
       type: wer
       value: 8.0
+  - task:
+      type: Automatic Speech Recognition
+      name: automatic-speech-recognition
+    dataset:
+      name: Mozilla Common Voice 8.0
+      type: mozilla-foundation/common_voice_8_0
+      args: en
+    metrics:
+    - name: Test WER
+      type: wer
+      value: 9.48
   - task:
       type: Automatic Speech Recognition
       name: automatic-speech-recognition
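The block added above records a Test WER of 9.48 on the English test split of Mozilla Common Voice 8.0. Purely as an illustration (nothing below comes from this commit), such a figure could be measured by transcribing the test audio with the checkpoint this README describes and scoring against the reference transcripts; `stt_en_model`, the file names, and the transcripts are placeholders.

```python
# Hypothetical sketch of measuring a Common Voice test WER; not part of this commit.
import jiwer
import nemo.collections.asr as nemo_asr

# Placeholder checkpoint name -- substitute the model this README describes.
asr_model = nemo_asr.models.ASRModel.from_pretrained(model_name="stt_en_model")

# Hypothetical 16 kHz mono WAV files from the Common Voice 8.0 English test split,
# paired with their reference transcripts.
audio_paths = ["cv8_test_0001.wav", "cv8_test_0002.wav"]
references = ["the first reference transcript", "the second reference transcript"]

# Assuming a CTC checkpoint, transcribe() returns one hypothesis string per file.
hypotheses = asr_model.transcribe(audio_paths)

print(f"Test WER: {100 * jiwer.wer(references, hypotheses):.2f}%")
```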
@@ -201,8 +212,6 @@ The list of the available models in this collection is shown in the following ta
 | Version | Tokenizer             | Vocabulary Size | LS test-other | LS test-clean | WSJ Eval92 | WSJ Dev93 | NSC Part 1 | MLS Test | MLS Dev | MCV Test 6.1 | Train Dataset     |
 |---------|-----------------------|-----------------|---------------|---------------|------------|-----------|------------|----------|---------|--------------|-------------------|
 | 1.6.0   | SentencePiece Unigram | 128             | 4.3           | 2.2           | 2.0        | 2.9       | 7.0        | 7.2      | 6.5     | 8.0          | NeMo ASRSET 2.0   |
-| 1.0.0   | SentencePiece Unigram | 128             | 5.4           | 2.5           | 2.1        | 3.0       | 7.9        | -        | -       | -            | NeMo ASRSET 1.4.1 |
-| rc1.0.0 | WordPiece             | 128             | 6.3           | 2.7           | -          | -         | -          | -        | -       | -            | LibriSpeech       |


 You may use language models to improve the accuracy of the models. The WER(%) of the latest model with different language modeling techniques is reported in the following table.
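The sentence above points to language-model fusion as the way to lower these WER numbers; the card's own LM results table is outside this excerpt. As an illustration only (none of the names or values below come from the model card), shallow fusion of a KenLM n-gram model during CTC beam-search decoding can be sketched with pyctcdecode:

```python
# Illustration only: shallow fusion of an n-gram LM during CTC beam-search decoding,
# one common way a language model improves ASR accuracy. All names are placeholders:
# "vocab" stands in for the model's 128 SentencePiece tokens, "log_probs" for a
# (time, vocab) array of CTC log-probabilities, and the .arpa path for a KenLM
# model trained separately.
import numpy as np
from pyctcdecode import build_ctcdecoder

vocab = ["", "▁the", "▁a", "s", "ing"]                            # placeholder tokens ("" = CTC blank)
log_probs = np.log(np.full((10, len(vocab)), 1.0 / len(vocab)))   # placeholder acoustic output

decoder = build_ctcdecoder(
    vocab,
    kenlm_model_path="en_4gram.arpa",  # hypothetical n-gram LM file
    alpha=0.7,  # language-model weight
    beta=1.0,   # word-insertion bonus
)
print(decoder.decode(log_probs, beam_width=128))
```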