Since this model was trained on publicly available speech datasets, its performance may degrade on speech that contains technical terms or vernacular not covered by the training data. The model may also perform worse on accented speech.

The model exclusively generates the lowercase French alphabet, the hyphen, and the apostrophe. It may therefore not perform well in situations where uppercase characters and additional punctuation are also required.
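Because the output is limited to this character set, reference transcripts usually need to be normalized the same way before computing error rates, otherwise casing and punctuation differences are counted as mistakes. Below is a minimal Python sketch of such a normalization step; the exact accented-character set and the `normalize_reference` helper are illustrative assumptions, not part of this model card.

```python
import re
import unicodedata

# Characters the model can emit: lowercase (accented) French letters, hyphen, apostrophe.
# The accented set below is an assumption for illustration, not taken from the model card.
_KEEP = re.compile(r"[^a-zàâäæçéèêëîïôöœùûüÿ' -]")

def normalize_reference(text: str) -> str:
    """Map a reference transcript into the model's output space before scoring."""
    text = unicodedata.normalize("NFC", text.lower())
    text = text.replace("\u2019", "'")          # curly apostrophe -> straight apostrophe
    text = _KEEP.sub(" ", text)                 # drop punctuation, digits, other symbols
    return re.sub(r"\s+", " ", text).strip()    # collapse whitespace

print(normalize_reference("Était-ce l'été 2023, à Paris ?"))
# -> "était-ce l'été à paris"  (digits are dropped too; spell them out if needed)
```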
## References

[1] [Fast Conformer with Linearly Scalable Attention for Efficient Speech Recognition](https://arxiv.org/abs/2305.05084)

[2] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece)

[3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
## Acknowledgements

Thanks to NVIDIA for its research on the advanced model architecture and to the NeMo team for the training framework.