Commit 954ede7 · 1 Parent(s): 74057b1
Update README.md
README.md CHANGED
@@ -13,7 +13,8 @@ Fine-tuned (more precisely, continue trained) 50k steps model on Japanese using
 Original repos, many thanks!:
 [S3PRL](https://github.com/s3prl/s3prl/tree/main/s3prl/pretrain)
 - Used this for training (with small modifications to train on our own datasets).
-
+
+[distilhubert (hf)](https://huggingface.co/ntu-spml/distilhubert)
 
 
 Note: Like the original, this model does not have a tokenizer, as it was pretrained on audio alone. To use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.
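For context, the Note in the diff describes the usual Hugging Face recipe for turning an audio-only checkpoint into a speech recognizer: build a character-level CTC tokenizer from your labeled transcripts, then fine-tune the model with a CTC head. The sketch below is not part of this commit; it assumes the `transformers` library, uses the referenced `ntu-spml/distilhubert` repo id and a toy vocabulary as placeholders, and should be adapted per the linked blog post (substitute this repository's own checkpoint id and a real vocabulary).

```python
# Hedged sketch of the tokenizer + CTC fine-tuning setup the Note describes.
# The repo id and the tiny vocabulary below are illustrative placeholders only.
import json
from transformers import (
    Wav2Vec2CTCTokenizer,
    Wav2Vec2FeatureExtractor,
    Wav2Vec2Processor,
    HubertForCTC,
)

model_repo_id = "ntu-spml/distilhubert"  # substitute this repository's checkpoint

# 1) Build a character-level vocabulary from your labeled transcripts (toy example).
vocab = {"[PAD]": 0, "[UNK]": 1, "|": 2, "a": 3, "b": 4}
with open("vocab.json", "w") as f:
    json.dump(vocab, f)

tokenizer = Wav2Vec2CTCTokenizer(
    "vocab.json", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|"
)
feature_extractor = Wav2Vec2FeatureExtractor(
    feature_size=1,
    sampling_rate=16000,
    padding_value=0.0,
    do_normalize=True,
    return_attention_mask=False,
)
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)

# 2) Load the audio-only pretrained model with a randomly initialized CTC head.
model = HubertForCTC.from_pretrained(
    model_repo_id,
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),
)

# 3) Fine-tune on labeled speech/text pairs, e.g. with transformers.Trainer,
#    following the linked fine-tune-wav2vec2-english blog post.
```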