# wav2vec 2.0 with CTC trained on CommonVoice Spanish (No LM)

This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on CommonVoice (Spanish Language) within
SpeechBrain. For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io).

This ASR system is composed of two different but linked blocks:
- Tokenizer (char) that transforms words into characters and is trained with
  the train transcriptions (train.tsv) of CommonVoice (ES).
- Acoustic model (wav2vec2.0 + CTC). A pretrained wav2vec 2.0 model ([wav2vec2-large-xlsr-53-spanish](https://huggingface.co/facebook/wav2vec2-large-xlsr-53-spanish)) is combined with two DNN layers and fine-tuned on CommonVoice (ES).
  The obtained final acoustic representation is given to the CTC decoder.

The system is trained with recordings sampled at 16 kHz (single channel).

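Since the system was trained on 16 kHz, single-channel audio, a minimal conversion sketch with torchaudio is shown below. This is only an illustration: torchaudio availability and the file names `input.wav` / `input_16k.wav` are assumptions, not part of the model card.

```python
# Sketch: convert an arbitrary recording to 16 kHz mono before transcription.
# Assumes torchaudio is available; the file names are illustrative placeholders.
import torchaudio

waveform, sample_rate = torchaudio.load("input.wav")   # (channels, samples)
waveform = waveform.mean(dim=0, keepdim=True)          # downmix to a single channel
if sample_rate != 16000:
    waveform = torchaudio.transforms.Resample(orig_freq=sample_rate, new_freq=16000)(waveform)
torchaudio.save("input_16k.wav", waveform, 16000)
```
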
Install SpeechBrain and Transformers with `pip install speechbrain transformers`.

Please notice that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).

### Transcribing your own audio files (in Spanish)

```python
from speechbrain.pretrained import EncoderASR

asr_model = EncoderASR.from_hparams(source="Voyager1/asr-wav2vec2-commonvoice-es", savedir="pretrained_models/asr-wav2vec2-commonvoice-es")
asr_model.transcribe_file("Voyager1/asr-wav2vec2-commonvoice-es/example-es.wav")
```

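As a small follow-up to the snippet above, `transcribe_file` also returns the decoded transcription, so it can be captured and printed. This is a sketch that simply reuses the `asr_model` object defined above.

```python
# Keep the transcription returned by transcribe_file and print it.
text = asr_model.transcribe_file("Voyager1/asr-wav2vec2-commonvoice-es/example-es.wav")
print(text)
```
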
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.

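For illustration, here is a sketch of loading the same pretrained model on a CUDA device with `run_opts`. The `source` and `savedir` values are the ones used above; a CUDA-capable GPU is assumed.

```python
# Sketch: load the pretrained model on a CUDA GPU via run_opts.
from speechbrain.pretrained import EncoderASR

asr_model = EncoderASR.from_hparams(
    source="Voyager1/asr-wav2vec2-commonvoice-es",
    savedir="pretrained_models/asr-wav2vec2-commonvoice-es",
    run_opts={"device": "cuda"},  # requires a CUDA-capable GPU
)
```
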
### Parallel Inference on a Batch
Please [see this Colab notebook](https://colab.research.google.com/drive/1hX5ZI9S4jHIjahFCZnhwwQmFoGAi3tmu?usp=sharing) to figure out how to transcribe a batch of input sentences in parallel using a pre-trained model. A minimal sketch is also shown below.

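The following batched-transcription sketch is an assumption-laden illustration, not part of the official card: the two file paths are placeholders, and the padding plus relative-length handling follow the usual SpeechBrain convention (the Colab notebook above covers this in full).

```python
# Sketch: transcribe several files in one padded batch (file names are illustrative).
import torch
from speechbrain.pretrained import EncoderASR

asr_model = EncoderASR.from_hparams(
    source="Voyager1/asr-wav2vec2-commonvoice-es",
    savedir="pretrained_models/asr-wav2vec2-commonvoice-es",
)

files = ["audio1.wav", "audio2.wav"]                  # placeholder paths
signals = [asr_model.load_audio(f) for f in files]    # 1-D tensors at the model's rate
lengths = torch.tensor([s.shape[0] for s in signals], dtype=torch.float)
batch = torch.nn.utils.rnn.pad_sequence(signals, batch_first=True)
wav_lens = lengths / lengths.max()                    # relative lengths in [0, 1]
predicted_words, predicted_tokens = asr_model.transcribe_batch(batch, wav_lens)
print(predicted_words)
```
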
### Training
The model was trained with SpeechBrain.

3. Run Training:
```bash
cd recipes/CommonVoice/ASR/seq2seq
python train.py hparams/train_es_with_wav2vec.yaml --data_folder=your_data_folder
```

You can find our training results (models, logs, etc.) [here](https://drive.google.com/drive/folders/19G2Zm8896QSVDqVfs7PS_W86-K0-5xeC?usp=sharing).

### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.