Update README.md
README.md
CHANGED
@@ -71,7 +71,7 @@ First of all, please install transformers and SpeechBrain with the following command:
pip install speechbrain transformers
```

-Please notice that we encourage you to read
+Please notice that we encourage you to read tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).

### Transcribing your own audio files (in Spanish)
@@ -87,38 +87,11 @@ asr_model.transcribe_file("Voyager1/asr-wav2vec2-commonvoice-es/example-es.wav")
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.


-### Training
-The model was trained with SpeechBrain.
-To train it from scratch follow these steps:
-1. Clone SpeechBrain:
-```bash
-git clone https://github.com/speechbrain/speechbrain/
-```
-2. Install it:
-```bash
-cd speechbrain
-pip install -r requirements.txt
-pip install -e .
-```
-
-3. Run Training:
-```bash
-cd recipes/CommonVoice/ASR/seq2seq
-python train.py hparams/train_es_with_wav2vec.yaml --data_folder=your_data_folder
-```
-
### Limitations
-
-
-
-# **About SpeechBrain**
-- Website: https://speechbrain.github.io/
-- Code: https://github.com/speechbrain/speechbrain/
-- HuggingFace: https://huggingface.co/speechbrain/
+We do not provide any warranty on the performance achieved by this model when used on other datasets.


-# **
-Please, cite SpeechBrain if you use it for your research or business.
+# **Citations**

```bibtex
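For reference, the `run_opts` usage mentioned in the second hunk could look like the minimal sketch below. It is not part of this diff: the `EncoderDecoderASR` interface class and the `savedir` value are assumptions, while the `source` repo, the example audio path, and `run_opts={"device":"cuda"}` come from the README text shown above.

```python
# Minimal sketch, not part of the diff above.
# Assumptions: the model loads through SpeechBrain's EncoderDecoderASR interface,
# and savedir is an arbitrary local cache directory.
from speechbrain.pretrained import EncoderDecoderASR

asr_model = EncoderDecoderASR.from_hparams(
    source="Voyager1/asr-wav2vec2-commonvoice-es",
    savedir="pretrained_models/asr-wav2vec2-commonvoice-es",
    run_opts={"device": "cuda"},  # omit this argument to run on CPU instead
)
print(asr_model.transcribe_file("Voyager1/asr-wav2vec2-commonvoice-es/example-es.wav"))
```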