Transformers
PyTorch
wav2vec2
pretraining
speech
xls_r
xls_r_pretrained
Inference Endpoints
patrickvonplaten committed b1196bf · 1 Parent(s): 618f1bb

Update README.md

![model image](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/xls_r.png)

[Facebook's Wav2Vec2 XLS-R](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) with **300 million** parameters.

XLS-R is Facebook AI's large-scale multilingual pretrained model for speech (the "XLM-R for Speech"). It is pretrained on 436k hours of unlabeled speech, including VoxPopuli, MLS, CommonVoice, BABEL, and VoxLingua107, using the wav2vec 2.0 objective across 128 languages. When using the model, make sure that your speech input is sampled at 16 kHz.
 
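Because the model expects 16 kHz input, audio recorded at other rates must be resampled before it is fed to the feature extractor. A minimal sketch using SciPy's polyphase resampler (the helper name `to_16khz` is ours for illustration, not part of any library):

```python
import numpy as np
from math import gcd
from scipy.signal import resample_poly

def to_16khz(waveform: np.ndarray, orig_sr: int) -> np.ndarray:
    """Resample a 1-D waveform to the 16 kHz rate XLS-R expects."""
    target_sr = 16_000
    if orig_sr == target_sr:
        return waveform
    # Reduce the rate ratio to the smallest integer up/down factors.
    g = gcd(orig_sr, target_sr)
    return resample_poly(waveform, up=target_sr // g, down=orig_sr // g)

# One second of a 440 Hz tone at 44.1 kHz, resampled to 16 kHz.
sr = 44_100
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 440 * t)
audio_16k = to_16khz(audio, sr)
print(len(audio_16k))  # 16000 samples, i.e. one second at 16 kHz
```

The resampled array can then be passed to the model's feature extractor with `sampling_rate=16_000` so that Transformers can verify the rate matches what the checkpoint was pretrained on.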