mzboito committed · Commit 375f8f6 · verified · Parent(s): f4a5d6d

Update README.md

Files changed (1): README.md (+9 −7)

README.md CHANGED
@@ -14,14 +14,21 @@ metrics:
 pipeline_tag: automatic-speech-recognition
 ---
 
-**This is a light CTC-based Automatic Speech Recognition system for French.**
+**This is a small CTC-based Automatic Speech Recognition system for French.**
 This model is part of the SLU demo available here: [LINK TO THE DEMO GOES HERE]
 
 This is a [mHuBERT-147](https://huggingface.co/utter-project/mHuBERT-147) ASR fine-tuned model.
 
 * Training data: 123 hours (84,707 utterances)
 * Normalization: Whisper normalization
-* Performance:
+
+# Table of Contents:
+1. [Performance](https://huggingface.co/naver/mHuBERT-147-ASR-fr#performance)
+2. [Training Parameters](https://huggingface.co/naver/mHuBERT-147-ASR-fr#training-parameters)
+3. [ASR Model class](https://huggingface.co/naver/mHuBERT-147-ASR-fr#asr-model-class)
+4. [Running inference](https://huggingface.co/naver/mHuBERT-147-ASR-fr#running-inference)
+
+## Performance
 
 |                    | **dev WER** | **dev CER** | **test WER** | **test CER** |
 |:------------------:|:-----------:|:-----------:|:------------:|:------------:|
@@ -29,11 +36,6 @@ This is a [mHuBERT-147](https://huggingface.co/utter-project/mHuBERT-147) ASR fine-tuned model.
 | **fleurs102**      | 20.0        | 7.0         | 22.0         | 7.7          |
 | **CommonVoice 17** | 16.0        | 4.9         | 19.0         | 6.5          |
 
-# Table of Contents:
-1. [Training Parameters](https://huggingface.co/naver/mHuBERT-147-ASR-fr#training-parameters)
-2. [ASR Model class](https://huggingface.co/naver/mHuBERT-147-ASR-fr#asr-model-class)
-3. [Running inference](https://huggingface.co/naver/mHuBERT-147-ASR-fr#running-inference)
-
 ## Training Parameters
 The training parameters are available in [config.yaml](https://huggingface.co/naver/mHuBERT-147-ASR-fr/blob/main/config.yaml).
 We highlight the use of 0.3 for hubert.final_dropout, which we found to be very helpful in convergence. We also use fp32 training, as we found fp16 training to be unstable.
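
The WER and CER figures in the Performance table are standard edit-distance metrics, computed after Whisper normalization. As a minimal illustrative sketch (a plain Levenshtein implementation, not the authors' actual evaluation script), word and character error rates can be computed like this:

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance between two sequences.
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

def wer(ref, hyp):
    # Word error rate: edit distance over the reference word count.
    r, h = ref.split(), hyp.split()
    return levenshtein(r, h) / len(r)

def cer(ref, hyp):
    # Character error rate: edit distance over the reference character count.
    return levenshtein(list(ref), list(hyp)) / len(ref)
```

For example, `wer("bonjour le monde", "bonjour monde")` is one deletion over three reference words, i.e. about 0.33.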