mpoyraz committed 631a808 (parent: 7b635e6): add the model card

Files changed (1): README.md (+46, -0)
---
license: cc-by-4.0
language: tr
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- tr
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
---

# wav2vec2-xls-r-300m-cv7-turkish

## Model description
This ASR model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for Turkish.
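
The model can be loaded for inference with the `transformers` ASR pipeline. A minimal sketch, assuming this card is published as `mpoyraz/wav2vec2-xls-r-300m-cv7-turkish` on the Hub and using `sample.wav` as a placeholder for a 16 kHz mono Turkish recording:

```python
# Minimal inference sketch; the Hub id and audio path are assumptions,
# not taken from this card.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="mpoyraz/wav2vec2-xls-r-300m-cv7-turkish",  # assumed Hub id
)

# "sample.wav" is a placeholder: a 16 kHz mono Turkish audio file.
print(asr("sample.wav")["text"])
```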

## Training and evaluation data
The following datasets were used for fine-tuning (a sketch of the split handling follows the list):
- [Common Voice 7.0 TR](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0): the entire `validated` split except the `test` split was used for training.
- [MediaSpeech](https://www.openslr.org/108/)
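
The `validated`-minus-`test` selection described above could be approximated with the `datasets` library. This is a hedged sketch, not the card's actual pre-processing (which lives in the wav2vec2-turkish repo); it assumes clips are identified by their `path` column and that you have accepted the dataset's terms on the Hub:

```python
# Sketch only: train on `validated` minus the clips that appear in `test`.
from datasets import load_dataset

validated = load_dataset(
    "mozilla-foundation/common_voice_7_0", "tr", split="validated", use_auth_token=True
)
test = load_dataset(
    "mozilla-foundation/common_voice_7_0", "tr", split="test", use_auth_token=True
)

test_paths = set(test["path"])  # identify test clips by audio path
train = validated.filter(lambda ex: ex["path"] not in test_paths)
```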

## Training procedure
To support both of the datasets above, custom pre-processing and loading steps were implemented in the [wav2vec2-turkish](https://github.com/mpoyraz/wav2vec2-turkish) repo, which was used for fine-tuning.

### Training hyperparameters
The following hyperparameters were used for fine-tuning (a sketch of how they map onto `transformers` follows the list):
- learning_rate 2e-4
- num_train_epochs 10
- warmup_steps 500
- freeze_feature_extractor
- mask_time_prob 0.1
- mask_feature_prob 0.05
- feat_proj_dropout 0.05
- attention_dropout 0.05
- final_dropout 0.05
- activation_dropout 0.05
- per_device_train_batch_size 8
- per_device_eval_batch_size 8
- gradient_accumulation_steps 8
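
As referenced above, here is a sketch of how these values might map onto `transformers`: the masking and dropout values are `Wav2Vec2Config` options set at load time, while the optimizer and batching values go into `TrainingArguments`. This is an illustration under those assumptions, not the actual training script, and the `output_dir` is hypothetical:

```python
from transformers import TrainingArguments, Wav2Vec2ForCTC

# Model-side hyperparameters are passed through to the model config.
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-xls-r-300m",
    mask_time_prob=0.1,
    mask_feature_prob=0.05,
    feat_proj_dropout=0.05,
    attention_dropout=0.05,
    final_dropout=0.05,
    activation_dropout=0.05,
)
model.freeze_feature_extractor()  # the freeze_feature_extractor flag above

training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-300m-cv7-turkish",  # hypothetical
    learning_rate=2e-4,
    num_train_epochs=10,
    warmup_steps=500,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,
)
```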

### Framework versions
- Transformers 4.16.0.dev0
- PyTorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3

## Language Model
An n-gram language model was trained on Turkish Wikipedia articles using KenLM; the [ngram-lm-wiki](https://github.com/mpoyraz/ngram-lm-wiki) repo was used to generate the ARPA LM and convert it into binary format.
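
One common way to pair such a binary KenLM model with the acoustic model is beam-search decoding via `pyctcdecode`; the card does not specify the decoding setup, so this is a hedged sketch assuming the Hub id `mpoyraz/wav2vec2-xls-r-300m-cv7-turkish` and a placeholder LM path `lm.binary`:

```python
from pyctcdecode import build_ctcdecoder
from transformers import Wav2Vec2Processor, Wav2Vec2ProcessorWithLM

processor = Wav2Vec2Processor.from_pretrained(
    "mpoyraz/wav2vec2-xls-r-300m-cv7-turkish"  # assumed Hub id
)

# pyctcdecode expects the vocabulary ordered by token id.
vocab = processor.tokenizer.get_vocab()
labels = [tok for tok, _ in sorted(vocab.items(), key=lambda kv: kv[1])]

decoder = build_ctcdecoder(labels=labels, kenlm_model_path="lm.binary")  # placeholder path
processor_with_lm = Wav2Vec2ProcessorWithLM(
    feature_extractor=processor.feature_extractor,
    tokenizer=processor.tokenizer,
    decoder=decoder,
)
```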