cchoi1022 committed on
Commit f203c0b · 1 Parent(s): 01d6921

Upload lm-boosted decoder

README.md ADDED
---
language: en
datasets:
- librispeech_asr
tags:
- speech
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
model-index:
- name: wav2vec2-large-960h-lv60
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: LibriSpeech (clean)
      type: librispeech_asr
      config: clean
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 1.9
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: LibriSpeech (other)
      type: librispeech_asr
      config: other
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 3.9
---

# Wav2Vec2-Large-960h-Lv60 + Self-Training

[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)

This is the large model, pretrained and fine-tuned on 960 hours of Libri-Light and LibriSpeech 16 kHz sampled speech audio. The model was trained with a [self-training objective](https://arxiv.org/abs/2010.11430). When using the model, make sure that your speech input is also sampled at 16 kHz.

[Paper](https://arxiv.org/abs/2006.11477)

Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli

**Abstract**

We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.

The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.

# Usage

To transcribe audio files, the model can be used as a standalone acoustic model as follows:

```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch

# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")

# load dummy dataset and read sound files
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")

# preprocess the raw waveform (input must be sampled at 16 kHz)
input_values = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt", padding="longest").input_values

# retrieve logits
logits = model(input_values).logits

# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```

## Evaluation

This code snippet shows how to evaluate **facebook/wav2vec2-large-960h-lv60-self** on LibriSpeech's "clean" and "other" test data.

```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
from jiwer import wer


librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")

model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60-self").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")

def map_to_pred(batch):
    inputs = processor(batch["audio"]["array"], sampling_rate=16_000, return_tensors="pt", padding="longest")
    input_values = inputs.input_values.to("cuda")
    attention_mask = inputs.attention_mask.to("cuda")

    with torch.no_grad():
        logits = model(input_values, attention_mask=attention_mask).logits

    predicted_ids = torch.argmax(logits, dim=-1)
    # batch_decode returns a list; store the single string so jiwer
    # receives a flat list of transcriptions
    batch["transcription"] = processor.batch_decode(predicted_ids)[0]
    return batch

result = librispeech_eval.map(map_to_pred, remove_columns=["audio"])

print("WER:", wer(result["text"], result["transcription"]))
```

*Result (WER)*:

| "clean" | "other" |
|---|---|
| 1.9 | 3.9 |
config.json ADDED
{
  "_name_or_path": "facebook/wav2vec2-large-960h-lv60-self",
  "activation_dropout": 0.1,
  "apply_spec_augment": true,
  "architectures": [
    "Wav2Vec2ForCTC"
  ],
  "attention_dropout": 0.1,
  "bos_token_id": 1,
  "codevector_dim": 256,
  "contrastive_logits_temperature": 0.1,
  "conv_bias": true,
  "conv_dim": [
    512,
    512,
    512,
    512,
    512,
    512,
    512
  ],
  "conv_kernel": [
    10,
    3,
    3,
    3,
    3,
    2,
    2
  ],
  "conv_stride": [
    5,
    2,
    2,
    2,
    2,
    2,
    2
  ],
  "ctc_loss_reduction": "sum",
  "ctc_zero_infinity": false,
  "diversity_loss_weight": 0.1,
  "do_stable_layer_norm": true,
  "eos_token_id": 2,
  "feat_extract_activation": "gelu",
  "feat_extract_dropout": 0.0,
  "feat_extract_norm": "layer",
  "feat_proj_dropout": 0.1,
  "feat_quantizer_dropout": 0.0,
  "final_dropout": 0.1,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout": 0.1,
  "hidden_dropout_prob": 0.1,
  "hidden_size": 1024,
  "initializer_range": 0.02,
  "intermediate_size": 4096,
  "layer_norm_eps": 1e-05,
  "layerdrop": 0.1,
  "mask_feature_length": 10,
  "mask_feature_prob": 0.0,
  "mask_time_length": 10,
  "mask_time_prob": 0.05,
  "model_type": "wav2vec2",
  "num_attention_heads": 16,
  "num_codevector_groups": 2,
  "num_codevectors_per_group": 320,
  "num_conv_pos_embedding_groups": 16,
  "num_conv_pos_embeddings": 128,
  "num_feat_extract_layers": 7,
  "num_hidden_layers": 24,
  "num_negatives": 100,
  "pad_token_id": 0,
  "proj_codevector_dim": 256,
  "transformers_version": "4.7.0.dev0",
  "vocab_size": 32
}
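The `conv_stride` values in the config determine how aggressively the feature extractor downsamples the 16 kHz waveform: the strides multiply to a hop of 320 samples, i.e. one feature frame every 20 ms (50 frames per second). A quick check of that arithmetic:

```python
# Hop length implied by the conv_stride list in config.json.
conv_stride = [5, 2, 2, 2, 2, 2, 2]
sampling_rate = 16_000

hop = 1
for s in conv_stride:
    hop *= s

frame_ms = 1000 * hop / sampling_rate
print(hop, frame_ms)  # 320 20.0 -> one frame per 320 samples = 20 ms
```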
feature_extractor_config.json ADDED
{
  "do_normalize": true,
  "feature_dim": 1,
  "padding_side": "right",
  "padding_value": 0.0,
  "return_attention_mask": true,
  "sampling_rate": 16000
}
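`"do_normalize": true` means the processor zero-means and unit-variance-normalizes each raw waveform before it reaches the model. A minimal pure-Python sketch of that normalization (the sample values are invented; the actual implementation lives in the transformers feature extractor):

```python
# Zero-mean, unit-variance normalization, as implied by "do_normalize": true.
def normalize(waveform, eps=1e-7):
    """Return (x - mean) / sqrt(var + eps) for each sample."""
    n = len(waveform)
    mean = sum(waveform) / n
    var = sum((x - mean) ** 2 for x in waveform) / n
    return [(x - mean) / (var + eps) ** 0.5 for x in waveform]

samples = [0.1, -0.1, 0.3, -0.3]  # toy waveform
normed = normalize(samples)
print(round(sum(normed) / len(normed), 6))  # 0.0 (zero mean)
```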
flax_model.msgpack ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:90568e6185400541adead27c34d550df8fde3d35515c314fae28eaabbfe166a1
size 1261901472
pytorch_model.bin ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:00b604cf4d28e86559e8adaeb3a186daa89dc37f5ab216771a0a15a26db0de9f
size 1262055246
tf_model.h5 ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:924217bb609535355134d3da00d37c747177c1366f8a5f296bb4822942cb6add
size 1262396960