This dataset is a subset of [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) that has been adversarially modified. It is designed to fool ASR models into predicting a target transcription of our choosing instead of the correct output.

The dataset contains several splits. Each split consists of the same utterances, modified with different types and amounts of noise. Three noise configurations have been used:

* Adversarial noise of radius $\epsilon=0.04$ (`adv_0.04` split)
* Adversarial noise of radius $\epsilon=0.015$ (`adv_0.015` split)
* Adversarial noise of radius $\epsilon=0.015$ combined with Room Impulse Response (RIR) noise (`adv_0.015_RIR` split)

In addition, we provide the original inputs (`natural` split).

For each noise configuration we provide two variants: one labeled with the original, or "natural", correct transcriptions (`nat_txt`) and one labeled with our selected "target" transcriptions (`tgt_txt`). For instance, `adv_0.04_nat_txt` pairs the modified utterances with their original transcriptions, while `adv_0.04_tgt_txt` pairs them with the attack targets. An ASR model that this dataset fools would get a low target WER and a high natural WER; an ASR model robust to this dataset would get a low natural WER and a high target WER. Therefore we provide 8 splits in total:
* `natural_nat_txt`
* `natural_tgt_txt`
* `adv_0.04_nat_txt`
* `adv_0.04_tgt_txt`
* `adv_0.015_nat_txt`
* `adv_0.015_tgt_txt`
* `adv_0.015_RIR_nat_txt`
* `adv_0.015_RIR_tgt_txt`

You should evaluate your model on this dataset as you would evaluate it on LibriSpeech. Here is an example with Wav2Vec2:

```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
from jiwer import wer


librispeech_adv_eval = load_dataset("RaphaelOlivier/librispeech_asr_adversarial", "adv", split="adv_0.015_tgt_txt")

model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")

def map_to_pred(batch):
    # `batched=True` passes a list of audio dicts; extract the raw waveforms
    audio_arrays = [audio["array"] for audio in batch["audio"]]
    input_values = processor(audio_arrays, sampling_rate=16_000, return_tensors="pt", padding="longest").input_values
    with torch.no_grad():
        logits = model(input_values.to("cuda")).logits

    predicted_ids = torch.argmax(logits, dim=-1)
    batch["transcription"] = processor.batch_decode(predicted_ids)
    return batch

result = librispeech_adv_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])

print("WER:", wer(result["text"], result["transcription"]))
```
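To compare robustness across all conditions, the same evaluation can be repeated over every split. A sketch of such a loop, where the `transcribe` callback (mapping a waveform to text) stands in for the Wav2Vec2 inference above; imports are kept inside the helper so the split list can be built without downloading anything:

```python
# The 8 splits listed above, built from the noise and transcription labels
NOISES = ["natural", "adv_0.04", "adv_0.015", "adv_0.015_RIR"]
SPLITS = [f"{noise}_{txt}" for noise in NOISES for txt in ("nat_txt", "tgt_txt")]

def evaluate_split(name, transcribe):
    """WER of `transcribe` (waveform -> text) on one split; downloads the data."""
    from datasets import load_dataset  # imported lazily: only needed at evaluation time
    from jiwer import wer
    ds = load_dataset("RaphaelOlivier/librispeech_asr_adversarial", "adv", split=name)
    return wer(ds["text"], [transcribe(ex["audio"]["array"]) for ex in ds])
```

Calling `evaluate_split(name, transcribe)` for each `name` in `SPLITS` then gives one WER figure per split for your model.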

*Result (WER)*:

| "0.015 target text" | "0.015 natural text" | "0.04 target text" | "0.04 natural text" |
|---|---|---|---|
| 58.2 | 108 | 58.2 | 49.5 |