---
annotations_creators:
- crowdsourced
- machine-generated
language_creators:
- crowdsourced
- expert-generated
language:
- en
- fr
- pl
license:
- cc-by-4.0
multilinguality:
- multilingual
paperswithcode_id: evi-multilingual-spoken-dialogue-tasks-and-1
language_bcp47:
- en
- en-GB
- fr
- fr-FR
- pl
---

# EVI

## Dataset Description
- **Paper:** [EVI: Multilingual Spoken Dialogue Tasks and Dataset for Knowledge-Based Enrolment, Verification, and Identification](https://arxiv.org/abs/2204.13496)
- **Repository:** [Github](https://github.com/PolyAI-LDN/evi-paper)

EVI is a challenging multilingual spoken dialogue dataset
with 5,506 dialogues in English, French, and Polish
that can be used for benchmarking and developing
knowledge-based enrolment, verification, and identification for spoken dialogue systems.

## Example
EVI can be downloaded and used as follows:

```py
from datasets import load_dataset
evi = load_dataset("PolyAI/evi", "en-GB") # for British English

# to download data from all locales use:
# evi = load_dataset("PolyAI/evi", "all")

# see structure
print(evi)
```

## Dataset Structure

We show detailed information for the `en-GB` configuration of the dataset.
All other configurations share the same structure.

### Data Instances

An example of a data instance of the config `en-GB` looks as follows:

```
{
    "language": 0,
    "dialogue_id": "CA0007220161df7be23f4554704c8720f5",
    "speaker_id": "e80e9bdd33eda593f16a1b6f2fb228ff",
    "turn_id": 0,
    "target_profile_id": "en.GB.608",
    "asr_transcription": "w20 a b",
    "asr_nbest": ["w20 a b", "w20 a bee", "w20 a baby"],
    "path": "audios/en/CA0007220161df7be23f4554704c8720f5/0.wav",
    "audio": {
        "path": "/home/georgios/.cache/huggingface/datasets/downloads/extracted/0335ebc25feace53243133b49ba17ba18e26f0f97cb083ffdf4e73dd7427b443/audios/en/CA0007220161df7be23f4554704c8720f5/0.wav",
        "array": array([ 0.00024414,  0.00024414,  0.00024414, ...,  0.00024414,
       -0.00024414,  0.00024414], dtype=float32),
       "sampling_rate": 8000,
    }
}
```

### Data Fields
The data fields are the same among all splits.
- **language** (int): ID of language
- **dialogue_id** (str): the ID of the dialogue
- **speaker_id** (str): the ID of the speaker
- **turn_id** (int): the ID of the turn
- **target_profile_id** (str): the ID of the target profile
- **asr_transcription** (str): ASR transcription of the audio file
- **asr_nbest** (list): n-best ASR transcriptions of the audio file
- **path** (str): Path to the audio file
- **audio** (dict): Audio object including loaded audio array, sampling rate and path of audio
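As a minimal illustration of the field layout, the sketch below builds a hypothetical turn record that mirrors the `en-GB` example above (the audio field is omitted for brevity; no data is fetched from the Hub):

```python
# Hypothetical turn record mirroring the en-GB example instance above;
# field names and types follow the "Data Fields" list.
turn = {
    "language": 0,
    "dialogue_id": "CA0007220161df7be23f4554704c8720f5",
    "speaker_id": "e80e9bdd33eda593f16a1b6f2fb228ff",
    "turn_id": 0,
    "target_profile_id": "en.GB.608",
    "asr_transcription": "w20 a b",
    "asr_nbest": ["w20 a b", "w20 a bee", "w20 a baby"],
    "path": "audios/en/CA0007220161df7be23f4554704c8720f5/0.wav",
}

# In the example instance, the 1-best ASR hypothesis is also
# the first entry of the n-best list.
assert turn["asr_nbest"][0] == turn["asr_transcription"]
print(turn["dialogue_id"], turn["turn_id"], turn["asr_transcription"])
```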


### Data Splits
Every config only has the `"test"` split containing *ca.* 1,800 dialogues.
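Since the split is flat (one row per turn), full dialogues can be reconstructed by grouping rows on `dialogue_id` and ordering them by `turn_id`. A sketch with hypothetical rows (field names as above; the same grouping works on the real split):

```python
from collections import defaultdict

# Hypothetical flat rows, one per turn, as in the "test" split.
rows = [
    {"dialogue_id": "dlg-a", "turn_id": 0, "asr_transcription": "hello"},
    {"dialogue_id": "dlg-a", "turn_id": 1, "asr_transcription": "my postcode is"},
    {"dialogue_id": "dlg-b", "turn_id": 0, "asr_transcription": "good morning"},
]

# Group turns by dialogue and order each dialogue by turn_id.
dialogues = defaultdict(list)
for row in rows:
    dialogues[row["dialogue_id"]].append(row)
for turns in dialogues.values():
    turns.sort(key=lambda t: t["turn_id"])

print({d: [t["asr_transcription"] for t in ts] for d, ts in dialogues.items()})
```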

## Dataset Creation

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

All datasets are licensed under the [Creative Commons Attribution 4.0 International license (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/).

### Citation Information

```
@inproceedings{Spithourakis2022evi,
    author      = {Georgios P. Spithourakis and
                   Ivan Vuli\'{c} and
                   Micha\l{} Lis and
                   I\~{n}igo Casanueva
                   and Pawe\l{} Budzianowski},
    title       = {{EVI}: Multilingual Spoken Dialogue Tasks and Dataset for Knowledge-Based Enrolment, Verification, and Identification},
    year        = {2022},
    note        = {Data available at https://github.com/PolyAI-LDN/evi-paper},
    url         = {https://arxiv.org/abs/2204.13496},
    booktitle   = {Findings of NAACL (publication pending)}
}
```

### Contributions
Thanks to [@polinaeterna](https://github.com/polinaeterna) for adding this dataset.