# path: test_freq/*, metadata.jsonl
---

## Dataset Description

- **Homepage:** https://multidialog.github.io
- **Repository:** https://github.com/MultiDialog/MultiDialog
- **Paper:** https://arxiv.org/abs/2106.06909
- **Point of Contact:** [[email protected]](mailto:[email protected]), [[email protected]](mailto:[email protected])

This dataset includes manually annotated metadata linking audio files to transcriptions, emotions, and other attributes. The `test_freq.parquet` file contains these links and metadata.
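
The metadata file can be inspected directly; a minimal sketch using pandas, assuming `test_freq.parquet` has been downloaded locally (the snippet only prints whatever columns the annotations provide):

```python
import pandas as pd

# read the annotation metadata for the test_freq split
meta = pd.read_parquet("test_freq.parquet")

# inspect the available columns and a few rows
print(meta.columns.tolist())
print(meta.head())
```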

### Example Usage

There are 'train', 'test_freq', 'test_rare', 'valid_freq', and 'valid_rare' splits. Below is an example usage.

```python
from datasets import load_dataset

# MultiDialog is hosted on the Hugging Face Hub under this repository id
gs = load_dataset("IVLLab/MultiDialog", "valid_freq", use_auth_token=True)

# see the dataset structure
print(gs)

# load an audio sample on the fly
audio_input = gs["valid_freq"][0]["audio"]    # first decoded audio sample
transcription = gs["valid_freq"][0]["value"]  # first transcription
```
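
Streaming mode avoids downloading and extracting the archives up front; a minimal sketch, assuming the same repository id and split as above:

```python
from datasets import load_dataset

# stream samples instead of downloading the full split
gs_stream = load_dataset("IVLLab/MultiDialog", "valid_freq", streaming=True)

# grab the first sample; in streaming mode 'path' is relative to the archive
sample = next(iter(gs_stream["valid_freq"]))
print(sample["audio"]["sampling_rate"])
```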

### Supported Tasks

- `multimodal dialogue generation`
- `automatic-speech-recognition`: The dataset can be used to train a model for Automatic Speech Recognition (ASR); see the sketch after this list.
- `text-to-speech`: The dataset can also be used to train a model for Text-To-Speech (TTS).
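
For ASR, decoded samples can be fed straight to an off-the-shelf recognizer; a minimal sketch with a generic `transformers` pipeline (the whisper-tiny checkpoint is only an illustrative choice, not one tied to this dataset):

```python
from datasets import load_dataset
from transformers import pipeline

gs = load_dataset("IVLLab/MultiDialog", "valid_freq", use_auth_token=True)

# any ASR checkpoint that accepts 16 kHz audio will do here
asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny")

# the pipeline accepts the decoded array plus its sampling rate
sample = gs["valid_freq"][0]["audio"]
prediction = asr({"array": sample["array"], "sampling_rate": sample["sampling_rate"]})
print(prediction["text"])
```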

### Languages

MultiDialog contains audio and transcription data in English.

## Dataset Structure

### Data Instances

```python
{
    'segment_id': 'YOU0000000315_S0000660',
    'speaker': 'N/A',
    'text': "AS THEY'RE LEAVING <COMMA> CAN KASH PULL ZAHRA ASIDE REALLY QUICKLY <QUESTIONMARK>",
    'audio': {
        # in streaming mode 'path' will be 'xs_chunks_0000/YOU0000000315_S0000660.wav'
        'path': '/home/user/.cache/huggingface/datasets/downloads/extracted/9d48cf31/xs_chunks_0000/YOU0000000315_S0000660.wav',
        'array': array([0.0005188 , 0.00085449, 0.00012207, ..., 0.00125122, 0.00076294, 0.00036621], dtype=float32),
        'sampling_rate': 16000
    },
    'begin_time': 2941.889892578125,
    'end_time': 2945.070068359375,
    'audio_id': 'YOU0000000315',
    'title': 'Return to Vasselheim | Critical Role: VOX MACHINA | Episode 43',
    'url': 'https://www.youtube.com/watch?v=zr2n1fLVasU',
    'source': 2,
    'category': 24,
    'original_full_path': 'audio/youtube/P0004/YOU0000000315.opus'
}
```
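
The decoded audio dictionary can be used directly; a minimal sketch inspecting the waveform and writing it back to disk, assuming the split loaded in the example usage above (`soundfile` is an assumed extra dependency, not required by the dataset itself):

```python
import soundfile as sf

# take the first decoded sample from the split loaded earlier
audio = gs["valid_freq"][0]["audio"]

# the decoded array is float32 at the sampling rate given in the feature
print(audio["sampling_rate"], audio["array"].shape)

# write the waveform back out as a wav file
sf.write("segment.wav", audio["array"], audio["sampling_rate"])
```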

### Data Fields

* conv_id (string) - unique identifier for each conversation.
* utterance_id (float) - utterance index.
* from (string) - who the message is from (human, gpt).
* audio (Audio feature) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of the audio segment inside its archive (as files are not downloaded and extracted locally).
* value (string) - transcription of the utterance.
* emotion (string) - the emotion of the utterance.
* original_full_path (string) - the relative path to the original full audio sample in the original data directory.

Emotion is assigned from the following labels:
"Neutral", "Happy", "Fear", "Angry", "Disgusting", "Surprising", "Sad"