[Dataset preview omitted: columns are `audio` (audio) and `label` (class label, e.g. `spk1`).]
This corpus contains paired data of speech, articulatory movements, and phonemes. There are 38 speakers in the corpus, each with 460 utterances.
The raw audio files are in audios.zip. The EMA data and preprocessed data are stored in processed.zip. The processed data can be loaded with PyTorch and has the following keys:
- ema_raw: the raw EMA data
- ema_clipped: the EMA data after trimming using begin-end time stamps
- ema_trimmed_and_normalised_with_6_articulators: the EMA data after trimming using begin-end time stamps, followed by articulatory-specific standardisation
- mfcc: 13-dimensional MFCCs computed on the trimmed audio
- phonemes: the phonemes uttered in the audio
- durations: duration values for each phoneme
- begin_end: begin-end time stamps used to trim the audio / raw EMA
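As a rough sketch of the load path: the card says the processed data can be loaded with PyTorch, so each utterance is presumably a serialised dict with the keys above. The file name, tensor shapes, and sample values below are placeholders invented for illustration, not taken from the actual archive.

```python
# Sketch: build a dummy utterance dict with the documented keys, save it,
# and load it back with torch.load, mirroring the described processed.zip format.
# All shapes, values, and the file path are assumptions for illustration only.
import os
import tempfile

import torch

dummy = {
    "ema_raw": torch.zeros(500, 12),      # raw EMA trajectories (assumed shape)
    "ema_clipped": torch.zeros(400, 12),  # trimmed with begin-end time stamps
    "ema_trimmed_and_normalised_with_6_articulators": torch.zeros(400, 12),
    "mfcc": torch.zeros(400, 13),         # 13-dim MFCCs on the trimmed audio
    "phonemes": ["sil", "a", "b", "sil"], # placeholder phoneme sequence
    "durations": [10, 180, 190, 20],      # per-phoneme durations (placeholder)
    "begin_end": (0.10, 4.10),            # trim time stamps (placeholder)
}

path = os.path.join(tempfile.mkdtemp(), "utt.pt")
torch.save(dummy, path)

# Loading an utterance from the processed data would look like this.
sample = torch.load(path)
mfcc = sample["mfcc"]           # shape (frames, 13)
begin, end = sample["begin_end"]
```

The actual per-file layout inside processed.zip (one file per utterance, per speaker, or otherwise) is not specified on this card, so check the archive contents before relying on this pattern.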
If you use this dataset in your work, please use the following reference to cite it:
Bandekar, J., Udupa, S., Ghosh, P.K. (2024) Articulatory synthesis using representations learnt through phonetic label-aware contrastive loss. Proc. Interspeech 2024, 427-431, doi: 10.21437/Interspeech.2024-1756