---
license: cc-by-4.0
---

This corpus contains paired speech, articulatory movement (electromagnetic articulography, EMA) and phoneme data. It covers 38 speakers, each with 460 utterances.

The raw audio files are in audios.zip. The EMA data and preprocessed features are stored in processed.zip. The processed data can be loaded with PyTorch and contains the following keys (see the loading sketch after the list):

- `ema_raw`: the raw EMA data
- `ema_clipped`: the EMA data after trimming with the begin-end timestamps
- `ema_trimmed_and_normalised_with_6_articulators`: the EMA data after trimming with the begin-end timestamps, followed by articulator-specific standardisation
- `mfcc`: 13-dimensional MFCCs computed on the trimmed audio
- `phonemes`: the phonemes uttered in the audio
- `durations`: the duration of each phoneme
- `begin_end`: begin-end timestamps used to trim the audio / raw EMA
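
Below is a minimal sketch of loading and inspecting one processed utterance, assuming each file stores a dictionary with the keys above. The file path is hypothetical; substitute any file extracted from processed.zip (the exact layout inside the archive is not described here).

```python
import torch

# Hypothetical path -- substitute a file extracted from processed.zip.
sample = torch.load("processed/example_utterance.pt")

# Print each key and the shape of its value; values without a shape
# attribute (e.g. the phoneme list) are printed directly.
for key, value in sample.items():
    print(key, getattr(value, "shape", value))
```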
|

To use this data for tasks such as acoustic-to-articulatory inversion (AAI), use `mfcc` as the acoustic input and `ema_trimmed_and_normalised_with_6_articulators` as the articulatory target, as in the sketch below.
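
As a hedged sketch (not the authors' pipeline), an AAI input/target pair could be assembled as follows. The path, the tensor conversion, and the linear interpolation used to align EMA frames with MFCC frames are all assumptions, not details confirmed by this card.

```python
import torch
import torch.nn.functional as F

sample = torch.load("processed/example_utterance.pt")  # hypothetical path

# Acoustic input and articulatory target, converted to float tensors in
# case the archive stores NumPy arrays (an assumption).
mfcc = torch.as_tensor(sample["mfcc"], dtype=torch.float32)
ema = torch.as_tensor(
    sample["ema_trimmed_and_normalised_with_6_articulators"],
    dtype=torch.float32,
)

# The two streams may be sampled at different rates; here the EMA
# trajectories (time x channels) are linearly resampled to the MFCC frame
# count. This alignment strategy is an assumption, not part of the card.
if ema.shape[0] != mfcc.shape[0]:
    ema = F.interpolate(
        ema.T.unsqueeze(0),  # (1, channels, time)
        size=mfcc.shape[0],
        mode="linear",
        align_corners=False,
    ).squeeze(0).T  # back to (time, channels)
```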

___

If you have used this dataset in your work, please cite it with the following reference:

```
Bandekar, J., Udupa, S., Ghosh, P.K. (2024) Articulatory synthesis using representations learnt through phonetic label-aware contrastive loss. Proc. Interspeech 2024, 427-431, doi: 10.21437/Interspeech.2024-1756
```