## Dataset Name: ASR transcripts of IEMOCAP for ASR error correction and emotion recognition



## Description

This dataset consists of ASR transcripts produced by 11 ASR models, following the conversation turns in IEMOCAP, with the corresponding speaker ID and utterance ID for each utterance.

To acquire this dataset, please obtain the IEMOCAP license first (if you already have it, skip step 1). Specifically:

1. Submit a request to the SAIL lab at USC following their guidance: [link to their webpage](https://sail.usc.edu/iemocap/iemocap_release.htm). All you have to do is read their license and fill out a Google form, which is quick.
2. When registering for this challenge, attach the approved license or a screenshot of the approval email as proof. We will then release the data to you.


The explanation for each key is as follows:

- `need_prediction`: this key indicates whether the utterance should be included in the prediction procedure. `"yes"` marks utterances labeled with the Big4 emotions, which are widely used for emotion recognition on IEMOCAP; `"no"` marks all other utterances. Note that utterances with no human annotations have been removed.

- `emotion`: this key indicates the emotion label of the utterance.

- `id`: this key indicates the utterance ID, which is also the name of the corresponding audio file in the IEMOCAP corpus. The ID is identical to the raw ID in IEMOCAP.

- `speaker`: this key indicates the speaker of the utterance. Since there are two speakers in each of the five sessions, there are ten speakers in total. Note that the sixth character of the `id` does NOT represent the gender of the speaker, but rather the gender of the person currently wearing the motion capture device. Please use the provided `speaker` value as the speaker ID.

- `groundtruth`: this key indicates the original human transcription provided by IEMOCAP.

The remaining keys contain the ASR transcripts generated by the respective ASR models, one key per model.
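
For concreteness, here is a minimal sketch of how the records might be loaded and filtered, assuming the data is released as a JSON list of utterance dictionaries (the file name `iemocap_asr.json` and the model key `whisper_large` are hypothetical placeholders, not part of the actual release):

```python
import json

# Hypothetical file name; substitute the actual file from the release.
with open("iemocap_asr.json") as f:
    utterances = json.load(f)  # assumed: a list of dicts, one per utterance

# Keep only the utterances flagged for the prediction procedure,
# i.e. those labeled with the Big4 emotions.
big4 = [u for u in utterances if u["need_prediction"] == "yes"]

for u in big4[:3]:
    # Use the provided `speaker` field; do NOT infer the speaker from the
    # sixth character of `id` (it marks the motion-capture wearer instead).
    print(u["id"], u["speaker"], u["emotion"])
    print("  human:", u["groundtruth"])
    # The ASR keys depend on the model names in the release;
    # `whisper_large` is a hypothetical example.
    print("  ASR:  ", u.get("whisper_large", "<model key here>"))
```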


## Access

The dataset will be shared with you after you have registered.


## Acknowledgments

This dataset is built on IEMOCAP. We thank the original authors of IEMOCAP and appreciate the approval of Prof. Shrikanth Narayanan.


## References
```
@article{busso2008iemocap,
  title={IEMOCAP: Interactive emotional dyadic motion capture database},
  author={Busso, Carlos and Bulut, Murtaza and Lee, Chi-Chun and Kazemzadeh, Abe and Mower, Emily and Kim, Samuel and Chang, Jeannette N and Lee, Sungbok and Narayanan, Shrikanth S},
  journal={Language resources and evaluation},
  volume={42},
  pages={335--359},
  year={2008},
  publisher={Springer}
}

@article{li2024speech,
  title={Speech Emotion Recognition with ASR Transcripts: A Comprehensive Study on Word Error Rate and Fusion Techniques},
  author={Li, Yuanchao and Bell, Peter and Lai, Catherine},
  journal={arXiv preprint arXiv:2406.08353},
  year={2024}
}
```