|
Multi-speaker Emotional Text-to-speech Synthesizer
|
Sungjae Cho1,†, Soo-Young Lee2

1Korea Institute of Science and Technology, Republic of Korea
2Korea Advanced Institute of Science and Technology, Republic of Korea

[email protected], [email protected]

†Work done at KAIST.
|
Abstract |
|
We present a methodology for training our multi-speaker emotional text-to-speech synthesizer, which can express speech in 7 different emotions for each of 10 speakers. All silences are removed from the audio samples prior to learning; this makes our model learn quickly. Curriculum learning is applied to train the model efficiently: it is first trained on a large single-speaker neutral dataset, then on neutral speech from all speakers, and finally on datasets of emotional speech from all speakers. In each stage, training samples of every speaker-emotion pair appear in mini-batches with equal probability. Through this procedure, our model can synthesize speech for all targeted speakers and emotions. Our synthesized audio sets are available on our web page.
|
Index Terms: emotional speech synthesis, text-to-speech, machine learning, neural network, deep learning
|
1. Introduction |
|
Emotional speech synthesis has been achieved with deep neural networks [1, 2, 3, 4]. However, most studies have trained models on a small number of speakers or on balanced class distributions, because it is challenging to guarantee speech quality for every speaker and emotion when the data are imbalanced with respect to speakers and emotions. In this paper, we present a methodology for training our multi-speaker emotional text-to-speech (TTS) synthesizer, which is capable of generating speech in all targeted speakers' voices and emotions. The main methods are silence removal, curriculum learning [5], and oversampling [6]. The synthesized audio samples are demonstrated on a web page.
|
2. Datasets |
|
Four datasets were used to train the multi-speaker emotional TTS synthesizer. The first, the Korean single speaker speech (KSS) dataset [7], is publicly available and contains speech samples from a single female speaker, denoted kss-f; we labeled all of its samples as neutral. The remaining 3 datasets contain speech in 7 emotions: neutral plus the 6 Ekman basic emotions [8] (anger, disgust, fear, happiness, sadness, and surprise).
|
The first Korean emotional TTS (KETTS) dataset consists of 1 female and 1 male speaker, ketts-30f and ketts-30m, named after the female and male speakers in their 30s in KETTS. The 2 speakers were assigned different sets of sentences; however, each speaker recorded the same sentences across the 7 emotions, except that the female's happy speech uses a different set of sentences. Hence, KETTS is balanced with respect to speakers and emotions, except for the female's happy speech subset (Table 1).
|
Table 1: Hours of preprocessed training data per speaker and emotion ("-" indicates that the emotion is absent from that dataset)

Speaker      all      neu     ang     dis     fea     hap     sad     sur
kss-f        12.59    12.59   -       -       -       -       -       -
ketts-30f    26.61    3.52    3.46    3.51    3.68    5.13    3.75    3.56
ketts-30m    24.12    3.37    3.29    3.31    3.51    3.50    3.73    3.40
ketts2-20m   5.09     0.72    0.72    0.74    0.76    0.69    0.75    0.70
ketts2-30f   4.69     0.66    0.65    0.67    0.65    0.70    0.68    0.68
ketts2-40m   4.98     0.73    0.69    0.70    0.75    0.69    0.74    0.69
ketts2-50f   4.98     0.73    0.71    0.71    0.70    0.72    0.71    0.69
ketts2-50m   4.73     0.68    0.68    0.69    0.67    0.68    0.68    0.65
ketts2-60f   4.90     0.77    0.68    0.67    0.68    0.72    0.72    0.67
ketts3-f     9.64     3.96    1.34    -       1.27    1.44    1.64    -
ketts3-m     9.38     3.90    1.43    -       1.18    1.39    1.48    -
all          111.70   31.63   13.65   11.01   13.85   15.64   14.87   11.05
|
The second Korean emotional TTS (KETTS2) dataset consists of 3 female and 3 male speakers, 6 in total: ketts2-20m, ketts2-30f, ketts2-40m, ketts2-50f, ketts2-50m, and ketts2-60f. The same sentences were recorded across the 7 emotions and the 6 speakers. Hence, KETTS2 is balanced with respect to speakers and emotions (Table 1).
|
The third Korean emotional TTS (KETTS3) dataset consists of 1 female and 1 male speaker: ketts3-f and ketts3-m. It covers only 5 emotions, excluding disgust and surprise. The same sentences were recorded by both speakers; however, different sentences were used for each of the 5 emotions. KETTS3 is therefore balanced over speakers but not over emotions, and the whole training corpus is balanced over neither speakers nor emotions (Table 1).
|
3. Methodology |
|
3.1. Preprocessing |
|
The WebRTC voice activity detector, py-webrtcvad (https://github.com/wiseman/py-webrtcvad), is used to remove unvoiced segments from the audio, with an aggressiveness level of 3, a frame duration of 30 ms, and a padding duration of 150 ms. These settings remove silences at the start, end, and middle of each utterance, while the amount of silence removed does not distort the emotional expression. All audio is resampled to 22,050 Hz. Mel spectrograms are computed with a short-time Fourier transform (STFT) using a frame size of 1024, a hop size of 256, a window size of 1024, and a Hann window. The STFT magnitudes are mapped to the librosa Slaney mel scale by an 80-channel mel filterbank spanning 0 Hz to 8 kHz; the results are clipped to a minimum value of 10^-5 and then log-compressed to reduce the dynamic range.
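
For concreteness, the sketch below illustrates these two preprocessing steps with py-webrtcvad and librosa. It is a simplified rendition rather than the authors' exact pipeline: the VAD part assumes 16-bit mono PCM at a rate py-webrtcvad supports (e.g. 16 kHz), and the way 150 ms of context is kept around detected speech is our own simplification of the padding step.

```python
# Simplified preprocessing sketch (assumptions noted in the text above).
import librosa
import numpy as np
import webrtcvad

def remove_silence(pcm16, sample_rate=16000, frame_ms=30, pad_ms=150):
    """Drop frames that WebRTC VAD (aggressiveness 3) marks as non-speech,
    keeping pad_ms of context around every detected speech region."""
    vad = webrtcvad.Vad(3)
    frame_len = sample_rate * frame_ms // 1000
    n_frames = len(pcm16) // frame_len
    speech = np.zeros(n_frames, dtype=bool)
    for i in range(n_frames):
        frame = pcm16[i * frame_len:(i + 1) * frame_len]
        speech[i] = vad.is_speech(frame.tobytes(), sample_rate)
    pad = pad_ms // frame_ms
    keep = speech.copy()
    for i in np.flatnonzero(speech):                  # dilate the speech mask
        keep[max(0, i - pad):i + pad + 1] = True
    voiced = [pcm16[i * frame_len:(i + 1) * frame_len] for i in np.flatnonzero(keep)]
    return np.concatenate(voiced) if voiced else pcm16

def log_mel_spectrogram(wav_path):
    """Log-mel spectrogram with the STFT/mel settings described above."""
    y, _ = librosa.load(wav_path, sr=22050)           # resample to 22,050 Hz
    stft = librosa.stft(y, n_fft=1024, hop_length=256,
                        win_length=1024, window="hann")
    mel_basis = librosa.filters.mel(sr=22050, n_fft=1024, n_mels=80,
                                    fmin=0.0, fmax=8000.0, htk=False)  # Slaney scale
    mel = mel_basis @ np.abs(stft)
    return np.log(np.clip(mel, 1e-5, None))           # clip, then log-compress
```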
|
Every Korean character in an input sentence is decomposed into 3 elements: an onset, a nucleus, and a coda. In total, 19 onsets, 21 nuclei, and 28 codas (including the empty coda), as defined by Unicode, are used. The resulting sequence of elements is the grapheme sequence taken as input by our synthesizer.
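
This decomposition follows directly from the Unicode layout of precomposed Hangul syllables (19 x 21 x 28 code points starting at U+AC00). The sketch below shows the arithmetic; the symbolic token format is ours, since the paper does not specify one.

```python
# A minimal sketch of the grapheme decomposition via Unicode arithmetic.
def decompose(sentence):
    """Map each Hangul syllable to (onset, nucleus, coda) index tokens."""
    tokens = []
    for ch in sentence:
        code = ord(ch)
        if 0xAC00 <= code <= 0xD7A3:             # precomposed Hangul syllable block
            index = code - 0xAC00
            onset = index // (21 * 28)           # 19 possible onsets
            nucleus = (index % (21 * 28)) // 28  # 21 possible nuclei
            coda = index % 28                    # 28 codas, 0 means "no coda"
            tokens += [f"o{onset}", f"n{nucleus}", f"c{coda}"]
        else:
            tokens.append(ch)                    # keep punctuation / spaces as-is
    return tokens

# Example: decompose("안녕") -> ['o11', 'n0', 'c4', 'o2', 'n6', 'c21']
```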
|
3.2. Model
|
Our multi-speaker emotional TTS synthesizer takes 3 inputs: the grapheme sequence of a Korean sentence, 1 of 10 speakers (5 female, 5 male), and 1 of the 7 emotion classes. It then generates a waveform in which the chosen speaker utters the input sentence with the given emotion. The synthesizer consists of 2 sub-models: Tacotron 2 [9], which maps a grapheme sequence to a mel spectrogram, and WaveGlow [10], which transforms the mel spectrogram into a waveform. Tacotron 2 is an auto-regressive sequence-to-sequence neural network with a location-sensitive attention mechanism. WaveGlow is a flow-based generative neural network without auto-regression. We adapted the NVIDIA Tacotron 2 and WaveGlow repositories (https://github.com/NVIDIA/tacotron2, https://github.com/NVIDIA/waveglow) to synthesize speech for multiple speakers and emotions. The WaveGlow model was used without modification, whereas the Tacotron 2 model was modified as described below.
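
At inference time the two sub-models are simply chained. The following is a hypothetical end-to-end call illustrating that pipeline; the adapted models' exact interfaces are not given in the paper, so the argument lists are placeholders, not the authors' API.

```python
# Hypothetical two-stage synthesis pipeline (placeholder interfaces).
import torch

def synthesize_speech(tacotron2, waveglow, grapheme_ids, speaker_id, emotion_id):
    """Grapheme sequence -> mel spectrogram (Tacotron 2) -> waveform (WaveGlow)."""
    tacotron2.eval(); waveglow.eval()
    with torch.no_grad():
        mel = tacotron2.infer(grapheme_ids, speaker_id, emotion_id)  # placeholder call
        audio = waveglow.infer(mel, sigma=0.75)                      # flow-based vocoder
    return audio
```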
|
Speaker identity is represented by a 5-dimensional trainable speaker vector, and emotion identity by a 3-dimensional trainable emotion vector, except for the neutral emotion, whose vector is a fixed zero vector. To synthesize speech for a given speaker and emotion, the speaker and emotion vectors are concatenated, in the Tacotron 2 decoder, to the attention context vectors consumed by the first and second LSTM layers and by the linear layer that predicts the mel spectrogram.
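
A minimal PyTorch sketch of this conditioning is shown below, under our own naming; it is not the authors' code. Keeping the neutral emotion vector non-trainable is enforced here by zeroing its row and its gradient, which is one possible way to realize the constraint described above.

```python
# Sketch of speaker/emotion conditioning (dimensions follow the paper).
import torch
import torch.nn as nn

class SpeakerEmotionConditioning(nn.Module):
    def __init__(self, n_speakers=10, n_emotions=7, neutral_id=0):
        super().__init__()
        self.speaker = nn.Embedding(n_speakers, 5)   # trainable 5-dim speaker vectors
        self.emotion = nn.Embedding(n_emotions, 3)   # trainable 3-dim emotion vectors
        self.neutral_id = neutral_id
        with torch.no_grad():
            self.emotion.weight[neutral_id].zero_()  # neutral emotion = zero vector
        self.emotion.weight.register_hook(self._zero_neutral_grad)

    def _zero_neutral_grad(self, grad):
        grad = grad.clone()
        grad[self.neutral_id] = 0.0                  # keep the neutral row fixed
        return grad

    def forward(self, context, speaker_id, emotion_id):
        """Concatenate the vectors to each attention context vector."""
        s = self.speaker(speaker_id)                 # (batch, 5)
        e = self.emotion(emotion_id)                 # (batch, 3)
        return torch.cat([context, s, e], dim=-1)    # fed to decoder LSTMs / linear layer
```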
|
3.3. Training |
|
Tacotron 2 was trained with a batch size of 64, distributed equally across 4 GPUs. The Adam optimizer [11] was used with its default settings (β1 = 0.9, β2 = 0.999, ε = 10^-6), a learning rate of 10^-3, and L2 regularization with weight 10^-6. If the norm of the gradients exceeded 1, it was rescaled to 1 to keep learning stable.
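
In PyTorch, this optimizer setup can be written roughly as follows; `model` stands in for the modified Tacotron 2 network and the function names are ours.

```python
# Sketch of the optimizer and gradient-clipping setup described above.
import torch

def build_optimizer(model):
    # Adam with beta1=0.9, beta2=0.999, eps=1e-6, learning rate 1e-3,
    # and L2 regularization (weight decay) 1e-6.
    return torch.optim.Adam(model.parameters(), lr=1e-3,
                            betas=(0.9, 0.999), eps=1e-6, weight_decay=1e-6)

def training_step(model, optimizer, batch_loss):
    """One optimization step with the gradient norm clipped to 1."""
    optimizer.zero_grad()
    batch_loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
```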
|
Following a curriculum learning [5] strategy, Tacotron 2 was trained on single-speaker neutral speech, multi-speaker neutral speech, and multi-speaker emotional speech, in that order. Specifically, the model was trained on the KSS dataset for 20,000 iterations, then additionally on all neutral-speech datasets for 30,000 iterations, and finally on all training datasets for 65,000 iterations. We moved on to the next stage once the model stably pronounced whole sentences for all training speaker-emotion pairs.
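
As a rough illustration, this schedule can be expressed as configuration data; the stage labels are ours, and `train_for` and `loaders` are hypothetical stand-ins for a training loop and per-stage data loaders.

```python
# Sketch of the three-stage curriculum (iteration counts follow the paper).
CURRICULUM = [
    ("single-speaker neutral (KSS only)",      20_000),
    ("multi-speaker neutral (all speakers)",   30_000),
    ("multi-speaker emotional (all datasets)", 65_000),
]

def run_curriculum(model, loaders, train_for):
    """Run the stages in order, reusing the same model so weights carry over."""
    for (stage, iterations), loader in zip(CURRICULUM, loaders):
        print(f"Stage: {stage}, iterations: {iterations}")
        train_for(model, loader, iterations)
```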
|
In each training stage, we oversampled [6] the training set with respect to speaker-emotion pairs, so that samples of every speaker-emotion pair appear in a mini-batch with equal probability; for example, samples of (ketts-30f, neutral) and samples of (ketts2-20m, happy) are equally likely to be drawn. This helped the model learn to synthesize speech for speaker-emotion pairs with relatively scarce samples.
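
One way to realize this pair-balanced sampling is PyTorch's WeightedRandomSampler, sketched below; `pairs` is an assumed list holding each sample's (speaker, emotion) label, and the batch size is illustrative.

```python
# Sketch of oversampling so that every (speaker, emotion) pair is equally likely.
from collections import Counter
from torch.utils.data import DataLoader, WeightedRandomSampler

def balanced_loader(dataset, pairs, batch_size=64):
    """Draw samples so that each (speaker, emotion) pair has equal probability."""
    counts = Counter(pairs)                          # samples per pair
    weights = [1.0 / counts[p] for p in pairs]       # rarer pairs get larger weight
    sampler = WeightedRandomSampler(weights, num_samples=len(pairs), replacement=True)
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)
```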
|
WaveGlow was trained with a batch size of 24, distributed equally across 3 GPUs, using batches of 24 clips, each consisting of 16,000 mel spectrogram frames randomly cut from a training sample. Training samples shorter than 16,000 mel frames were excluded from the training set, because zero-padding such samples caused unstable learning, including exploding gradients. As with Tacotron 2, we oversampled the training set with respect to speaker-emotion pairs. The Adam optimizer was used with its default settings and a learning rate of 10^-4. Weight normalization was applied as described in the original paper [10]. To keep learning stable, if the norm of the gradients exceeded 1, it was rescaled to 1. The model was initialized with the pretrained weights (waveglow_256channels_universal_v5.pt) offered in the WaveGlow repository. The network was trained for 400,000 iterations, until its loss curve plateaued. The z elements were sampled from Gaussians with standard deviation 1 during training and 0.75 during inference.
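
Two of these details are sketched below: cutting fixed-length segments (and excluding samples that are too short), and sampling z with a reduced standard deviation at inference. The `infer()` call assumes the interface of the NVIDIA WaveGlow repository; the function names and segment-handling logic are ours.

```python
# Sketch of segment selection and reduced-temperature inference for WaveGlow.
import random
import torch

SEGMENT_FRAMES = 16000   # segment length used for WaveGlow training

def random_segment(mel):
    """Return a random fixed-length window, or None for samples that are too short."""
    if mel.size(-1) < SEGMENT_FRAMES:
        return None                        # sample is excluded from training
    start = random.randint(0, mel.size(-1) - SEGMENT_FRAMES)
    return mel[..., start:start + SEGMENT_FRAMES]

def vocode(waveglow, mel, sigma=0.75):
    """Vocode a mel spectrogram; sigma is 1.0 during training, 0.75 at inference."""
    waveglow.eval()
    with torch.no_grad():
        audio = waveglow.infer(mel, sigma=sigma)
    return audio.squeeze(0).cpu()
```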
|
4. Results and Discussion |
|
Through this procedure, our speech synthesizer is able to produce speech for all 10 available speakers and all 7 emotions. Unexpectedly, disgusted and surprised expressions can be synthesized even for the KETTS3 speakers, for whom no such training supervision exists. Synthesized speech samples can be found on our demo page: https://sungjae-cho.github.io/InterSpeech2021_STDemo/.

Although our model expresses speaker and emotion identities, there are minor inconsistencies in the quality of synthesized samples across speakers and emotions. In production, it is therefore reasonable to fine-tune the model for each speaker and keep a separate set of model parameters per speaker.
|
Our silence removal settings substantially accelerated the learning of Tacotron 2. This is probably because removing silence at the start, end, and middle of speech yields a nearly linear relationship between text and speech, which helps the location-sensitive attention network learn text-to-speech alignments easily.
|
5. Acknowledgements |
|
This work was supported by the Ministry of Culture, Sports and Tourism and the Korea Creative Content Agency [R2019020013, R2020040298].
|
6. References |
|
[1] Y. Lee, A. Rabiee, and S.-Y. Lee, “Emotional end-to-end neural speech synthesizer,” arXiv, vol. abs/1711.05447, 2017.

[2] H. Choi, S. Park, J. Park, and M. Hahn, “Multi-speaker emotional acoustic modeling for CNN-based speech synthesis,” in ICASSP, 2019.

[3] S.-Y. Um, S. Oh, K. Byun, I. Jang, C. Ahn, and H.-G. Kang, “Emotional speech synthesis with rich and granularized control,” in ICASSP, 2020.

[4] T.-H. Kim, S. Cho, S. Choi, S. Park, and S.-Y. Lee, “Emotional voice conversion using multitask learning with text-to-speech,” in ICASSP, 2020.

[5] Y. Bengio, J. Louradour, R. Collobert, and J. Weston, “Curriculum learning,” in ICML, 2009.

[6] M. Buda, A. Maki, and M. A. Mazurowski, “A systematic study of the class imbalance problem in convolutional neural networks,” Neural Networks, vol. 106, 2018.

[7] K. Park, “KSS dataset: Korean single speaker speech dataset,” https://kaggle.com/bryanpark/korean-single-speaker-speech-dataset, 2018.

[8] P. Ekman and D. Cordaro, “What is meant by calling emotions basic,” Emotion Review, vol. 3, no. 4, 2011.

[9] J. Shen, R. Pang, R. J. Weiss, M. Schuster, N. Jaitly, Z. Yang, Z. Chen, Y. Zhang, Y. Wang, R. Skerry-Ryan, R. A. Saurous, Y. Agiomyrgiannakis, and Y. Wu, “Natural TTS synthesis by conditioning WaveNet on mel spectrogram predictions,” in ICASSP, 2018.

[10] R. Prenger, R. Valle, and B. Catanzaro, “WaveGlow: A flow-based generative network for speech synthesis,” in ICASSP, 2019.

[11] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in ICLR, 2015.
|