Performance Comparison of State-of-the-Art Vocoder Technology Based on Deep Learning in a Korean TTS System

  • Kwon, Chul Hong (Dept. of Information, Communication and Electronics Engineering, Daejeon University)
  • Received : 2020.02.15
  • Accepted : 2020.03.12
  • Published : 2020.05.31

Abstract

A conventional TTS system consists of several modules, including text preprocessing, syntactic analysis, grapheme-to-phoneme conversion, boundary analysis, prosody control, acoustic feature generation by an acoustic model, and synthesized speech generation. In contrast, a deep learning-based TTS system is composed of a Text2Mel process that generates a spectrogram from text and a vocoder that synthesizes the speech signal from the spectrogram. In this paper, to construct an optimal Korean TTS system, we apply Tacotron2 to the Text2Mel process and introduce WaveNet, WaveRNN, and WaveGlow as vocoders, implementing each to verify and compare their performance. Experimental results show that WaveNet achieves the highest MOS and its trained model is hundreds of megabytes in size, but its synthesis time is about 50 times real time. WaveRNN shows MOS performance similar to that of WaveNet with a model size of tens of megabytes, but it also cannot run in real time. WaveGlow can run in real time, but its model is several gigabytes in size and its MOS is the lowest of the three vocoders. Based on these results, this paper presents reference criteria for selecting the appropriate vocoder according to the hardware environment of the application field.
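To make the two-stage architecture concrete, the sketch below shows its overall shape in Python. It is a minimal illustration only: the function names, the dummy stand-ins, the 80 mel bins, the hop size of 256, and the 22,050 Hz sampling rate are all assumptions for the example, not the actual APIs or settings of Tacotron2, WaveNet, WaveRNN, or WaveGlow. Only the pipeline structure and the real-time-factor measurement follow the description above.

```python
# Minimal sketch of the two-stage deep learning TTS pipeline described above:
# text -> mel spectrogram (Text2Mel) -> waveform (vocoder).
# All names and parameters here are hypothetical placeholders.
import time
from typing import Callable
import numpy as np

Text2Mel = Callable[[str], np.ndarray]        # e.g. Tacotron2: text -> (n_mels, frames)
Vocoder = Callable[[np.ndarray], np.ndarray]  # e.g. WaveNet/WaveRNN/WaveGlow: mel -> samples

def synthesize(text: str, text2mel: Text2Mel, vocoder: Vocoder) -> np.ndarray:
    """Run the full TTS pipeline: Text2Mel first, then the vocoder."""
    mel = text2mel(text)   # predicted mel spectrogram
    return vocoder(mel)    # waveform conditioned on the spectrogram

# Dummy stand-ins so the sketch runs end to end (they emit silence).
def dummy_text2mel(text: str) -> np.ndarray:
    return np.zeros((80, 10 * max(len(text), 1)))  # assumed 80 mel bins

def dummy_vocoder(mel: np.ndarray) -> np.ndarray:
    hop = 256                                      # assumed hop size
    return np.zeros(mel.shape[1] * hop)

# Real-time factor (RTF): synthesis time divided by audio duration.
# RTF > 1 means slower than real time; WaveNet's ~50x falls here.
sr = 22050  # assumed sampling rate
t0 = time.perf_counter()
wav = synthesize("안녕하세요", dummy_text2mel, dummy_vocoder)
rtf = (time.perf_counter() - t0) / (len(wav) / sr)
print(f"samples: {len(wav)}, RTF: {rtf:.4f}")
```

Framed this way, the paper's comparison amounts to swapping the vocoder callable while holding the Tacotron2 front end fixed, and measuring the MOS, model size, and real-time factor of each candidate.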
