A study on loss combination in time and frequency for effective speech enhancement based on complex-valued spectrum

  • Jaehee Jung (Dept. of Computer Engineering, Incheon National University) ;
  • Wooil Kim (Dept. of Computer Engineering, Incheon National University)
  • Received : 2021.11.26
  • Accepted : 2022.01.10
  • Published : 2022.01.31

Abstract

Speech enhancement is performed to improve the intelligibility and quality of noise-corrupted speech. In this paper, speech enhancement performance is compared across time-domain and frequency-domain loss functions for mask-based speech enhancement of the complex-valued spectrum. We study combinations of loss functions that exploit the advantages of each domain by considering both the details of the spectrum and the speech waveform. The Scale Invariant-Source to Noise Ratio (SI-SNR) is used as the time-domain loss, and the Mean Squared Error (MSE), computed over the complex-valued spectrum and over the magnitude spectrum, is used as the frequency-domain loss; the phase loss is obtained using the sin function. The combinations pair the time-domain SI-SNR loss with each frequency-domain loss, and, to account for both magnitude and phase, SI-SNR is also combined with the magnitude-spectrum and phase-related losses. Enhancement results are evaluated using the Source-to-Distortion Ratio (SDR), Perceptual Evaluation of Speech Quality (PESQ), and Short-Time Objective Intelligibility (STOI), and the resulting spectrograms are compared. Experimental results on the TIMIT database show the highest performance when training with the combination of the SI-SNR and magnitude losses, rather than with a time-domain or frequency-domain loss alone.
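As a concrete illustration of the best-performing combination reported above (time-domain SI-SNR plus a frequency-domain MSE over magnitude spectra), here is a minimal PyTorch sketch. The STFT parameters and the weighting factor `alpha` are illustrative assumptions, not values taken from the paper.

```python
import torch

def si_snr(est, ref, eps=1e-8):
    """Scale Invariant-Source to Noise Ratio (dB) between waveforms."""
    est = est - est.mean(dim=-1, keepdim=True)
    ref = ref - ref.mean(dim=-1, keepdim=True)
    # Project the estimate onto the reference: the "target" component.
    s_target = (torch.sum(est * ref, dim=-1, keepdim=True)
                / (torch.sum(ref * ref, dim=-1, keepdim=True) + eps)) * ref
    e_noise = est - s_target
    return 10 * torch.log10(torch.sum(s_target ** 2, dim=-1)
                            / (torch.sum(e_noise ** 2, dim=-1) + eps) + eps)

def magnitude_mse(est, ref, n_fft=512, hop=128):
    """MSE between the magnitude spectra of two waveforms."""
    win = torch.hann_window(n_fft)
    mag = lambda x: torch.stft(x, n_fft, hop_length=hop, window=win,
                               return_complex=True).abs()
    return torch.mean((mag(est) - mag(ref)) ** 2)

def combined_loss(est, ref, alpha=1.0):
    """Negative SI-SNR (time domain) plus weighted magnitude MSE."""
    return -si_snr(est, ref).mean() + alpha * magnitude_mse(est, ref)

# Usage: enhanced and clean waveforms of shape (batch, samples).
enhanced = torch.randn(2, 16000)
clean = torch.randn(2, 16000)
print(combined_loss(enhanced, clean))
```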

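The abstract states only that the phase loss is obtained using the sin function. One common sin-based formulation is sketched below; it maps the phase difference through sin so that the loss is smooth and unaffected by 2π wrapping. The exact formulation used in the paper may differ.

```python
import torch

def phase_loss(est, ref, n_fft=512, hop=128):
    """A sin-based phase loss between two waveforms (one possible form)."""
    win = torch.hann_window(n_fft)
    spec = lambda x: torch.stft(x, n_fft, hop_length=hop, window=win,
                                return_complex=True)
    # Phase difference between estimated and reference spectra.
    d = torch.angle(spec(est)) - torch.angle(spec(ref))
    # |sin(d/2)| is 0 for identical phases, maximal at a pi offset,
    # and invariant to 2*pi phase wrapping.
    return torch.mean(torch.abs(torch.sin(d / 2)))
```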

Acknowledgement

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (Ministry of Science and ICT) (No. 2019R1F1A106299513).
