Deep Learning Music genre automatic classification voting system using Softmax


  • Bae, June (Department of Computer Science, The University of Suwon);
  • Kim, Jangyoung (Department of Computer Science, The University of Suwon)
  • Received : 2018.10.30
  • Accepted : 2018.11.20
  • Published : 2019.01.31

Abstract

Research that implements song genre classification, one of humans' remarkable abilities, with deep learning algorithms includes unimodal models trained on a single type of data, multimodal models, and multimodal methods that use music videos. This study proposes a system that splits each song's spectrogram into short samples, classifies each sample with a CNN, and votes on the per-sample results, which yielded better results than prior approaches. Among deep learning algorithms, CNN outperformed RNN for music genre classification, and performance improved further when CNN and RNN were applied together. The voting system over per-sample CNN results outperformed the previous models, and the variant with an added Softmax layer performed best. Amid the explosive growth of digital media and the large number of streaming services, the need for automatic music genre classification keeps increasing. Future research should reduce the proportion of unclassified songs and develop an algorithm for the final genre classification of songs that remain unclassified.
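The abstract describes the pipeline only in outline. Below is a minimal sketch of that pipeline, assuming librosa for the mel spectrogram, a pre-trained Keras CNN saved as genre_cnn.h5, a 128-frame segment width, and an illustrative genre list; none of these specifics are stated in the paper, so this is an illustration rather than the authors' implementation.

```python
# Sketch of the described pipeline: spectrogram -> short segments -> per-segment CNN -> vote.
# Assumptions (not given in the paper): librosa mel-spectrogram parameters, a pre-trained
# Keras CNN saved as "genre_cnn.h5", the 128-frame segment width, and this genre list.
import numpy as np
import librosa
import tensorflow as tf

GENRES = ["classical", "jazz", "pop", "rock", "hiphop"]   # illustrative label set
SEGMENT_FRAMES = 128                                       # assumed segment width (time frames)

def song_to_segments(path, sr=22050, n_mels=128):
    """Compute a log-mel spectrogram and split it into fixed-width segments (cf. Fig. 2-3)."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048, hop_length=512, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel, ref=np.max)
    n_segments = log_mel.shape[1] // SEGMENT_FRAMES
    segments = [log_mel[:, i * SEGMENT_FRAMES:(i + 1) * SEGMENT_FRAMES]
                for i in range(n_segments)]
    # Shape each segment as (n_mels, SEGMENT_FRAMES, 1) for a 2-D CNN.
    return np.stack(segments)[..., np.newaxis]

def classify_by_voting(path, model):
    """Classify every segment with the CNN and take a majority vote (cf. Fig. 5)."""
    segments = song_to_segments(path)
    probs = model.predict(segments, verbose=0)        # per-segment softmax outputs
    votes = np.argmax(probs, axis=1)                  # one hard vote per segment
    winner = np.bincount(votes, minlength=len(GENRES)).argmax()
    return GENRES[winner]

if __name__ == "__main__":
    cnn = tf.keras.models.load_model("genre_cnn.h5")  # hypothetical pre-trained model
    print(classify_by_voting("example_song.wav", cnn))
```

Majority voting over segments corresponds to the hard-vote system of Fig. 5; possible readings of the Softmax-based and truncation variants are sketched after Fig. 6 and Fig. 7 below.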


Keywords


Fig. 1 Deep Learning Music genre automatic classification voting system flow chart (proposed model overview)


Fig. 2 Spectrogram of a song (X: Time, Y: Frequency)


Fig. 3 Divided Spectrogram


Fig. 4 CNN Structure [11]
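Fig. 4 only references a generic CNN structure [11]. As a point of reference, a small Keras CNN compatible with the (128, 128, 1) segments assumed in the sketch above could look like the following; the layer sizes are assumptions, not the architecture used in the paper.

```python
# Illustrative CNN in the spirit of Fig. 4; layer sizes are assumptions only.
# Input shape matches the (n_mels, SEGMENT_FRAMES, 1) segments of the pipeline sketch.
import tensorflow as tf

def build_genre_cnn(n_genres=5, input_shape=(128, 128, 1)):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_genres, activation="softmax"),  # per-segment softmax output
    ])

if __name__ == "__main__":
    model = build_genre_cnn()
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    model.summary()
```

Once trained on labeled segments, such a model could be saved (e.g., as the genre_cnn.h5 assumed earlier) and reused by the voting code.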


Fig. 5 Music classification voting system


Fig. 6 Music classification Softmax system
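The abstract states only that adding a Softmax layer to the voting model performed best; the exact aggregation is not described. One plausible reading, sketched below as an assumption, averages the per-segment softmax probability vectors (the model.predict output from the pipeline sketch) instead of counting hard votes.

```python
# One plausible reading of the Softmax-based aggregation in Fig. 6 (an assumption,
# not the paper's stated method): average the per-segment softmax probability
# vectors and take the argmax, instead of counting hard per-segment votes.
import numpy as np

def classify_by_softmax_average(segment_probs, genres):
    """segment_probs: (n_segments, n_genres) softmax outputs from the CNN."""
    mean_probs = segment_probs.mean(axis=0)       # soft aggregation across segments
    return genres[int(np.argmax(mean_probs))], mean_probs
```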


Fig. 7 Truncation of music classification
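Fig. 7 and the closing remark about unclassified songs suggest that low-confidence results are truncated, i.e., left unclassified. The rule and the 0.5 threshold below are illustrative assumptions only.

```python
# Hypothetical truncation rule suggested by Fig. 7: when even the best aggregated
# confidence is below a threshold, leave the song unclassified instead of forcing
# a genre.  The 0.5 threshold is an assumption.
import numpy as np

def classify_with_truncation(mean_probs, genres, threshold=0.5):
    best = int(np.argmax(mean_probs))
    if mean_probs[best] < threshold:
        return None                               # left unclassified for later handling
    return genres[best]
```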

Table 1 Classification Confidence Rate Comparison


References

  1. S. Kim, D. Kim, and B. Suh, "Music Genre Classification using Multimodal Deep Learning," International Journal of Information and Communication Engineering, vol. 9, no. 4, pp. 358-362, Aug. 2011.
  2. P. Revathi, "Analytical Hierarchy Process in Fuzzy Comprehensive Evaluation Method," Asia-pacific Journal of Convergent Research Interchange, vol. 1, no. 3, pp. 41-52, Sep. 2015.
  3. B. McFee, "Learning Content Similarity for Music Recommendation," Journal of latex class files, vol. 6, no. 1, pp. 1-2, Jan. 2017.
  4. D. Cabrera, "A Computer Program for Psycho-acoustical Analysis," Australian Acoustical Society Conference, vol. 24, no. 1, pp. 47-54, Mar. 2014.
  5. J. C. Na, "Optimization in Cooperative Spectrum Sensing," Asia-pacific Journal of Convergent Research Interchange, vol. 3, no. 1, pp. 19-31, Mar. 2017.
  6. D. J. Kim, and P. L. Manjusha, "Building Detection in High Resolution Remotely Sensed Images based on Automatic Histogram-Based Fuzzy C-Means Algorithm," Asia-pacific Journal of Convergent Research Interchange, vol. 3, no. 1, pp. 57-62, Mar. 2017. https://doi.org/10.21742/apjcri.2017.12.11
  7. T. S. Slininger, Y. Xu, and R. D. Lorenz, "Enhancing estimation accuracy by applying cross-correlation image tracking to self-sensing including evaluation on a low saliency ratio machine," Energy Conversion Congress and Exposition, vol. 22, no. 5, pp. 23-28, May 2016.
  8. L. van der Maaten, and G. Hinton, "Visualizing Data using t-SNE," Journal of Machine Learning Research, vol. 9, no. 1, pp. 2579-2605, Nov. 2008.
  9. J. Bae, and J. Kim, "Engine Sound Design for Electric Vehicle by using Software Synthesizer," Journal of the Korea Institute of Information and Communication Engineering, vol. 21, no. 8, pp. 1547-1552, Aug. 2017. https://doi.org/10.6109/JKIICE.2017.21.8.1547
  10. V. K. Rao, and R. Caytiles, "Subgraph with Set Similarity in a Database," Asia-pacific Journal of Convergent Research Interchange, vol. 3, no. 2, pp. 29-37, Jun. 2017. https://doi.org/10.21742/apjcri.2017.03.03
  11. Aphex34, Own work, CC BY-SA 4.0 [Internet]. Available: https://commons.wikimedia.org/w/index.php?curid=45679374.
  12. B. Han, S. Rho, S. Jun, and E. Hwang, "Music emotion classification and context-based music recommendation," Multimedia Tools and Applications, vol. 47, no. 3, pp. 433-460, May 2010. https://doi.org/10.1007/s11042-009-0332-6
  13. J. Bae, J. Kim, and Y. Yang, "Physical modeling synthesizing of 25 strings Gayageum using white noise as exciter," Journal of the Korea Institute of Information and Communication Engineering, vol. 22, no. 5, pp. 740-746, May 2018.