Machine Learning Based Domain Classification for Korean Dialog System

  • Young-Seob Jeong (Department of Big Data Engineering, Soonchunhyang University)
  • Received : 2019.07.02
  • Accepted : 2019.08.20
  • Published : 2019.08.28

Abstract

Dialog systems are becoming a dominant new paradigm for human-computer interaction. By interacting in natural language, people can use a wide range of services more naturally and conveniently. A dialog system is commonly structured as a pipeline of several modules, such as speech recognition, natural language understanding, and dialog management. In this paper, we address the domain classification task of the natural language understanding module by comparing machine learning models such as a convolutional neural network and a random forest. On a human-annotated dataset covering seven service domains, we classified the domain of each sentence and found that the random forest model achieved the best performance, with an F1 score of 0.97 or higher. As future work, we plan to keep improving domain classification by investigating other machine learning models.
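
To make the task concrete, the sketch below frames domain classification as mapping each utterance to one of seven domain labels. This is an illustrative Python stub, not the authors' code; the placeholder label names stand in for the paper's actual domain inventory, which is not listed in this excerpt.

    from typing import List, Protocol

    # Hypothetical stand-ins for the paper's seven service domains.
    DOMAINS: List[str] = [f"domain_{i}" for i in range(1, 8)]

    class DomainClassifier(Protocol):
        """Interface both compared models (CNN, random forest) satisfy:
        train on labeled sentences, then predict one domain each."""
        def fit(self, sentences: List[str], labels: List[str]) -> None: ...
        def predict(self, sentences: List[str]) -> List[str]: ...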

Keywords

Fig. 1. Pipeline of the dialog system
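
As an illustration of this pipeline only (the module names, types, and stubs below are assumptions, not the authors' implementation), a single dialog turn can be sketched as:

    from dataclasses import dataclass

    @dataclass
    class NLUResult:
        domain: str  # the output studied in this paper
        intent: str  # further NLU output, shown only for context

    def speech_recognition(audio: bytes) -> str:
        """ASR stub: raw audio in, transcribed sentence out."""
        raise NotImplementedError

    def natural_language_understanding(sentence: str) -> NLUResult:
        """Domain classification happens here, before intent detection."""
        raise NotImplementedError

    def dialog_management(result: NLUResult) -> str:
        """Chooses the next system response from the NLU result."""
        raise NotImplementedError

    def dialog_turn(audio: bytes) -> str:
        # The pipeline of Fig. 1: ASR -> NLU -> dialog management.
        sentence = speech_recognition(audio)
        return dialog_management(natural_language_understanding(sentence))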

Fig. 2. Distribution of sentence lengths (i.e., the number of tokens per sentence)
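
Such a distribution is obtained by tokenizing each sentence and counting tokens. The whitespace split below is a simplifying assumption; Korean text is usually tokenized with a morphological analyzer (e.g., one of those wrapped by KoNLPy).

    from collections import Counter

    def sentence_length_distribution(sentences):
        """Map token count -> number of sentences with that count."""
        return Counter(len(sentence.split()) for sentence in sentences)

    # Toy usage with hypothetical utterances:
    print(sentence_length_distribution(["오늘 날씨 어때", "세 시에 알람 맞춰 줘"]))
    # Counter({3: 1, 5: 1})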

Fig. 3. CNN structure for domain classification
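
The exact layer configuration appears in Fig. 3 and is not reproduced in this excerpt, so the sketch below is a generic text CNN for sentence classification in the style of Kim (2014): word embeddings, parallel convolutions of several widths, max-pooling over time, and a linear softmax output. All hyperparameter values here are assumptions.

    import torch
    import torch.nn as nn

    class TextCNN(nn.Module):
        """Embed tokens, convolve with several filter widths, max-pool
        over time, then classify with dropout + a linear layer."""

        def __init__(self, vocab_size, num_domains=7, emb_dim=128,
                     num_filters=100, widths=(3, 4, 5), dropout=0.5):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
            self.convs = nn.ModuleList(
                nn.Conv1d(emb_dim, num_filters, w) for w in widths)
            self.dropout = nn.Dropout(dropout)
            self.out = nn.Linear(num_filters * len(widths), num_domains)

        def forward(self, token_ids):                  # (batch, seq_len)
            x = self.embed(token_ids).transpose(1, 2)  # (batch, emb, seq)
            pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
            return self.out(self.dropout(torch.cat(pooled, dim=1)))  # logits

    # Toy usage: a batch of two padded sentences of ten token ids each.
    logits = TextCNN(vocab_size=5000)(torch.randint(1, 5000, (2, 10)))
    print(logits.shape)  # torch.Size([2, 7])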

Fig. 4. Random Forest
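
A random forest aggregates the votes of many decision trees, each grown on a bootstrap sample with randomized feature splits. As a hedged sketch of how such a text classifier is commonly assembled with scikit-learn (the TF-IDF features, hyperparameters, and toy sentences and domain names are illustrative assumptions, not the paper's setup):

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline

    # Sentences become sparse TF-IDF vectors, then a forest of 100
    # decision trees classifies them by majority vote.
    model = make_pipeline(
        TfidfVectorizer(),
        RandomForestClassifier(n_estimators=100, random_state=0),
    )

    # Toy usage with hypothetical utterances and domain labels:
    train_x = ["내일 날씨 알려 줘", "신나는 노래 틀어 줘", "일곱 시 알람 맞춰 줘"]
    train_y = ["weather", "music", "alarm"]
    model.fit(train_x, train_y)
    print(model.predict(["주말 날씨 어때"]))  # likely ['weather']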

Fig. 5. Precision of the compared models

Fig. 6. Recall of the compared models

Fig. 7. F1 score of the compared models
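
For a given domain, precision is the fraction of sentences predicted as that domain that are correct, recall is the fraction of that domain's gold sentences recovered, and F1 is their harmonic mean, F1 = 2PR / (P + R). Table 3 then averages per-domain scores weighted by class frequency. A sketch with scikit-learn on toy labels (not the paper's predictions):

    from sklearn.metrics import precision_recall_fscore_support

    y_true = ["weather", "music", "music", "alarm"]  # toy gold domains
    y_pred = ["weather", "music", "alarm", "alarm"]  # toy predictions

    # Per-class scores underlie Figs. 5-7; averaging them weighted by
    # class support gives the summary style of Table 3.
    p, r, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="weighted", zero_division=0)
    print(f"precision={p:.2f}  recall={r:.2f}  f1={f1:.2f}")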

Table 1. Data samples used for experiments

Table 2. Data statistics

Table 3. Weighted performance of the compared models
