Detection of Anomaly VMS Messages Using Bi-Directional GPT Networks

  • Choi, Hyo Rim (Dept. of Electrical and Electronics Engineering, Kangwon National Univ.)
  • Park, Seungyoung (Dept. of Electrical and Electronics Engineering, Kangwon National Univ.)
  • Received : 2022.06.15
  • Accepted : 2022.08.18
  • Published : 2022.08.31

Abstract

When a variable message sign (VMS) system is compromised by a malicious attack and displays false traffic-safety information, it can pose a serious risk to drivers. If the normal patterns of the messages displayed on the VMS system are learned, anomalous messages can be detected and responded to quickly. This paper proposes a method that learns the normal message patterns with a bi-directional generative pre-trained transformer (GPT) network and uses them to detect anomalous messages. Specifically, the proposed network is trained on normal messages and their system parameters so as to minimize the corresponding negative log-likelihood (NLL) values. Once trained, the method declares a message anomalous when its NLL value exceeds a pre-specified threshold. The experimental results show that the proposed method detects not only malicious messages caused by attacks but also messages arising from system errors.
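As a rough illustration of the detection rule summarized above (not the authors' implementation), the sketch below scores a message by its mean per-token NLL under a causal language model and flags it as anomalous when the score exceeds a threshold. The `gpt2` checkpoint, the reuse of a single model for the backward pass on reversed tokens, and the threshold value are all illustrative assumptions; the paper trains dedicated forward and backward GPT networks on normal VMS messages and their system parameters.

```python
# Minimal sketch of NLL-thresholded anomaly detection with forward and
# backward language-model passes. Assumptions (not from the paper): the
# pre-trained "gpt2" checkpoint, reversing tokens to mimic the backward
# direction, and the example threshold value.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def nll(token_ids: torch.Tensor) -> float:
    """Mean per-token negative log-likelihood under the causal LM."""
    with torch.no_grad():
        # Hugging Face shifts the labels internally for next-token prediction.
        out = model(token_ids, labels=token_ids)
    return out.loss.item()

def anomaly_score(message: str) -> float:
    ids = tokenizer(message, return_tensors="pt").input_ids
    forward = nll(ids)                          # left-to-right pass
    backward = nll(torch.flip(ids, dims=[1]))   # right-to-left pass on reversed tokens
    return 0.5 * (forward + backward)           # combine the two directional NLLs

THRESHOLD = 5.0  # hypothetical; in practice tuned on held-out normal messages

msg = "ACCIDENT AHEAD - REDUCE SPEED"
print("anomalous" if anomaly_score(msg) > THRESHOLD else "normal")
```

In the paper's setting, the threshold would be chosen from the NLL distribution of normal messages after training, so that messages injected by an attacker or corrupted by a system error produce scores above it.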

Acknowledgement

This work was supported by the Multi-Ministerial Smart Farm Package Innovation Technology Development Program (421040-04), funded by the Ministry of Agriculture, Food and Rural Affairs through the Korea Institute of Planning and Evaluation for Technology in Food, Agriculture and Forestry (IPET), and by AUTOCRYPT Co., Ltd.
