Adversarial-Mixup: Increasing Robustness to Out-of-Distribution Data and Reliability of Inference

  • Received : 2020.11.26
  • Accepted : 2021.01.13
  • Published : 2021.02.28

Abstract

Detecting Out-of-Distribution (OOD) data is a fundamental requirement when Deep Neural Networks (DNNs) are applied to real-world AI systems such as autonomous driving. However, modern DNNs suffer from the over-confidence problem: they produce highly confident predictions even when test data lie far from the training distribution. To address this problem, this paper proposes a novel Adversarial-Mixup training method that makes a DNN model more robust by detecting OOD data effectively. Experimental results show that the proposed Adversarial-Mixup method improves overall OOD-detection performance by 78% compared with state-of-the-art methods. Furthermore, we show that the proposed method alleviates the over-confidence problem by assigning lower confidence scores to OOD data than previous methods do, resulting in more reliable and robust DNNs.
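The abstract names the two ingredients of the method, adversarial examples and mixup, without giving the algorithm itself. As a rough illustration only, the sketch below combines an FGSM-style perturbation (Goodfellow et al.) with mixup interpolation (Zhang et al.) on a toy NumPy model; the function names, the toy linear model, and the hyperparameters `eps` and `alpha` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fgsm_perturb(x, grad_wrt_x, eps=0.1):
    # FGSM: step of size eps in the sign direction of the loss gradient w.r.t. x
    return x + eps * np.sign(grad_wrt_x)

def mixup(x1, y1, x2, y2, alpha=1.0, rng=None):
    # Mixup: convex combination of two samples and their labels,
    # with the mixing ratio drawn from a Beta(alpha, alpha) distribution
    rng = rng if rng is not None else np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

# Toy logistic model, so the input gradient has a closed form:
# d(loss)/dx = (sigmoid(w.x) - y) * w
w = np.array([0.5, -0.3])
x_clean = np.array([1.0, 2.0])
y_clean = 1.0
p = 1.0 / (1.0 + np.exp(-(w @ x_clean)))
grad_x = (p - y_clean) * w

# Adversarial example, then mix it with a second (here all-zero) sample
x_adv = fgsm_perturb(x_clean, grad_x, eps=0.1)
x_mix, y_mix = mixup(x_adv, y_clean, np.zeros(2), 0.0)
```

In a real training loop the gradient would come from backpropagation through the full network, and the mixed pairs would be drawn from each mini-batch; the sketch only shows how the two operations compose.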

Acknowledgement

This research was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (Ministry of Science and ICT) (NRF-2020R1A2C1014768, Mid-Career Researcher Program) and by the Korea Institute for Advancement of Technology (KIAT) grant funded by the Korean government (Ministry of Trade, Industry and Energy) (P0012724, HRD Program for Industrial Innovation) in 2020.
