An Analysis on the Properties of Features against Various Distortions in Deep Neural Networks

  • Kang, Jung Heum (Department of Computer Science and Engineering, Kyung Hee Univ.) ;
  • Jeong, Hye Won (Department of Computer Science and Engineering, Kyung Hee Univ.) ;
  • Choi, Chang Kyun (Department of Computer Science and Engineering, Kyung Hee Univ.) ;
  • Ali, Muhammad Salman (Department of Computer Science and Engineering, Kyung Hee Univ.) ;
  • Bae, Sung-Ho (Department of Computer Science and Engineering, Kyung Hee Univ.) ;
  • Kim, Hui Yong (Department of Computer Science and Engineering, Kyung Hee Univ.)
  • Received : 2021.10.25
  • Accepted : 2021.11.29
  • Published : 2021.12.20

Abstract

Deep neural network models achieve remarkable performance in fields such as object detection and instance segmentation. To train these models, features are first extracted from the input image using a backbone network, and the extracted features can then be reused by various tasks. Research on serving multiple tasks with these learned features is actively underway, and standardization discussions on how to encode, decode, and transmit such features are also in progress. In this context, it is necessary to analyze how features respond to the various distortions that may occur during data transmission or data compression. In this paper, we conduct experiments in which various distortions are injected into the features of an object recognition task, and we analyze the mAP (mean Average Precision) between the predictions output by the neural network and the target values as the intensity of each distortion increases. The experiments show that features are more robust to distortion than images, which suggests that transmitting features instead of images can reduce the loss of information caused by distortions introduced during data transmission and compression.
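To make the experimental setup concrete, the sketch below is a minimal illustration (not the authors' implementation) of distortion injection at the feature level: a pretrained torchvision Faster R-CNN extracts backbone/FPN feature maps, additive Gaussian noise of increasing strength is applied to those features, and the detection heads are run on the distorted features. The choice of model, the Gaussian noise model, and the sigma values are assumptions made for illustration only.

```python
import torch
import torchvision

# Minimal sketch (assumed setup, not the paper's code): inject Gaussian noise
# into backbone/FPN features and run the detection heads on the noisy features.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

image = torch.rand(3, 480, 640)  # placeholder input image in [0, 1]

with torch.no_grad():
    # Resize/normalize the image and extract the FPN feature maps.
    images, _ = model.transform([image], None)
    features = model.backbone(images.tensors)  # dict of feature maps

    # Illustrative distortion intensities; the actual experiment would sweep
    # the distortion strength and compute mAP at each level.
    for sigma in (0.0, 0.1, 0.5, 1.0):
        noisy = {k: v + sigma * torch.randn_like(v) for k, v in features.items()}
        proposals, _ = model.rpn(images, noisy, None)
        detections, _ = model.roi_heads(noisy, proposals, images.image_sizes, None)
        print(f"sigma={sigma}: {len(detections[0]['boxes'])} detections")
```

In the actual experiment, the detections obtained from the distorted features would be rescaled to the original image size and scored against ground-truth annotations (e.g., with the COCO mAP protocol) at each distortion level, and the same procedure would be repeated with the distortion applied to the input image for comparison.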

Keywords

Acknowledgement

This work was supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2020-0-00011, Video Coding for Machine). This research was also a result of a study supported by the NIPA.
