Multimodal layer surveillance map based on anomaly detection using multi-agents for smart city security

  • Shin, Hochul (Department of Intelligent Robotics, Electronics and Telecommunications Research Institute) ;
  • Na, Ki-In (Department of Intelligent Robotics, Electronics and Telecommunications Research Institute) ;
  • Chang, Jiho (Department of Intelligent Robotics, Electronics and Telecommunications Research Institute) ;
  • Uhm, Taeyoung (Intelligent Robotics R&D Division, Korea Institute of Robotics and Technology Convergence)
  • Received : 2021.10.27
  • Accepted : 2022.02.20
  • Published : 2022.04.10

Abstract

Smart cities are expected to provide residents with convenience via various agents such as CCTV, delivery robots, security robots, and unmanned shuttles. Environmental data collected by these agents can be used for various purposes, including advertising and security monitoring. This study proposes a surveillance map data framework for efficient, integrated representation of multimodal data from multiple agents. The proposed surveillance map is a multilayered global information grid built by integrating the multimodal data of each agent. To validate the framework, we collected surveillance map data for four months and analyzed the behavior patterns of humans and vehicles as well as distribution changes in elevation and temperature. Moreover, we present a two-stage anomaly detection algorithm based on the surveillance map for security services, which detects abnormal situations such as unusual crowds and pedestrians, vehicle movement, unusual objects, and temperature changes. Because the surveillance map enables efficient, integrated processing of large volumes of multimodal data from multiple agents, the proposed data framework can be used for various applications in the smart city.
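
The surveillance map described above can be read as a set of co-registered 2D grids, one layer per modality, that every agent updates and that an anomaly detector scans. The Python sketch below illustrates one possible realization of that idea; the class name SurveillanceMap, the layer names, the exponential-average fusion, and the z-score thresholds of the two-stage check are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of a multilayered surveillance grid map (assumptions noted above).
    import numpy as np

    class SurveillanceMap:
        """Global grid whose layers hold per-cell statistics for one modality each."""

        def __init__(self, width, height,
                     layers=("human", "vehicle", "elevation", "thermal")):
            shape = (height, width)
            self.layers = {name: np.zeros(shape, dtype=np.float32) for name in layers}
            # Long-term per-cell statistics used as the "normal" reference.
            self.mean = {name: np.zeros(shape, dtype=np.float32) for name in layers}
            self.std = {name: np.ones(shape, dtype=np.float32) for name in layers}

        def update(self, layer, cell, value, alpha=0.1):
            """Fuse one agent observation into a grid cell by exponential averaging."""
            r, c = cell
            grid = self.layers[layer]
            grid[r, c] = (1.0 - alpha) * grid[r, c] + alpha * value

        def detect_anomalies(self, layer, coarse_z=3.0, confirm_z=4.0):
            """Two-stage check: a loose per-cell z-score gate (stage 1), then a
            stricter confirmation using the cell and its neighborhood (stage 2)."""
            z = (self.layers[layer] - self.mean[layer]) / (self.std[layer] + 1e-6)
            candidates = np.argwhere(np.abs(z) > coarse_z)      # stage 1: coarse screening
            flagged = []
            for r, c in candidates:
                patch = np.abs(z[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2])
                if patch.mean() > coarse_z or np.abs(z[r, c]) > confirm_z:  # stage 2
                    flagged.append((int(r), int(c)))
            return flagged

Under these assumptions, each agent (CCTV camera, security robot, or shuttle) would call update() for the cells it currently observes, and the security service would periodically run detect_anomalies() on each layer to flag cells for closer inspection.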

Acknowledgement

This work was supported by the ICT R&D program of MSIP/IITP (2017-0-00306, Development of Multimodal Sensor-based Intelligent Systems for Outdoor Surveillance Robots).
