Real-Time Fire Detection based on CNN and Grad-CAM


  • Kim, Young-Jin (Department of Computer Science & Engineering, Graduate School, Korea University of Technology and Education);
  • Kim, Eun-Gyung (School of Computer Science & Engineering, Korea University of Technology and Education)
  • Received : 2018.08.18
  • Accepted : 2018.09.19
  • Published : 2018.12.31

Abstract

Rapidly detecting fires and issuing warnings is essential for minimizing human injury and property damage. Generally, when a fire occurs, both smoke and flames are produced, so a fire detection system needs to detect both. However, most fire detection systems detect only flames or only smoke, and their processing speed suffers from the additional preprocessing steps they require. In this paper, we implemented a fire detection system that predicts flames and smoke simultaneously by constructing a CNN model that supports multi-label classification. The system can also monitor the fire status in real time by using Grad-CAM, which visualizes the location of each class from the CNN's features. We tested the proposed system on 13 fire videos and obtained average accuracies of 98.73% for flames and 95.77% for smoke.
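Because the model is a single CNN with multi-label outputs, one forward pass yields both a flame probability and a smoke probability for each frame. The following is a minimal sketch of how such a classifier can be assembled with transfer learning in TensorFlow/Keras, assuming an ImageNet-pretrained Inception V3 backbone (one of the three backbones compared in Table 2); the input size, head width, optimizer, and training call are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal multi-label (flame, smoke) classifier sketch in TensorFlow/Keras.
# Backbone choice, head size, and optimizer are illustrative assumptions,
# not the exact configuration reported in the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_LABELS = 2  # label 0: flame, label 1: smoke

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))

x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(256, activation="relu")(x)
# Independent sigmoid outputs allow both labels to be active at once
# (multi-label), unlike a softmax over mutually exclusive classes.
outputs = layers.Dense(NUM_LABELS, activation="sigmoid")(x)

model = models.Model(inputs=base.input, outputs=outputs)
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["binary_accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=...)  # hypothetical datasets
```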

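Grad-CAM localizes each label by pooling the gradients of that label's score with respect to the last convolutional feature maps and using them as channel weights; the ReLU of the weighted sum gives the heatmap overlaid on the frame (Fig. 3). Below is a minimal sketch of that computation for the model above; the layer name "mixed10" (the last Inception V3 mixed block in Keras) and the input preprocessing are assumptions for illustration.

```python
# Grad-CAM sketch: gradients of one output unit (flame or smoke) w.r.t. the
# last convolutional feature maps give per-channel weights; the ReLU of the
# weighted sum localizes that class in the frame.
import numpy as np
import tensorflow as tf

def grad_cam(model, image, class_index, conv_layer_name="mixed10"):
    """Return a heatmap in [0, 1] for `class_index` of a multi-label model."""
    grad_model = tf.keras.models.Model(
        model.input,
        [model.get_layer(conv_layer_name).output, model.output])

    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        score = preds[:, class_index]              # sigmoid score of flame or smoke

    grads = tape.gradient(score, conv_out)          # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))    # global-average-pool the gradients
    cam = tf.reduce_sum(weights[:, tf.newaxis, tf.newaxis, :] * conv_out, axis=-1)
    cam = tf.nn.relu(cam)[0]                        # keep only positive influence
    cam /= (tf.reduce_max(cam) + 1e-8)              # normalize to [0, 1]
    return cam.numpy()
```

For real-time monitoring, this would be evaluated per captured frame, with the coarse heatmap resized to the frame resolution before being overlaid for display.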

Keywords

Fig. 1 Configuration of the Fire Detection System using CNN and Grad-CAM

Fig. 2 Learning Curve Graphs (accuracy, loss) for Training/Validation

Fig. 3 Visualization of Flame/Smoke using Grad-CAM

Fig. 4 Analysis of False Positive/Negative Samples

Table 1 Multi-labeled Dataset

Table 2 Results of Validation and Test for Inception V3, Xception, and Inception ResNet V2

Table 3 Analysis of the test result
