Context-aware Video Surveillance System

  • An, Tae-Ki (Urban Transit Research Center, Korea Railroad Research Institute, Korea / School of Information & Communication Engineering, Sungkyunkwan University)
  • Kim, Moon-Hyun (School of Information & Communication Engineering, Sungkyunkwan University)
  • Received : 2010.10.21
  • Accepted : 2011.08.16
  • Published : 2012.01.01


A video analysis system for detecting events in video streams generally comprises several processes, including object detection, object-trajectory analysis, and recognition of the trajectories by comparison with an a priori trained model. However, these processes do not work well in complex environments with many occlusions, mirror effects, and/or shadow effects. We propose a new approach to a context-aware video surveillance system that detects predefined contexts in video streams. The proposed system consists of two modules: a feature extractor and a context recognizer. The feature extractor calculates a moving energy, which represents the amount of moving objects in a video stream, and a stationary energy, which represents the amount of still objects; situations and events are thus represented as changes in the moving and stationary energies. The context recognizer determines whether predefined contexts appear in a video stream using the moving and stationary energies extracted by the feature extractor. To train each context model and recognize the predefined contexts, we propose a new ensemble classifier, DAdaBoost, based on AdaBoost, one of the best-known ensemble classification algorithms. The proposed approach is expected to be robust in more complex environments that exhibit mirror and/or shadow effects.
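The abstract's moving- and stationary-energy features can be sketched roughly as follows. This is an illustrative assumption, not the paper's exact formulation: the thresholds, the crude first-frame background model, and the name `motion_energies` are all hypothetical, but the idea — moving energy from frame-to-frame change, stationary energy from pixels that differ from the background yet are no longer changing — matches the description above.

```python
import numpy as np

def motion_energies(frames, motion_thresh=15.0, still_thresh=3.0):
    """Per-frame moving and stationary energy for a grayscale clip.

    frames: array of shape (T, H, W).
    Moving energy   = fraction of pixels changing strongly between
                      consecutive frames (moving objects).
    Stationary energy = fraction of pixels that differ from a background
                      estimate but are no longer changing (still objects).
    Illustrative sketch only; the paper's formulation may differ.
    """
    frames = frames.astype(np.float64)
    background = frames[0].copy()  # crude background model: first frame
    moving, stationary = [], []
    for t in range(1, len(frames)):
        frame_diff = np.abs(frames[t] - frames[t - 1])
        bg_diff = np.abs(frames[t] - background)
        is_moving = frame_diff > motion_thresh
        is_still = (bg_diff > motion_thresh) & (frame_diff < still_thresh)
        moving.append(is_moving.mean())
        stationary.append(is_still.mean())
    return np.array(moving), np.array(stationary)
```

With this formulation, an object entering the scene spikes the moving energy, and the same object left behind (e.g. abandoned luggage) shows up as sustained stationary energy, which is what lets contexts be described by energy changes over time.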
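Since DAdaBoost builds on AdaBoost, a minimal sketch of the standard discrete AdaBoost base procedure with one-dimensional threshold stumps may help; the DAdaBoost-specific modifications are not reproduced here, and the function names are illustrative.

```python
import numpy as np

def train_adaboost(X, y, n_rounds=10):
    """Discrete AdaBoost with threshold-stump weak learners.
    X: (n, d) features; y: labels in {-1, +1}.
    Returns a list of weighted stumps (alpha, feature, threshold, polarity)."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)  # uniform initial sample weights
    ensemble = []
    for _ in range(n_rounds):
        best = None
        # exhaustively pick the stump with lowest weighted error
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = pol * np.where(X[:, j] > thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, pol, pred)
        err, j, thr, pol, pred = best
        err = max(err, 1e-10)  # avoid log(0) on a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)
        ensemble.append((alpha, j, thr, pol))
        # up-weight misclassified samples for the next round
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
    return ensemble

def predict(ensemble, X):
    """Sign of the alpha-weighted vote of all stumps."""
    score = sum(a * p * np.where(X[:, j] > t, 1, -1)
                for a, j, t, p in ensemble)
    return np.sign(score)
```

In the surveillance setting described above, each sample would be a moving/stationary-energy feature vector for a clip, and one such ensemble would be trained per predefined context.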


Grant : The 2nd Phase of R&D on the Urban Transit Standardization


  1. H.H. Nagel, "From Image Sequences Towards Conceptual Descriptions," Image and Vision Computing, vol. 6, no. 2, pp. 59-74, May 1988.
  2. Thomas M. Strat, "Employing Contextual Information in Computer Vision," in Proceedings of the ARPA Image Understanding Workshop, pp. 217-229, 1993.
  3. Gerard Medioni, Isaac Cohen, Francois Bremond, Somboon Hongeng and Ramakant Nevatia, "Event Detection and Analysis from Video Streams," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 23, no. 8, pp. 873-889, Aug. 2001.
  4. Gary R. Bradski and James W. Davis, "Motion Segmentation and Pose Recognition with Motion History Gradients," Machine Vision and Applications, vol. 13, no. 3, pp. 174-184, 2002.
  5. Aaron Bobick and James Davis, "An Appearance-based Representation of Action," in Proceedings of the International Conference on Pattern Recognition, vol. 1, pp. 307-312, Vienna, Austria, Aug. 1996.
  6. Edward H. Adelson and James R. Bergen, "Spatiotemporal Energy Models for the Perception of Motion," Journal of the Optical Society of America A, vol. 2, no. 2, pp. 284-299, Feb. 1985.
  7. Robert E. Schapire and Yoram Singer, "Improved Boosting Algorithms Using Confidence-rated Predictions," Machine Learning, vol. 37, no. 3, pp. 297-336, Dec. 1999.
  8. Prem Melville and Raymond J. Mooney, "Creating Diversity in Ensembles Using Artificial Data," Information Fusion, vol. 6, no. 1, pp. 99-111, Mar. 2004.
  9. Xuchun Li, Lei Wang and Eric Sung, "AdaBoost with SVM-based Component Classifiers," Engineering Applications of Artificial Intelligence, vol. 21, pp. 785-795, 2008.
  10. Ludmila I. Kuncheva and Christopher J. Whitaker, "Measures of Diversity in Classifier Ensembles and Their Relationship with the Ensemble Accuracy," Machine Learning, vol. 51, no. 2, pp. 181-207, May 2003.
  11. Louisa Lam, "Classifier Combinations: Implementations and Theoretical Issues," Multiple Classifier Systems, Lecture Notes in Computer Science, vol. 1857, pp. 78-86, Cagliari, Italy, 2000.
  12. Ron Kohavi, "A Study of Cross-validation and Bootstrap for Accuracy Estimation and Model Selection," in Proceedings of the 14th International Joint Conference on Artificial Intelligence, pp. 1137-1143, 1995.
  13. Thomas G. Dietterich, "An Experimental Comparison of Three Methods for Constructing Ensembles of Decision Trees: Bagging, Boosting, and Randomization," Machine Learning, vol. 40, no. 2, pp. 139-157, Aug. 2000.
  14. Dianhong Wang and Liangxiao Jiang, "An Improved Attribute Selection Measure for Decision Tree Induction," in Proceedings of the Fourth International Conference on Fuzzy Systems and Knowledge Discovery, vol. 4, pp. 654-658, 2007.
  15. Paul Viola and Michael Jones, "Robust Real-time Object Detection," International Journal of Computer Vision, vol. 57, no. 2, pp. 137-154, 2004.
  16. Tin Kam Ho, "The Random Subspace Method for Constructing Decision Forests," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 20, no. 8, pp. 832-844, Aug. 1998.
  17. Anders Krogh and Jesper Vedelsby, "Neural Network Ensembles, Cross Validation and Active Learning," Advances in Neural Information Processing Systems, vol. 7, pp. 231-238, 1995.
  18. Ki-Yeol Eom, Tae-Ki An, Gyu-Jin Kim, Gyu-Jin Jang, Moon-Hyun Kim, "Fast Object Tracking in Intelligent Surveillance System," Lecture Notes in Computer Science, vol. 5593, pp. 749-763, July 2009.
  19. Ki-Yeol Eom, Tae-Ki An, Gyu-Jin Kim, Gyu-Jin Jang, Jae-Young Jung, Moon-Hyun Kim, "Hierarchically Categorized Performance Evaluation Criteria for Intelligent Surveillance System," in Proceedings of the 2009 International Symposium on Web Information Systems and Applications, pp. 223-226, May 2009.
  20. Dong-Min Woo, Quoc-Dat Nguyen, "3D Building Detection and Reconstruction from Aerial Images Using Perceptual Organization and Fast Graph Search," Journal of Electrical Engineering & Technology, vol. 3, no. 3, pp. 436-443, Sep. 2008.

Cited by

  1. Multi-Sensor Signal based Situation Recognition with Bayesian Networks, vol. 9, no. 3, 2014.
  2. Directional pedestrian counting with a hybrid map-based model, vol. 13, no. 1, 2015.
  3. A heuristic search-based motion correspondence algorithm using fuzzy clustering, vol. 10, no. 3, 2012.
  4. Precision Security: Integrating Video Surveillance with Surrounding Environment Changes, vol. 2018, 2018.