 Title & Authors
Construction of a Video Dataset for Face Tracking Benchmarking Using a Ground Truth Generation Tool
Do, Luu Ngoc; Yang, Hyung Jeong; Kim, Soo Hyung; Lee, Guee Sang; Na, In Seop; Kim, Sun Hee
 Abstract
In the current generation of smart mobile devices, object tracking is one of the most important research topics in computer vision. Because human face tracking can be applied widely, collecting a dataset of face videos is necessary for evaluating the performance of a tracker and for comparing different approaches. Unfortunately, the well-known benchmark datasets of face videos are not sufficiently diverse, which makes it difficult to compare the accuracy of different tracking algorithms under varying conditions of illumination, background complexity, and subject movement. In this paper, we propose a new dataset of 91 face video clips recorded under such different conditions. We also provide a semi-automatic ground-truth generation tool that can easily be used to evaluate the performance of face tracking systems; the tool helps keep the definition of the ground truth consistent across frames. The resulting video dataset is used to evaluate well-known approaches and to test their efficiency.
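The abstract does not spell out the evaluation protocol, but benchmarks of this kind typically score a tracker by comparing its per-frame bounding boxes against the ground truth. The sketch below is only a generic illustration of such a comparison: the (x, y, w, h) box format, the iou and evaluate_tracker helpers, and the 0.5 success threshold are assumptions made for illustration, not the authors' method or tool.

# Minimal sketch (assumed, not the authors' tool): score a face tracker
# against per-frame ground-truth boxes using intersection-over-union (IoU).
# Boxes are (x, y, w, h); the 0.5 success threshold is an assumption.

def iou(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(ax, bx)
    iy = max(ay, by)
    iw = max(0.0, min(ax + aw, bx + bw) - ix)
    ih = max(0.0, min(ay + ah, by + bh) - iy)
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def evaluate_tracker(predicted, ground_truth, threshold=0.5):
    """Return mean IoU and the fraction of frames with IoU >= threshold."""
    scores = [iou(p, g) for p, g in zip(predicted, ground_truth)]
    mean_iou = sum(scores) / len(scores)
    success_rate = sum(s >= threshold for s in scores) / len(scores)
    return mean_iou, success_rate

# Example with two frames of hypothetical tracker output vs. ground truth.
pred = [(10, 12, 50, 50), (14, 15, 48, 52)]
gt = [(11, 11, 50, 50), (30, 30, 48, 52)]
print(evaluate_tracker(pred, gt))

A per-frame score of this kind is what a consistent ground truth makes meaningful: if the ground-truth box were defined differently from frame to frame, the same tracker output would receive different scores.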
 Keywords
Face Tracking; Ground-truth; Face Video Dataset
 Language
English