A Novel Multi-view Face Detection Method Based on Improved Real Adaboost Algorithm

  • Xu, Wenkai (Department of Information Communications and Engineering, University of Tongmyong) ;
  • Lee, Eung-Joo (Department of Information Communications and Engineering, University of Tongmyong)
  • Received : 2013.04.23
  • Accepted : 2013.11.09
  • Published : 2013.11.30

Abstract

Multi-view face detection has become an active research area in the last few years. In this paper, a novel multi-view human face detection algorithm based on an improved real Adaboost is presented. The real Adaboost algorithm is improved by a weighted combination of weak classifiers, and approximately optimal combination coefficients are obtained. We then prove that the role of the sample weight adjustment and the weak classifier training method is to guarantee the independence of the weak classifiers. A coarse-to-fine hierarchical face detector combining the efficiency of Haar features with a pose estimation phase based on our real Adaboost algorithm is proposed. The algorithm greatly reduces training time compared with the classical real Adaboost algorithm; in addition, it speeds up strong-classifier convergence and reduces the number of weak classifiers. For frontal face detection, experiments on the MIT+CMU frontal face test set yield a 96.4% detection rate with 528 false alarms; on the real-time multi-view face test set, the method achieves a 94.7% detection rate. The experimental results verify the effectiveness of the proposed approach.

1. Introduction

Over the past decade, face recognition has emerged as an active research area in computer vision with numerous potential applications including biometrics, surveillance, human-computer interaction, video-mediated communication and content-based access of images and video databases.

Face detection is the first step in automated face recognition. Its reliability has a major influence on the performance and usability of the entire face recognition system. Given a single image or a video, an ideal face detector should be able to identify and locate all present faces regardless of their position, scale, orientation, age, and expression. Furthermore, detection should be unaffected by extraneous illumination conditions and the image or video content. Face detection can be performed based on several cues: skin color, motion, facial or head shape, facial appearance, or a combination of these parameters [1].

Most successful face detection algorithms are appearance-based and do not use other cues. The processing is done as follows: an input image is scanned at all possible locations and scales by a sub-window, and face detection is posed as classifying the pattern in the sub-window as either face or non-face. The face/non-face classifier is learned from face and non-face training examples using statistical learning methods.

Statistical methods have been widely adopted in face detection [2]. Moghaddam and Pentland [3-6] introduced the Eigenface method, where the probability of face patterns is modeled by the “distance-in-feature-space” (DIFS) and “distance-from-feature-space” (DFFS) criteria. Osuna et al. [7,8] presented an SVM-based approach to frontal-view face detection. Unlike the Eigenface method where only the positive density is estimated, this approach seeks to learn the boundary between face and non-face patterns. After learning, only the ‘important’ examples located on the boundary are selected to build the decision function. Soulie et al. [9] described a system using neural networks (NNs) for face detection. They implemented a multi-modal architecture where various rejection criteria are employed to trade-off false recognition against false rejection. Sung and Poggio [10] also presented a NN based face detection system. They designed six positive prototypes (faces) and six negative prototypes (non-faces) in the hidden layer. Supervised learning is performed to determine the weights of these prototypes to the output node. Rowley et al. [11] introduced a NN based upright frontal face detection system. A retinal connected NN examines small windows of an image and decides whether each window contains a face. This work was later extended to rotation invariant face detection by designing an extra network to estimate the rotation of faces in the image plane [12,13].

Multi-view face detection (MVFD) aims to detect upright faces in images with ±90-degree rotation-out-of-plane (ROP) pose changes. Rotation invariance means detecting faces with 360-degree rotation-in-plane (RIP) pose changes [14].

Recently, Viola and Jones [15] presented an approach to fast face detection using simple rectangle features which can be efficiently computed from the so-called Integral Image; Adaboost and a cascade structure are then used to train a face detector based on these features. This system prompted the development of more general systems such as rotation-invariant frontal face detection and MVFD [16]. Li et al. [16,17] adopted similar but more general features which can be computed from block differences. Also, FloatBoost was proposed to overcome the monotonicity of sequential Adaboost learning. Existing work on MVFD includes Schneiderman et al.'s work [18] based on a Bayesian decision rule and Li et al.'s [16] pyramid-structured detector, which was reported as the first real-time MVFD system. In Hongming Zhang et al.'s work [19], the emphasis is on designing efficient binary classifiers by learning informative features through minimizing the error rate of an ensemble ECOC multi-class classifier. In order to meet the needs of various applications, a real-time rotation-invariant MVFD system is the ultimate goal. Although present MVFD methods can be applied to this problem by rotating images and repeating the procedure, the process becomes time consuming and the false alarm rate increases.

In this paper, we propose a novel improved real Adaboost algorithm for MVFD based on Schapire and Singer's confidence-rated Adaboost classifiers [20]. LUT weak classifiers are used to train processed Haar features (mirrored and rotated) with our real Adaboost algorithm. In addition, a pose estimator based on the confidence of the strong classifier is proposed, which uses the first four cascade layers to estimate the face pose. We tested our MVFD method on the CMU and MIT face databases; the experimental results show that our detection system performs better than the discrete Adaboost and original real Adaboost algorithms. In addition, it speeds up strong-classifier convergence, reduces the number of weak classifiers, and decreases detection time. The detection results and running time are satisfactory.

 

2. Improved Real Adaboost Learning Algorithm

2.1 Real Adaboost Algorithm

For Adaboost learning, a complex nonlinear strong classifier HM(x) is constructed as a linear combination of M simpler, easily constructible weak classifiers hm(x). The Adaboost learning procedure aims at learning a sequence of best weak classifiers hm(x) and their combining weights αm in:
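In the standard Adaboost formulation (written here from the description above, since the equation itself is not reproduced), this combination is

```latex
H_M(x) = \sum_{m=1}^{M} \alpha_m h_m(x), \qquad
\text{decision: } \operatorname{sign}\bigl(H_M(x)\bigr)
```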

It solves the following three fundamental problems: (1) learning effective features from a large features set; (2) constructing weak classifiers, each of which is based on one of the selected features; and (3) boosting the weak classifiers to construct a strong classifier.

The real Adaboost algorithm uses confidence-rated weak classifiers, which map the sample space X to a real-valued space R instead of producing a Boolean prediction; the procedure has the following form [20], [21]:

● Given dataset S = {(x1, y1), ... ,(xm, ym)};

where (xi, yi) ∈ X × {-1, +1}; H is the weak classifier pool and T the number of weak classifiers to be selected.

● Initialize the sample distribution D1(i)= 1/m.

● For t = 1, ... ,T

(1). For each weak classifier h in H do:

where l = ±1.

where ε is a small positive constant.

(2). Select the ht minimizing Z, i.e.

(3). Update the sample distribution

And normalize Dt+1 to a probability distribution function.

● The final strong classifier H is

It can be seen that (2) and (3) define the output of the weak classifier, so all that is left to the weak learner is to partition the domain X.
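For reference, the quantities used in steps (1)-(3) above follow the standard domain-partitioning formulation of Schapire and Singer [20]; the following is a reconstruction in that standard notation, where X1, ..., Xn is the partition of X produced by the weak learner:

```latex
% Weight of class l = \pm 1 falling into sub-range X_j under distribution D_t
W_l^{\,j} = \sum_{i:\, x_i \in X_j,\; y_i = l} D_t(i)

% Confidence-rated output of the weak classifier on X_j
h(x) = \tfrac{1}{2}\ln\frac{W_{+1}^{\,j} + \varepsilon}{W_{-1}^{\,j} + \varepsilon},
\qquad x \in X_j

% Normalization factor minimized when selecting h_t
Z = 2\sum_{j=1}^{n}\sqrt{W_{+1}^{\,j}\,W_{-1}^{\,j}}

% Sample re-weighting (followed by normalization of D_{t+1})
D_{t+1}(i) = D_t(i)\exp\bigl(-y_i\,h_t(x_i)\bigr)

% Final strong classifier
H(x) = \operatorname{sign}\Bigl(\sum_{t=1}^{T} h_t(x)\Bigr)
```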

2.2 Improved Real Adaboost Algorithm

The real Adaboost algorithm proposed by Schapire et al. [20] extends the discrete two-valued output to a continuous confidence output. The algorithm takes Equation (3) as the output of every weak classifier, so the core task left to the weak learner is to determine a partition of the dataset. We therefore improve both the partitioning of the dataset and the control of the weight adjustment.

● Given dataset S = {(x1, y1), ... ,(xm, ym)};

where (xi, yi) ∈ X × {-1, +1}; H is the weak classifier pool, T the number of weak classifiers to be selected, and s the number of candidate features; three weight parameters wbase, wfactor and windex are given.

● Initialization

(1) Initialize the sample distribution.

(2) Initialize the Gray value interval distribution of Haar feature.

Preprocess the Haar feature corresponding to each weak classifier by dividing its range evenly into n sub-ranges. A partition of the range corresponds to a partition of X: X1k, ..., Xnk, k = 1, 2, ..., s.

● For t = 1, ..., T

(1) For each weak classifier hʹ in Hʹ do:

where l = ±1, k=1,…,s, j=1, …,n.

where ε is a small positive constant.

(2). Select the hʹt minimizing Z’, i.e.

(3). Update the sample distribution

where the training weight βt is given as

(4). Normalize Dʹt+1 to a probability distribution function

● The final strong classifier H is

where b is a threshold with default value zero. The confidence of H is defined as
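A plausible reading of this weighted combination and its confidence, based on the description above (the exact forms of Equations (17) and (18) are assumptions here, not reproduced from the original), is:

```latex
% Weighted strong classifier with layer threshold b (default 0)
H(x) = \operatorname{sign}\Bigl(\sum_{t=1}^{T}\beta_t\,h'_t(x) - b\Bigr)

% Confidence used later for pose estimation
\operatorname{Conf}_H(x) = \sum_{t=1}^{T}\beta_t\,h'_t(x) - b
```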

2.3 Analysis of Improved Real Adaboost Algorithm

The original real Adaboost algorithm uses a local-optimum criterion, so its convergence is better than that of discrete Adaboost; however, its time complexity is still large. Let N be the number of training samples, M the number of candidate features, and T the number of weak classifiers in a strong classifier. The time complexity of training one weak classifier is O(M * N * N), and that of a strong classifier is O(T * M * N * N).

The improved real Adaboost algorithm we propose divides the sample space X evenly into n sub-ranges for each weak classifier h, so the space partition X1k, ..., Xnk corresponding to each h is obtained in advance, and the training sample space no longer needs to be re-partitioned when selecting the best weak classifier at each step. Only the updated weight distribution Dtʹ is used, and Wʹlj is accumulated under Dtʹ by simple addition. The complexity of training each weak classifier therefore depends only on accumulating the sample weights by addition. The time complexity of a weak classifier becomes O(M * N), that of a strong classifier becomes O(T * M * N), and the training speed is improved by a factor of O(N).
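A minimal sketch of this pre-partitioning idea, assuming Haar feature values normalized to [0, 1] and n equal-width bins per feature (names such as `precompute_bins` and `select_best_weak_classifier` are illustrative, not from the paper):

```python
import numpy as np

def precompute_bins(feature_values, n_bins):
    """feature_values: (M, N) array of Haar responses normalized to [0, 1].
    The bin index of every (feature, sample) pair is computed once, before boosting."""
    bins = np.minimum((feature_values * n_bins).astype(int), n_bins - 1)
    return bins  # shape (M, N)

def select_best_weak_classifier(bins, labels, D, n_bins, eps=1e-7):
    """One boosting round: accumulate class weights per bin by addition only
    (O(M * N) per round), then pick the feature minimizing Z."""
    M, N = bins.shape
    best = None
    for k in range(M):                      # each candidate Haar feature / LUT classifier
        W_pos = np.zeros(n_bins)
        W_neg = np.zeros(n_bins)
        for i in range(N):                  # simple additions under the current distribution D
            if labels[i] == +1:
                W_pos[bins[k, i]] += D[i]
            else:
                W_neg[bins[k, i]] += D[i]
        Z = 2.0 * np.sum(np.sqrt(W_pos * W_neg))
        if best is None or Z < best[0]:
            # LUT output per bin: 0.5 * ln((W+ + eps) / (W- + eps))
            lut = 0.5 * np.log((W_pos + eps) / (W_neg + eps))
            best = (Z, k, lut)
    return best  # (Z, selected feature index, LUT of confidence outputs)
```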

The real Adaboost algorithm is improved through weight adjustment and dataset partitioning. Reference [22] proved that pre-partitioning the dataset is practicable; here we prove that the control of the weight adjustment is as well.

ht(x) is defined on the partition S = S1 ∪ S2 ∪ … ∪ Sn, and Wlj is defined as before. We define a group of stochastic variables as:

Let

The strong classifier of the original real Adaboost algorithm is the sum of all weak classifiers; we improve it by turning this sum into a weighted sum, so the strong classifier is defined as:

βt is given by Equation (16); its mean value and variance can be expressed as:

Let the resulting stochastic variable have mean value μ and variance σ²; then the theorem below can be proved.

THEOREM: When the ht(x) are independent, μ > 0 and the combination coefficients are chosen as above, the error rate of H(x) is bounded; when T is very large, if the ratio is bounded but large, these combination coefficients are approximately the best.

PROOF: H(x) makes an error on a sample if and only if R ≤ 0, so the error rate of H(x) can be regarded as the probability of this event, namely ε = Pr[R ≤ 0].

We write the probability density function of R as g(r), so:

And

That is, σ²/μ² attains its minimum at this point. Substituting Equation (21) into Equation (20), we obtain:

So far, the first half of the theorem is proved. Next, we prove that when T → ∞, the minimum point of σ²/μ² is also the minimum point of ε = Pr[R ≤ 0]. Writing the mean value and variance of the βtRt as above, we have σ²/μ² = 1/θ. Since this quantity is bounded, by the limit theorem, when T → ∞ the sum follows a normal distribution with mean θ/T, and its standardized form follows the standard normal distribution. For a stochastic variable Y belonging to the standard normal distribution, the corresponding probability is a monotone function of v as v → ∞. When T → ∞ and θ → ∞,

So ε is minimal when σ²/μ² = 1/θ is minimal.

This shows that the mean-variance ratio can be regarded as a normalization, because that of yiβtht(xi) is 1; the theorem indicates that the confidences of the weak classifiers are better under the normalization condition. The weight should therefore be adjusted by

 

3. Haar Feature for Multi-View Face Detection

The rectangular masks used for visual object detection are rectangles tessellated into smaller black and white rectangles. These masks are designed to match the visual recognition task to be solved and are known as Haar-like wavelets; convolving them with a given image produces Haar-like features. Viola [15] used four features (Fig. 1(a)~(d)) for face detection and these features performed well on faces. Fasel [23] proposed a new one (see Fig. 1(e)).

Fig. 1. Face detection features proposed by Viola and Fasel: (a), (b) edge features, (c) line feature, (d) diagonal feature, (e) center-surround feature.
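As a concrete illustration of how such features are evaluated from an integral image (mentioned in Section 1), the sketch below computes a two-rectangle edge feature; the coordinates and helper names are illustrative, not the paper's implementation.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero top row and left column,
    so ii[y, x] equals the sum of img[:y, :x]."""
    ii = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return np.pad(ii, ((1, 0), (1, 0)))

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle with top-left corner (x, y), width w, height h."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def edge_feature(ii, x, y, w, h):
    """Two-rectangle edge feature as in Fig. 1(a): left half minus right half."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)

# Example on a 20 x 20 sample window (the training window size used in Section 5)
window = np.random.rand(20, 20)
ii = integral_image(window)
value = edge_feature(ii, x=4, y=4, w=8, h=10)
```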

This study aims at multi-view face detection (MVFD), namely detecting upright faces with ±90° rotation-out-of-plane (ROP) pose changes, and at rotation invariance, which means detecting faces with 360° rotation-in-plane (RIP) pose changes. According to the ROP angle, the 180-degree range is divided into [-90°, -75°], [-75°, -45°], [-45°, -15°], [-15°, +15°], [+15°, +45°], [+45°, +75°] and [+75°, +90°]. According to the RIP angle, faces are divided into twelve categories, each covering 30 degrees. In total there are 7*12 view categories corresponding to 84 detectors. With the Adaboost algorithm, 84 face detectors would have to be designed for these 84 multi-view categories, which is an enormous amount of training work. However, since Haar features can be flipped horizontally and rotated by 90 degrees, some detectors can be generated from the original ones; see Fig. 2 and Table 1.

Table 1. Geometric transformation of Haar features ("R" means rotate 90° and "M" means mirror transform).

Fig. 2. Mirrored and rotated detectors.

The original detectors of the frontal view are the 60-degree and 90-degree ones. The original detectors of the half-profile view are the left half-profile ones, which are the 60-degree, 90-degree and 120-degree detectors. The full-profile case is the same as the half-profile case.

According to the analysis above, 7*12 = 84 classifier categories would originally have to be built, but only 4*3 = 12 classifier categories are needed now. "4" means the four of the seven ROP angle regions that cannot be obtained from one another by horizontal flipping, and "3" means the 0°, 30° and 60° categories among the twelve RIP categories. The other views can be obtained by the Haar feature transformations introduced above.

Adaboost is a learning algorithm that selects a set of weak classifiers from a large hypothesis space to construct a strong classifier. Basically, the performance of the final strong classifier originates in the characteristics of its weak hypothesis space. In [12], threshold weak classifiers are used as the hypothesis space input to the boosting procedure; the main disadvantage of the threshold model is that it is too simple to fit complex distributions. Therefore, in this paper we use a real-valued LUT weak classifier [24], which is well suited to the real Adaboost algorithm.

Assume fHaar is the Haar feature value, normalized to [0, 1], and that this range is divided evenly into n sub-ranges; the j-th LUT item then corresponds to the sub-range:

A partition of the range corresponds to a partition of X. Thus, the weak classifier can be defined as: if fHaar(x) ∈ binj, then

where l = ±1, j = 1, …, n. Given the characteristic function

where j = 1, …, n.

The LUT weak classifier can be formally expressed as:
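A standard reconstruction of this formula, combining the bins and the characteristic function above and following [20] and [24] (the paper's own equation is not reproduced here):

```latex
% j-th bin of the normalized feature range
\mathrm{bin}_j = \Bigl[\tfrac{j-1}{n}, \tfrac{j}{n}\Bigr), \qquad j = 1,\dots,n

% Characteristic (indicator) function of the j-th bin
B_j(u) = \begin{cases} 1, & u \in \mathrm{bin}_j \\ 0, & \text{otherwise} \end{cases}

% LUT weak classifier: a constant confidence per bin of f_{Haar}(x)
h(x) = \sum_{j=1}^{n} \frac{1}{2}
       \ln\!\frac{W_{+1}^{\,j} + \varepsilon}{W_{-1}^{\,j} + \varepsilon}\;
       B_j\bigl(f_{Haar}(x)\bigr)
```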

The LUT classifier can approximate almost any probability distribution. Once the size of the sample sub-window is fixed, all candidate Haar features are fixed as well; each Haar feature corresponds to one LUT weak classifier, and together they make up our weak classifier space H.

Each layer of the cascade is a strong classifier trained by the improved real Adaboost algorithm we proposed. The threshold b of each layer (as in Equation (17)) is set so that 99.9% of faces can pass while as many negative examples as possible are rejected. The classifiers in the later layers are more complex and contain more LUT-based weak classifiers; therefore, they have stronger classification capability.

The improved real Adaboost algorithm performs better than the discrete Adaboost algorithm in our cascade classifier, since real Adaboost outputs a continuous confidence value (as in Equation (18)) rather than a binary decision, and our LUT weak classifier works especially well with it. The cascade training algorithm can be explained as follows (a sketch of this loop is given after the listing):

● Set the maximum false positive rate per layer to f, the minimum passing (detection) rate per layer to d, and the target overall false positive rate to Ftarget. Let Pos be the positive training set and Neg the negative training set.

● Initialize: F1 = 1, i = 1.

● While Fi > Ftarget

1) Train the i-th layer using Pos and Neg; set the threshold value b so that the layer false positive rate fi is lower than f and the passing rate is higher than d.

2) Fi+1 = Fi × fi; i = i + 1; Neg ← ∅.

3) If Fi+1 > Ftarget, scan non-face images using the current cascade classifier and collect all false positives into Neg.
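A compact sketch of this cascade-training loop, assuming a `train_layer` routine that boosts weak classifiers until the per-layer rates f and d are met, and a `scan_negatives` routine that bootstraps false positives (both names are illustrative, not the paper's code):

```python
def train_cascade(pos, neg, f, d, F_target, train_layer, scan_negatives):
    """Cascade training in the style described above.
    f: max false positive rate per layer, d: min passing (detection) rate per layer,
    F_target: target overall false positive rate."""
    layers = []
    F = 1.0                       # overall false positive rate so far
    while F > F_target and neg:
        # Train one strong classifier and choose its threshold b so that
        # its false positive rate f_i <= f and its passing rate >= d.
        layer, f_i = train_layer(pos, neg, max_fp=f, min_det=d)
        layers.append(layer)
        F *= f_i                  # overall FP rate is the product of layer rates
        if F > F_target:
            # Bootstrap: rescan non-face images with the current cascade and
            # keep only windows it wrongly accepts as the new negative set.
            neg = scan_negatives(layers)
    return layers
```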

 

4. Pose Estimation based on Cascade Classifier

Ideally, every candidate sub-window would be scanned by every trained face detector in a linear detection process, but this takes too long for real-time operation. We therefore present a strategy for improving timeliness: pose estimation.

Rowley first introduced pose estimation into ANN-based multi-view face detection [25]: sub-classes are designed according to different face poses, and an ANN, called the estimator, is trained over these classes. When the system decides whether a sub-window of the input image contains a face, it first estimates the pose using the pose estimator and then sends the sub-window to the corresponding face detector for further processing. Obviously, the pose estimator has a direct influence on the detection results.

In our study, we do not train a separate pose estimator; instead, we use the confidence of the strong classifier defined by Equation (18). Suppose there are d view-based detectors and each detector has n layers. Write the confidence of the j-th layer of the i-th detector as Confi(j), i = 1, ..., d, j = 1, ..., n. The confidence of the first k layers can then be expressed as:

The pose estimator based on the first m layers can then be defined as:

In this process, x is regarded as a non-face if it is rejected within the first m layers, and no further pose estimation is needed. Unlike Rowley's method, our pose estimator is not separate from face detection; therefore it introduces no extra computation. Fig. 3 illustrates the proposed pose estimation procedure.

Fig. 3. Pose estimation framework.

In this study, there are four view-based detectors; each cascade classifier contains 16 layers, and the first four layers are used to estimate the face pose.
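A sketch of this confidence-based pose estimation, assuming each of the d = 4 view-specific cascades exposes per-layer confidences Confi(j) for a window x (the `layer_confidence` method and the rejection test are illustrative assumptions):

```python
def estimate_pose(window, detectors, m=4):
    """Run only the first m layers of every view-specific cascade.
    A window rejected by all of them within m layers is treated as non-face;
    otherwise the view with the largest accumulated confidence is selected."""
    best_view, best_conf = None, float("-inf")
    for i, det in enumerate(detectors):          # d view-based detectors
        conf, passed = 0.0, True
        for j in range(m):                       # first m layers only
            layer_conf = det.layer_confidence(window, j)   # Conf_i(j)
            if layer_conf < 0:                   # below the layer threshold b: rejected
                passed = False
                break
            conf += layer_conf                   # accumulated confidence of the first layers
        if passed and conf > best_conf:
            best_view, best_conf = i, conf
    return best_view   # None means non-face; otherwise index of the chosen detector
```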

 

5. Experimental Results and Analysis

The multi-view face detection system we designed runs on an Intel(R) Core(TM)2 CPU (2.93 GHz) with a web camera, under Windows 7 with Visual Studio 2008.

Our face datasets include the MIT-CBCL database and the CMU face database, which contain frontal and profile faces.

5.1 Comparison of Convergence on Adaboost algorithms

A. Experiment for Convergence of Single Strong Classifier

To compare the convergence of the original real Adaboost and the improved real Adaboost algorithm proposed in this paper, we design an experiment on a single strong classifier. The training set includes 3500 faces and 10000 non-faces, and training is stopped when the detection rate reaches 99.9%. The dynamic weight coefficients of the improved real Adaboost algorithm are set as wbase = 1.5, wfactor = 0.5, windex = 2.0. The experimental result is shown in Fig. 4; the "Detection Rate" is measured on the training set.

The performance curve in Fig. 4 shows that the convergence of the improved real Adaboost algorithm is better than that of the original real Adaboost algorithm. Our real Adaboost meets the stopping criterion with 31 weak classifiers, whereas the original needs 67.

Fig. 4. Comparison of the two Adaboost algorithms.

B. Experiment for Cascade Classifier

Under the same training conditions, we test the performance of three cascade classifiers trained with discrete Adaboost, the original real Adaboost, and our algorithm. 3500 face images resized to 20 × 20 are selected as the training set, and the bootstrapping method is used to select 10000 non-face samples from 500 images containing no human face. 127 images (containing 332 faces) are used as the test set. Each algorithm uses the same 5000 Haar-like features; Table 2 shows the detailed performance comparison.

Table 2. Performance comparison of the three Adaboost algorithms.

From Table 2 we can see that our algorithm performs better than the discrete Adaboost algorithm, and its detection time is 8 ms shorter than that of the original real Adaboost algorithm. The number of training layers is larger than for the original real Adaboost because the stopping criterion is based on the detection rate: each layer reaches a high detection rate with fewer weak classifiers, but since fewer windows are rejected as non-faces per layer, more layers are needed to reject all non-faces. Moreover, our method uses 1042 fewer weak classifiers than the original real Adaboost algorithm.

5.2 Experiments of Multi-View Face Detection

A. Frontal Face Detection

We use the CMU frontal face database and the MIT face database [26], which consists of a training set of 6977 images (2429 faces and 4548 non-faces) and a test set of 24045 images (472 faces and 23573 non-faces). The images are 19 × 19 grayscale and are renormalized to 20 × 20. With these samples, the improved real Adaboost algorithm trains a detector with 16 layers and 742 weak classifiers. Compared to Viola's method, which has 62 layers and 4297 features, and Schneiderman's method [18], our method is much more efficient. Fig. 5 shows the ROC curve and Fig. 6 shows face detection results. Our method clearly converges better on the training set.

Fig. 5. The ROC curves of our detectors on the MIT frontal face test set.

Fig. 6. Frontal face detection results on the CMU+MIT test set.

We tested the profile detector on the CMU profile face test set, which contains 208 images with 441 faces, 307 of which are non-frontal. For the training set, about 34000 normalized images (about 8000 half-profile faces, 10000 full-profile faces and 16000 frontal faces) were collected from the Internet. According to the analysis in this paper, only 12 classifier categories are trained for multi-view face detection over the ROP and RIP angles. Table 3 and Fig. 7 show the experimental results of multi-view face detection.

Table 3. Multi-view face detection results on the CMU profile face test set.

Fig. 7. Multi-view face detection results on the CMU profile face test set.

The method in reference [18] reached an 85.5% detection rate with 91 false alarms; the performance of our method is much better than Schneiderman's method, reaching a higher detection rate with fewer false alarms. On the other hand, since the first layers of the cascade classifiers are used to estimate the face pose, the results with pose estimation (PE) are better than those without PE, as shown in Fig. 8.

Fig. 8. The ROC curves of our algorithm on the CMU profile face data set.

To test our method in real time, we designed a multi-view face detection system based on VS 2008 and the OpenCV library; three persons with different poses, expressions and other factors (glasses, hair occlusion and so on) form the test set. The detection results are shown in Fig. 9.

Fig. 9. Multi-view face detection in real time.

Each frame is 320 × 240, and the detection time is about 60 ms per frame with a high correct detection rate. The detection system is robust and effective.

 

6. Conclusion

In this paper, we present a novel algorithm for multi-view face detection. An improved real Adaboost algorithm for training on Haar features is proposed that uses dynamic weights and pre-partitioned samples, and its viability is proved probabilistically. Compared to the original real Adaboost algorithm, the time complexity of a weak classifier drops to O(M * N) and that of a strong classifier to O(T * M * N), so the training speed is improved by a factor of O(N).

Based on our processing of Haar features for multi-view face detection (mirroring and rotation), only 12 classifier categories need to be built to cover the ROP and RIP angle changes, which greatly reduces the training complexity. LUT weak classifiers are then used to boost the Haar features so that the real Adaboost algorithm can be applied. Moreover, instead of training a separate pose estimator, we use the confidence of the strong classifier defined by our real Adaboost algorithm; the first 4 of the 16 cascade layers are used to estimate the face pose, so no extra computation is needed for pose estimation.

We tested our algorithm on the CMU and MIT face databases. The experimental results show that the convergence of the improved real Adaboost algorithm is better than that of the original real Adaboost algorithm. In addition, our multi-view face detection system achieves a high detection rate with an acceptable number of false alarms and satisfactory speed.

References

  1. Wenkai Xu and Eung-Joo Lee, "A Combinational Algorithm for Multi-Face Recognition," International Journal of Advancements in Computing Technology, vol. 4, no. 13, pp. 146-154, 2012.
  2. Yu-Bu Lee and Sukhan Lee, "Robust Face Detection Based on Knowledge-Directed Specification of Bottom-Up Saliency," ETRI Journal, vol. 33, no. 4, pp. 600-610, Aug. 2011. https://doi.org/10.4218/etrij.11.1510.0123
  3. B. Moghaddam and A. Pentland, "Face Recognition Using View-based and Modular Eigenspaces," in Proc. of Automatic Systems for the Identification and Inspection of Humans, July 1994. https://doi.org/10.1117/12.191877
  4. B. Moghaddam and A. Pentland, "Probabilistic visual learning for object representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 137-143, 1997. https://doi.org/10.1109/34.598227
  5. B. Moghaddam, W. Wahid and A. Pentland, "Beyond Eigenfaces: probabilistic matching for face recognition," in Proc. of IEEE International Conference on Automatic Face and Gesture Recognition, pp. 30-35, 1998. https://doi.org/10.1109/AFGR.1998.670921
  6. A. Pentland, B. Moghaddam and T. Starner, "View-based and modular eigenspaces for face recognition," in Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pp. 84-94, 1994. https://doi.org/10.1109/CVPR.1994.323814
  7. E. Osuna, R. Freund and F. Girosi, "Support vector machines: training and applications," Technical report, Massachusetts Institute of Technology, AI Memo 1602, 1997. http://hdl.handle.net/1721.1/7290
  8. E. Osuna, R. Freund and F. Girosi, "Training support vector machines: an application to face detection," in Proc. of Computer Vision and Pattern Recognition, pp. 130-136, 1997. https://doi.org/10.1109/CVPR.1997.609310
  9. F. Soulie, F. Viennet and B. Lamy, "Multi-modular neural network architectures: applications in optical character and human face recognition," International Journal of Pattern Recognition and Artificial Intelligence, vol. 7, no. 4, pp. 721-755, 1993. https://doi.org/10.1142/S0218001493000364
  10. K. Sung and T. Poggio, "Example-based learning for view-based human face detection," Technical report, Massachusetts Institute of Technology, AI Memo 1521, 1994. https://doi.org/10.1109/34.655648
  11. H. Rowley, S. Baluja and T. Kanade, "Neural network-based face detection," in Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pp. 203-207, 1997. https://doi.org/10.1109/34.655647
  12. H. Rowley, S. Baluja and T. Kanade, "Rotation invariant neural network-based face detection," Technical report, CMU-CS-97-201, 1997. https://doi.org/10.1109/CVPR.1998.698585
  13. H. Rowley, S. Baluja and T. Kanade, "Neural network-based face detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 1, pp. 23-38, 1998. https://doi.org/10.1109/34.655647
  14. Abraham Mathew and R. Radhakrishnan, "Versatile Approach for Feature Extraction of Image Using Haar-Like Filter," KIIT Journal of Research & Education, vol. 2, issue 2, pp. 18-22, 2013.
  15. P. Viola and M. Jones, "Rapid Object Detection using a Boosted Cascade of Simple Features," in Proc. of IEEE Conf. on Computer Vision and Pattern Recognition, pp. 511-518, 2001. https://doi.org/10.1109/CVPR.2001.990517
  16. S. Li, L. Zhu, Z. Zhang and H. Zhang, "Statistical learning of multi-view face detection," in Proc. of ECCV'02, pp. 67-81, 2002. https://doi.org/10.1007/3-540-47979-1_5
  17. Z. Zhang, L. Zhu, S. Li and H. Zhang, "Real-time multi-view face detection," in Proc. of IEEE International Conference on Automatic Face and Gesture Recognition, pp. 149, 2002. https://doi.org/10.1109/AFGR.2002.1004147
  18. H. Schneiderman and T. Kanade, "A Statistical Method for 3D Object Detection Applied to Faces and Cars," in Proc. of IEEE Conf. on Computer Vision and Pattern Recognition '00, pp. 746-751, 2000. https://doi.org/10.1109/CVPR.2000.855895
  19. Hongming Zhang, Wen Gao, Xilin Chen, Shiguang Shan and Debin Zhao, "Robust Multi-View Face Detection Using Error Correcting Output Codes," in Proc. of ECCV 2006, pp. 1-12, 2006.
  20. R. E. Schapire and Y. Singer, "Improved Boosting Algorithms Using Confidence-rated Predictions," Machine Learning, vol. 37, issue 3, pp. 297-336, 1999. https://doi.org/10.1023/A:1007614523901
  21. Hendro Baskoro, Jun-Seong Kim and Chang-Su Kim, "Mean-Shift Object Tracking with Discrete and Real AdaBoost Techniques," ETRI Journal, vol. 31, no. 3, pp. 282-291, June 2009. https://doi.org/10.4218/etrij.09.0108.0372
  22. Bo Wu, Chang Huang and Haizhou Ai, "A Multi-view Face Detection Based on Real Adaboost Algorithm," Journal of Computer Research and Development, vol. 42, no. 9, pp. 812-817, 2005. https://doi.org/10.1360/crad20050924
  23. I. Fasel, B. Fortenberry and J. Movellan, "A generative framework for real time object detection and classification," Computer Vision and Image Understanding, vol. 98, issue 1, pp. 182-210, 2005. https://doi.org/10.1016/j.cviu.2004.07.014
  24. Bo Wu, Haizhou Ai and Chang Huang, "LUT-Based Adaboost for Gender Classification," in Proc. of AVBPA '03, pp. 104-110, 2003. https://doi.org/10.1007/3-540-44887-X_13
  25. R. Feraud, O. Bernier, Jean-Emmanuel Viallet and Michel Collobert, "A fast and accurate face detector based on neural networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 1, pp. 42-53, 2001. https://doi.org/10.1109/34.899945
  26. B. Weyrauch, J. Huang, B. Heisele and V. Blanz, "Component-based Face Recognition with 3D Morphable Models," in Proc. of First IEEE Workshop on Face Processing in Video, 2004. https://doi.org/10.1109/CVPR.2004.41
