Double-Bagging Ensemble Using WAVE
Kim, Ahhyoun; Kim, Minji; Kim, Hyunjoong
A classification ensemble method aggregates different classifiers obtained from training data to classify new data points. Voting algorithms are the typical tools for summarizing the outputs of the classifiers in an ensemble. WAVE, proposed by Kim et al. (2011), is a weight-adjusted voting algorithm that assigns an optimal weight vector to the classifiers in an ensemble. In this study, we applied the WAVE algorithm to the double-bagging method (Hothorn and Lausen, 2003) when constructing an ensemble, to see whether performance improves significantly. The results showed that double-bagging with the WAVE algorithm performs better than other ensemble methods that employ plurality voting, and that it is comparable to the random forest method when the ensemble size is large.
Keywords: ensemble; double-bagging; voting; classification; discriminant analysis; cross-validation
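A minimal sketch of the two ingredients combined in the paper is given below, written in Python with scikit-learn (an assumption; the paper itself reports no code). Each ensemble member is built by double-bagging: a tree grown on a bootstrap sample whose predictors are augmented with linear discriminant variables estimated from the out-of-bag cases (Hothorn and Lausen, 2003). The out-of-bag accuracy used as a voting weight is only an illustrative stand-in for the WAVE optimal weight vector, which Kim et al. (2011) compute iteratively from the ensemble's performance matrix; the function names fit_double_bagging and predict_weighted are likewise hypothetical.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier

def fit_double_bagging(X, y, n_estimators=50, seed=0):
    # Double-bagging (Hothorn and Lausen, 2003): for each member, fit LDA
    # on the out-of-bag cases and append its discriminant variables to the
    # bootstrap sample's predictors before growing the tree.
    rng = np.random.default_rng(seed)
    n = len(y)
    ensemble = []
    for _ in range(n_estimators):
        boot = rng.integers(0, n, size=n)        # bootstrap indices
        oob = np.setdiff1d(np.arange(n), boot)   # out-of-bag indices
        # Assumes every class appears among the out-of-bag cases.
        lda = LinearDiscriminantAnalysis().fit(X[oob], y[oob])
        aug = np.hstack([X[boot], lda.transform(X[boot])])
        tree = DecisionTreeClassifier(random_state=0).fit(aug, y[boot])
        # Out-of-bag accuracy as a simple voting weight -- a stand-in for
        # the WAVE optimal weight vector of Kim et al. (2011), which is
        # derived from the ensemble's instance-by-classifier performance
        # matrix rather than from per-member accuracy alone.
        aug_oob = np.hstack([X[oob], lda.transform(X[oob])])
        weight = (tree.predict(aug_oob) == y[oob]).mean()
        ensemble.append((lda, tree, weight))
    return ensemble

def predict_weighted(ensemble, classes, X_new):
    # Weight-adjusted voting: each member's vote counts in proportion to
    # its weight; equal weights reduce this to plurality voting.
    votes = np.zeros((len(X_new), len(classes)))
    for lda, tree, weight in ensemble:
        pred = tree.predict(np.hstack([X_new, lda.transform(X_new)]))
        for j, c in enumerate(classes):
            votes[:, j] += weight * (pred == c)
    return np.asarray(classes)[votes.argmax(axis=1)]

For example, with X, y, and X_test as NumPy arrays, ens = fit_double_bagging(X, y) followed by predict_weighted(ens, np.unique(y), X_test) classifies new cases; setting every weight to one recovers plurality voting for comparison.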
References
Asuncion, A. and Newman, D. J. (2007). UCI machine learning repository, University of California, Irvine, School of Information and Computer Sciences.

Bauer, E. and Kohavi, R. (1999). An empirical comparison of voting classification algorithms: Bagging, boosting, and variants, Machine Learning, 36, 105-139.

Breiman, L. (1996a). Bagging predictors, Machine Learning, 24, 123-140.

Breiman, L. (1996b). Out-of-bag estimation, Technical Report, Statistics Department, University of California, Berkeley, California 94708, breiman/OOBestimation.pdf.

Breiman, L. (2001). Random forests, Machine Learning, 45, 5-32.

Breiman, L., Friedman, J. H., Olshen, R. A. and Stone, C. J. (1984). Classification and Regression Trees, Chapman and Hall, New York.

Dietterich, T. (2000). Ensemble Methods in Machine Learning, Springer, Berlin.

Efron, B. and Tibshirani, R. (1986). Bootstrap methods for standard errors, confidence intervals, and other measures of statistical accuracy, Statistical Science, 1, 54-75.

Freund, Y. and Schapire, R. (1996). Experiments with a new boosting algorithm, In Proceedings of the Thirteenth International Conference on Machine Learning, 96, 148-156.

Heinz, G., Peterson, L. J., Johnson, R. W. and Kerk, C. J. (2003). Exploring relationships in body dimensions, Journal of Statistics Education, 11.

Ho, T. K., Hull, J. J. and Srihari, S. N. (1994). Decision combination in multiple classifier systems, IEEE Transactions on Pattern Analysis and Machine Intelligence, 16, 66-75.

Hothorn, T. and Lausen, B. (2003). Double-bagging: Combining classifiers by bootstrap aggregation, Pattern Recognition, 36, 1303-1309.

Kim, H. and Loh, W. Y. (2001). Classification trees with unbiased multiway splits, Journal of the American Statistical Association, 96, 589-604.

Kim, H. and Loh, W. Y. (2003). Classification trees with bivariate linear discriminant node models, Journal of Computational and Graphical Statistics, 12, 512-530.

Kim, H., Kim, H., Moon, H. and Ahn, H. (2011). A weight-adjusted voting algorithm for ensembles of classifiers, Journal of the Korean Statistical Society, 40, 437-449.

Liaw, A. and Wiener, M. (2002). Classification and regression by randomForest, R News, 2, 18-22.

Loh, W. Y. (2009). Improving the precision of classification trees, The Annals of Applied Statistics, 3, 1710-1737.

Opitz, D. and Maclin, R. (1999). Popular ensemble methods: An empirical study, Journal of Artificial Intelligence Research, 11, 169-198.

Oza, N. C. and Tumer, K. (2008). Classifier ensembles: Select real-world applications, Information Fusion, 9, 4-20.

Skurichina, M. and Duin, R. P. (1998). Bagging for linear classifiers, Pattern Recognition, 31, 909-930.

Statlib (2010). Datasets archive, Carnegie Mellon University, Department of Statistics.

Terhune, J. M. (1994). Geographical variation of harp seal underwater vocalisations, Canadian Journal of Zoology, 72, 892-897.

Therneau, T. and Atkinson, E. (1997). An introduction to recursive partitioning using the RPART routines, Mayo Foundation, Rochester, Minnesota.

Tumer, K. and Oza, N. C. (2003). Input decimated ensembles, Pattern Analysis and Applications, 6, 65-77.

Zhu, J., Zou, H., Rosset, S. and Hastie, T. (2009). Multi-class AdaBoost, Statistics and Its Interface, 2, 349-360.