Double-Bagging Ensemble Using WAVE

Kim, Ahhyoun; Kim, Minji; Kim, Hyunjoong

  • Received : 2014.06.08
  • Accepted : 2014.07.29
  • Published : 2014.09.30


A classification ensemble method aggregates different classifiers obtained from training data to classify new data points. Voting algorithms are typical tools for summarizing the outputs of the classifiers in an ensemble. WAVE, proposed by Kim et al. (2011), is a weight-adjusted voting algorithm that combines the classifiers in an ensemble with an optimal weight vector. In this study, we applied the WAVE algorithm to ensembles constructed by the double-bagging method (Hothorn and Lausen, 2003) to see whether a significant improvement in performance can be achieved. The results showed that double-bagging with the WAVE algorithm performs better than other ensemble methods that employ plurality voting. In addition, double-bagging with the WAVE algorithm is comparable to the random forest ensemble method when the ensemble size is large.
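The distinction between plurality voting and weighted voting can be sketched as follows. In plurality voting every ensemble member contributes one vote; in weighted voting each member's vote is scaled by a weight. This is a minimal illustrative sketch, not the paper's implementation: the weights below are hypothetical per-classifier values, whereas WAVE derives its optimal weight vector from an eigen-decomposition of a classifier performance matrix (Kim et al., 2011).

```python
import numpy as np

def plurality_vote(preds, n_classes):
    """Plurality voting: preds is a (k, n) integer array holding the
    predicted labels of k classifiers on n instances; each classifier
    contributes one vote per instance."""
    k, n = preds.shape
    counts = np.zeros((n_classes, n))
    for labels in preds:
        counts[labels, np.arange(n)] += 1  # one vote per classifier
    return counts.argmax(axis=0)

def weighted_vote(preds, weights, n_classes):
    """Weighted voting: classifier j's vote counts as weights[j]
    instead of 1 (WAVE chooses these weights optimally)."""
    k, n = preds.shape
    scores = np.zeros((n_classes, n))
    for w, labels in zip(weights, preds):
        scores[labels, np.arange(n)] += w  # weight-scaled vote
    return scores.argmax(axis=0)

# Hypothetical example: 3 classifiers, 3 instances, 2 classes.
preds = np.array([[0, 1, 1],
                  [0, 1, 0],
                  [1, 0, 0]])
# Illustrative weights (assumed, e.g. normalized held-out accuracies).
weights = np.array([0.6, 0.25, 0.15])

print(plurality_vote(preds, 2))          # [0 1 0]
print(weighted_vote(preds, weights, 2))  # [0 1 1]
```

On the third instance the two schemes disagree: two low-weight classifiers outvote one high-weight classifier under plurality voting, while the weighted scheme sides with the more trusted classifier.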


Ensemble; double-bagging; voting; classification; discriminant analysis; cross-validation


  1. Bauer, E. and Kohavi, R. (1999). An empirical comparison of voting classification algorithms: Bagging, boosting, and variants, Machine Learning, 36, 105-139.
  2. Asuncion, A. and Newman, D. J. (2007). UCI machine learning repository, University of California, Irvine, School of Information and Computer Sciences.
  3. Breiman, L. (1996a). Bagging predictors, Machine Learning, 24, 123-140.
  4. Breiman, L. (1996b). Out-of-bag estimation, Technical Report, Statistics Department, University of California Berkeley, Berkeley, California 94708, breiman/OOBestimation.pdf.
  5. Breiman, L. (2001). Random forests, Machine Learning, 45, 5-32.
  6. Breiman, L., Friedman, J. H., Olshen, R. A. and Stone, C. J. (1984). Classification and Regression Trees, Chapman and Hall, New York.
  7. Dietterich, T. (2000). Ensemble Methods in Machine Learning, Springer, Berlin.
  8. Kim, H., Kim, H., Moon, H. and Ahn, H. (2011). A weight-adjusted voting algorithm for ensembles of classifiers, Journal of the Korean Statistical Society, 40, 437-449.
  9. Efron, B. and Tibshirani, R. (1986). Bootstrap methods for standard errors, confidence intervals, and other measures of statistical accuracy, Statistical Science, 1, 54-75.
  10. Freund, Y. and Schapire, R. (1996). Experiments with a new boosting algorithm, In Proceedings of the Thirteenth International Conference on Machine Learning, 96, 148-156.
  11. Heinz, G., Peterson, L. J., Johnson, R. W. and Kerk, C. J. (2003). Exploring relationships in body dimensions, Journal of Statistics Education, 11.
  12. Ho, T. K., Hull, J. J. and Srihari, S. N. (1994). Decision combination in multiple classifier systems, IEEE Transactions on Pattern Analysis and Machine Intelligence, 16, 66-75.
  13. Hothorn, T. and Lausen, B. (2003). Double-bagging: Combining classifiers by bootstrap aggregation, Pattern Recognition, 36, 1303-1309.
  14. Kim, H. and Loh, W. Y. (2001). Classification trees with unbiased multiway splits, Journal of the American Statistical Association, 96, 589-604.
  15. Kim, H. and Loh, W. Y. (2003). Classification trees with bivariate linear discriminant node models, Journal of Computational and Graphical Statistics, 12, 512-530.
  16. Liaw, A. and Wiener, M. (2002). Classification and regression by randomForest, R News, 2, 18-22.
  17. Loh, W. Y. (2009). Improving the precision of classification trees, The Annals of Applied Statistics, 3, 1710-1737.
  18. Opitz, D. and Maclin, R. (1999). Popular ensemble methods: An empirical study, Journal of Artificial Intelligence Research, 11, 169-198.
  19. Oza, N. C. and Tumer, K. (2008). Classifier ensembles: Select real-world applications, Information Fusion, 9, 4-20.
  20. Therneau, T. and Atkinson, E. (1997). An introduction to recursive partitioning using the RPART routines, Mayo Foundation, Rochester, Minnesota.
  21. Skurichina, M. and Duin, R. P. (1998). Bagging for linear classifiers, Pattern Recognition, 31, 909-930.
  22. Statlib (2010). Datasets archive, Carnegie Mellon University, Department of Statistics.
  23. Terhune, J. M. (1994). Geographical variation of harp seal underwater vocalisations, Canadian Journal of Zoology, 72, 892-897.
  24. Tumer, K. and Oza, N. C. (2003). Input decimated ensembles, Pattern Analysis and Applications, 6, 65-77.
  25. Zhu, J., Zou, H., Rosset, S. and Hastie, T. (2009). Multi-class AdaBoost, Statistics and Its Interface, 2, 349-360.


Supported by : Yonsei University