Comparison of ensemble pruning methods using Lasso-bagging and WAVE-bagging

  • Kwak, Seungwoo (Department of Applied Statistics, Yonsei University) ;
  • Kim, Hyunjoong (Department of Applied Statistics, Yonsei University)
  • Received : 2014.08.15
  • Accepted : 2014.10.17
  • Published : 2014.11.30

Abstract

Classification ensemble is a technique that combines diverse classifiers to improve classification accuracy. An ensemble is known to be successful when the classifiers that participate in it are accurate and diverse. In practice, however, an ensemble often includes less accurate and mutually similar classifiers alongside accurate and diverse ones. Ensemble pruning methods therefore construct an ensemble by choosing only accurate and diverse classifiers. In this article, we propose an ensemble pruning method called WAVE-bagging and compare its results with those of an existing pruning method called Lasso-bagging. An extensive empirical comparison on 26 real datasets shows that WAVE-bagging performs better than Lasso-bagging.
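To illustrate the Lasso-bagging idea described above, the sketch below prunes a bagged tree ensemble by regressing the true labels on the base classifiers' validation predictions with an L1 penalty, keeping only the classifiers whose coefficients survive. This is a minimal sketch using scikit-learn; the synthetic dataset, the penalty value `alpha=0.01`, and the simple majority vote at the end are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

# Synthetic binary classification data (stand-in for a real dataset).
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# 1. Build a bagging ensemble of decision trees.
bag = BaggingClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)

# 2. Each column of P holds one base classifier's validation predictions.
P = np.column_stack([est.predict(X_val) for est in bag.estimators_])

# 3. Lasso regression of the true labels on the prediction matrix;
#    the L1 penalty drives redundant classifiers' coefficients to zero.
lasso = Lasso(alpha=0.01).fit(P, y_val)
kept = np.flatnonzero(lasso.coef_)   # indices of retained classifiers

# 4. The pruned ensemble votes with the surviving classifiers only.
pruned_vote = (P[:, kept].mean(axis=1) > 0.5).astype(int)
```

In practice the Lasso would be fit on held-out data separate from the final test set, and the penalty strength chosen by cross-validation.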

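The weight computation behind WAVE-bagging can be conveyed with a simplified power-iteration sketch: instances that many currently well-weighted classifiers misclassify receive larger weights, and classifiers that do well on heavily weighted instances receive larger weights. The 0/1 accuracy matrix, iteration count, and normalization below are illustrative assumptions; see Kim et al. (2011) for the exact weight-adjusted voting algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical accuracy matrix: A[i, j] = 1 when base classifier j
# classifies instance i correctly (15 instances, 6 classifiers).
A = (rng.random((15, 6)) < 0.7).astype(float)

q = np.full(15, 1 / 15)            # instance weights, uniform start
for _ in range(200):               # alternate updates until near convergence
    w = A.T @ q                    # reward accuracy on heavily weighted instances
    w /= w.sum()
    q = (1 - A) @ w                # up-weight frequently misclassified instances
    q /= q.sum()

ranking = np.argsort(w)[::-1]      # classifiers ordered by WAVE-style weight
```

For pruning, one would keep only the classifiers with the largest weights and combine them by weighted voting.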
Acknowledgement

Supported by: National Research Foundation of Korea (한국연구재단)

References

  1. Asuncion, A. and Newman, D. J. (2007). UCI machine learning repository. University of California, Irvine, School of Information and Computer Sciences, http://archive.ics.uci.edu/ml.
  2. Breiman, L., Friedman, J., Olshen, R. and Stone, C. (1984). Classification and regression trees, Chapman and Hall, New York.
  3. Breiman, L. (1996). Bagging predictors. Machine Learning, 24, 123-140.
  4. Breiman, L. (2001). Random forests. Machine Learning, 45, 5-32. https://doi.org/10.1023/A:1010933404324
  5. Chen, K. and Jin, Y. (2010). An ensemble learning algorithm based on lasso selection. IEEE International Conference on Intelligent Computing and Intelligent Systems (ICIS), 1, 617-620.
  6. Dietterich, T. G. (2000). Ensemble methods in machine learning, Springer, Berlin.
  7. Freund, Y. and Schapire, R. (1997). A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55, 119-139. https://doi.org/10.1006/jcss.1997.1504
  8. Heinz, G., Peterson, L. J., Johnson, R. W. and Kerk, C. J. (2003). Exploring relationships in body dimensions. Journal of Statistics Education, 11, http://www.amstat.org/publications/jse/v11n2/datasets.heinz.html.
  9. Kim, A., Kim, J. and Kim, H. (2012). The guideline for choosing the right-size of tree for boosting algorithm. Journal of the Korean Data & Information Science Society, 23, 949-959. https://doi.org/10.7465/jkdi.2012.23.5.949
  10. Kim, H., Kim, H., Moon, H. and Ahn, H. (2011). A weight-adjusted voting algorithm for ensemble of classifiers. Journal of the Korean Statistical Society, 40, 437-449. https://doi.org/10.1016/j.jkss.2011.03.002
  11. Kuncheva, L. (2004). Combining pattern classifiers: Methods and algorithms, Wiley, New Jersey.
  12. Kuncheva, L. (2005). Diversity in multiple classifier systems. Information Fusion, 6, 3-4. https://doi.org/10.1016/j.inffus.2004.04.009
  13. Loh, W.-Y. (2009). Improving the precision of classification trees. The Annals of Applied Statistics, 3, 1710-1737.
  14. Rokach, L. (2009). Taxonomy for characterizing ensemble methods in classification tasks: A review and annotated bibliography. Computational Statistics and Data Analysis, 53, 4046-4072. https://doi.org/10.1016/j.csda.2009.07.017
  15. Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society B, 58, 267-288.
  16. Zhou, Z. H. and Tang, W. (2003). Selective ensemble of decision trees. Lecture Notes in Computer Science, 2639, 476-483.

Cited by

  1. Tree size determination for classification ensemble vol.27, pp.1, 2016, https://doi.org/10.7465/jkdi.2016.27.1.255