Asuncion, A. and Newman, D. J. (2007). UCI Machine Learning Repository [http://www.ics.uci.edu/mlearn/MLRepository.html]. Irvine, CA: University of California, School of Information and Computer Science.
Breiman, L. (1998). Arcing classifiers (with discussion), Annals of Statistics, 26, 801-849.
Breiman, L., Friedman, J., Olshen, R. and Stone, C. (1984). Classification and Regression Trees, Chapman & Hall, New York.
Freund, Y. and Schapire, R. E. (1997). A decision-theoretic generalization of on-line learning and an application to boosting, Journal of Computer and System Sciences, 55, 119-139.
Hastie, T., Tibshirani, R. and Friedman, J. (2001). The Elements of Statistical Learning, Springer-Verlag, New York.
Kohavi, R. (1996). Scaling up the accuracy of naive-Bayes classifiers: A decision-tree hybrid, Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, 202-207.
Kuncheva, L. I. (2004). Classifier ensembles for changing environments, Proceedings of the 5th International Workshop on Multiple Classifier Systems, 1-15.
Quinlan, J. R. (1993). C4.5: Programs for Machine Learning, Morgan Kaufmann, San Mateo, CA.
Rudin, C., Daubechies, I. and Schapire, R. E. (2004). The dynamics of AdaBoost: cyclic behavior and convergence of margins, Journal of Machine Learning Research, 5, 1557-1595.
Street, W. N. and Kim, Y. S. (2001). A streaming ensemble algorithm (SEA) for large scale classification, Proceedings of the 7th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 377-382.
Wang, H., Fan, W., Yu, P. S. and Han, J. (2003). Mining concept drifting data streams using ensemble classifiers, Proceedings of the 9th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 226-235.
Yeon, K., Choi, H., Yoon, Y. J. and Song, M. S. (2005). Model based ensemble learning for tracking concept drift, Proceedings of the 55th Session of the International Statistical Institute.