Classification of High Dimensionality Data through Feature Selection Using Markov Blanket
 Title & Authors
Lee, Junghye; Jun, Chi-Hyuck;
 Abstract
A classification task requires exponentially growing computation time and numbers of observations as the variable dimensionality increases, so reducing the dimensionality of the data is essential when the number of observations is limited. Dimensionality reduction or feature selection often leads to better classification performance than using all features. In this paper, we study the possibility of utilizing Markov blanket discovery algorithms as a new feature selection method. The Markov blanket of a target variable is the minimal set of variables that, in a Bayesian network, renders the target conditionally independent of all remaining variables. We apply several Markov blanket discovery algorithms to high-dimensional categorical and continuous data sets, and compare their classification performance with that of other feature selection methods using well-known classifiers.
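The abstract describes Markov blanket discovery as the basis for feature selection. As an illustration of the general idea only (not of the specific algorithms evaluated in the paper, such as HITON), the sketch below implements a minimal IAMB-style grow/shrink procedure for discrete data; the plug-in conditional mutual information estimator and the threshold `eps` stand in for a proper statistical conditional-independence test, and all names and parameters are illustrative.

```python
from collections import Counter
import math

def cmi(data, x, y, z):
    """Plug-in estimate of conditional mutual information I(x; y | z).

    data: list of dicts mapping variable name -> discrete value.
    z: tuple of conditioning variable names (may be empty).
    """
    n = len(data)
    cz, cxz, cyz, cxyz = Counter(), Counter(), Counter(), Counter()
    for row in data:
        zv = tuple(row[v] for v in z)
        cz[zv] += 1
        cxz[row[x], zv] += 1
        cyz[row[y], zv] += 1
        cxyz[row[x], row[y], zv] += 1
    # I(X;Y|Z) = sum over (x,y,z) of p(x,y,z) log[p(x,y,z) p(z) / (p(x,z) p(y,z))]
    return sum(
        (nxyz / n) * math.log(nxyz * cz[zv] / (cxz[xv, zv] * cyz[yv, zv]))
        for (xv, yv, zv), nxyz in cxyz.items()
    )

def iamb_markov_blanket(data, target, variables, eps=0.02):
    """IAMB-style Markov blanket discovery sketch: grow, then shrink."""
    mb = []
    # Growing phase: greedily add the variable most strongly associated
    # with the target given the current candidate blanket.
    while True:
        rest = [v for v in variables if v not in mb]
        if not rest:
            break
        scores = {v: cmi(data, v, target, tuple(mb)) for v in rest}
        best = max(scores, key=scores.get)
        if scores[best] <= eps:
            break
        mb.append(best)
    # Shrinking phase: remove false positives that became conditionally
    # independent of the target given the rest of the blanket.
    for v in list(mb):
        if cmi(data, v, target, tuple(w for w in mb if w != v)) <= eps:
            mb.remove(v)
    return mb
```

For example, on synthetic rows where `Y` simply copies `X1` and `X2` is independent noise, `iamb_markov_blanket(data, 'Y', ['X1', 'X2'])` returns `['X1']`: `X2` never enters the blanket because its conditional mutual information with `Y` given `X1` is zero.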
 Keywords
Feature Selection; Classification; High Dimensionality Data; Markov Blanket
 Language
English
 Cited by
1.
Using k-dependence causal forest to mine the most significant dependency relationships among clinical variables for thyroid disease diagnosis, PLOS ONE, 2017, 12(8), e0182070
 References
1.
Aliferis, C. F., Tsamardinos, I., and Statnikov, A. (2003a), HITON: a novel Markov blanket algorithm for optimal variable selection, American Medical Informatics Association Annual Symposium Proceedings, 21-25.

2.
Aliferis, C. F., Tsamardinos, I., Statnikov, A., and Brown, L. E. (2003b), Causal explorer: a causal probabilistic network learning toolkit for biomedical discovery, METMBS Conference, 3, 371-376.

3.
Ding, C. (2002), Analysis of gene expression profiles: class discovery and leaf ordering, Proceedings of the 6th Annual International Conference on Computational Biology, 127-136.

4.
Ding, C. and Peng, H. (2005), Minimum redundancy feature selection from microarray gene expression data, Journal of Bioinformatics and Computational Biology, 3(2), 185-205.

5.
Fix, E. and Hodges, J. L. (1989), Discriminatory analysis-nonparametric discrimination: consistency properties, International Statistical Review, 57(3), 238-247.

6.
Fu, S. and Desmarais, M. C. (2008), Tradeoff analysis of different Markov blanket local learning approaches, in: Washio, T. Suzuki, E., Ting, K. M., Inokuchi, A. (Eds.), Advances in Knowledge Discovery and Data Mining, Springer, Osaka, 562-571.

7.
Fu, S. and Desmarais, M. C. (2010), Markov blanket based feature selection: a review of past decade, Proceedings of the World Congress on Engineering, 1, 321-328.

8.
Fukunaga, K. (1990), Introduction to Statistical Pattern Recognition, second ed. Academic Press, San Diego.

9.
Guyon, I. and Elisseeff, A. (2003), An introduction to variable and feature selection, Journal of Machine Learning Research, 3, 1157-1182.

10.
Guyon, I., Aliferis, C., Cooper, G., Elisseeff, A., Pellet, J. P., Spirtes, P., and Statnikov, A. (2011), Causality workbench, in: Illari, P. M., Russo, F., Williamson, J. (Eds.), Causality in the Sciences, Oxford University Press, Oxford.

11.
Hall, M. A. (1999), Correlation-based feature selection for machine learning, Unpublished doctoral dissertation, University of Waikato, Hamilton, New Zealand.

12.
Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reutemann, P., and Witten, I. H. (2009), The WEKA data mining software: an update, ACM SIGKDD Explorations Newsletter, 11(1), 10-18.

13.
Koller, D. and Sahami, M. (1996), Toward optimal feature selection, Proceedings of the 13th International Conference on Machine Learning, 284-292.

14.
Li, J. and Liu, H. (2002), Kent ridge bio-medical data set repository, Institute for Infocomm Research, http://sdmc.lit.org.sg/GEDatasets/Datasets.html.

15.
McDonald, J. H. (2009), Handbook of Biological Statistics, second ed. Sparky House Publishing, Baltimore.

16.
Pearl, J. (1988), Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, second ed., Morgan Kaufmann Publishers, Inc., San Francisco.

17.
Pena, J. M., Nilsson, R., Bjorkegren, J., and Tegner, J. (2007), Towards scalable and data efficient learning of Markov boundaries, International Journal of Approximate Reasoning, 45(2), 211-232.

18.
Saeys, Y., Inza, I., and Larranaga, P. (2007), A review of feature selection techniques in bioinformatics, Bioinformatics, 23(19), 2507-2517.

19.
Tsamardinos, I., Aliferis, C. F., and Statnikov, A. (2003a), Algorithms for large scale Markov blanket discovery, Proceedings of the 16th International FLAIRS Conference, AAAI Press, 376-381.

20.
Tsamardinos, I., Aliferis, C. F., and Statnikov, A. (2003b), Time and sample efficient discovery of Markov blankets and direct causal relations, Proceedings of the 9th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 673-678.

21.
Van Harmelen, F., Lifschitz, V., and Porter, B. (2008), Handbook of Knowledge Representation, first ed. Elsevier, Amsterdam.

22.
Cortes, C. and Vapnik, V. (1995), Support-vector networks, Machine Learning, 20(3), 273-297.

23.
Zeng, Y., Luo, J., and Lin, S. (2009), Classification using Markov blanket for feature selection, IEEE International Conference on Granular Computing, 743-747.

24.
Zhang, H. (2004), The optimality of naive Bayes, Proceedings of the 17th International FLAIRS Conference, 1, 3-9.