
Variable Selection in Normal Mixture Model Based Clustering under Heteroscedasticity

Kim, Seung-Gu

  • Received : 2011.09
  • Accepted : 2011.09
  • Published : 2011.12.31

Abstract

In high-dimensional settings where the number of variables greatly exceeds the number of observations, the noninformative variables must be removed in order to cluster the observations. Most model-based approaches to variable selection have been considered under the assumption of homoscedasticity, and their models are mainly estimated by penalized likelihood methods. In this paper, a different approach is proposed that effectively removes the noninformative variables and simultaneously clusters the observations based on a modified normal mixture model. The validity of the model is established and an EM algorithm is derived to estimate its parameters. Simulation studies and an experiment on a real microarray dataset demonstrate the effectiveness of the proposed method.
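The abstract outlines clustering with a normal mixture model under heteroscedasticity while screening out noninformative variables through an EM algorithm. The paper's modified mixture model is not reproduced on this page, so the block below is only a minimal sketch: a diagonal-covariance (variable-wise heteroscedastic) normal mixture fitted by standard EM, followed by a simple post-hoc informativeness score for each variable. The function names em_diag_gmm and score_variables, the score definition, and the simulated data are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not the paper's exact model): EM for a normal mixture with
# diagonal, cluster-specific (heteroscedastic) variances, plus a heuristic
# per-variable informativeness score read off from the fitted parameters.
import numpy as np

def em_diag_gmm(X, K, n_iter=100, seed=0):
    """Fit a K-component normal mixture with diagonal, unequal covariances
    by EM. Returns mixing weights, means, variances, and responsibilities."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    pi = np.full(K, 1.0 / K)                       # mixing proportions
    mu = X[rng.choice(n, K, replace=False)]        # K x p cluster means
    var = np.tile(X.var(axis=0), (K, 1)) + 1e-6    # K x p cluster variances

    for _ in range(n_iter):
        # E-step: log N(x_i | mu_k, diag(var_k)) for every observation/cluster
        log_dens = -0.5 * (
            np.log(2 * np.pi * var).sum(axis=1)[None, :]
            + (((X[:, None, :] - mu[None, :, :]) ** 2) / var[None, :, :]).sum(axis=2)
        )
        log_resp = np.log(pi)[None, :] + log_dens
        log_resp -= log_resp.max(axis=1, keepdims=True)   # numerical stability
        resp = np.exp(log_resp)
        resp /= resp.sum(axis=1, keepdims=True)            # n x K responsibilities

        # M-step: responsibility-weighted updates of pi, mu, and variances
        nk = resp.sum(axis=0) + 1e-10
        pi = nk / n
        mu = (resp.T @ X) / nk[:, None]
        var = (resp.T @ (X ** 2)) / nk[:, None] - mu ** 2
        var = np.maximum(var, 1e-6)                         # guard against degeneracy
    return pi, mu, var, resp

def score_variables(mu, var):
    """Heuristic score: spread of cluster means relative to the average
    within-cluster variance. Noise (noninformative) variables score near zero."""
    return mu.var(axis=0) / var.mean(axis=0)

# Usage: 2 informative variables plus 8 pure-noise variables, 2 clusters
rng = np.random.default_rng(1)
X = np.vstack([
    np.hstack([rng.normal(0, 1, (100, 2)), rng.normal(0, 1, (100, 8))]),
    np.hstack([rng.normal(3, 2, (100, 2)), rng.normal(0, 1, (100, 8))]),
])
pi, mu, var, resp = em_diag_gmm(X, K=2)
print(np.round(score_variables(mu, var), 2))   # first two scores dominate
```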

Keywords

Informative variables; variable selection; clustering; EM algorithm; microarray gene expression

References

  1. Golub, T. R., Slonim, D. K., Tamayo, P., Huard, C., Gaasenbeek, M., Mesirov, J. P., Coller, H., Loh, M. L., Downing, J. R., Caligiuri, M. A. and Bloomfield, C. D. (1999). Molecular classification of cancer: Class discovery and class prediction by gene expression monitoring, Science, 286, 531-537. https://doi.org/10.1126/science.286.5439.531
  2. Kim, S.-G. (2006). Use of factor analyzer normal mixture model with mean pattern modeling on clustering genes, Communications of the Korean Statistical Society, 13, 113-123. (Korean with English abstract) https://doi.org/10.5351/CKSS.2006.13.1.113
  3. McLachlan, G. J., Bean, R. W. and Jones, B.-T. (2006). A simple implementation of a normal mixture approach to differential gene expression in multiclass microarrays, Bioinformatics, 22, 1608-1615. https://doi.org/10.1093/bioinformatics/btl148
  4. McLachlan, G. J. and Peel, D. (2000). Finite Mixture Models, John Wiley & Sons.
  5. Meng, X.-L. and Rubin, D. (1993). Maximum likelihood estimation via the ECM algorithm: A general framework, Biometrika, 80, 267-278. https://doi.org/10.1093/biomet/80.2.267
  6. Ng, S. K., McLachlan, G. J., Wang, K., Ben-Tovim Jones, L. and Ng, S. W. (2006). A mixture model with random-effects components for clustering correlated gene-expression profiles, Bioinformatics, 22, 1745-1752. https://doi.org/10.1093/bioinformatics/btl165
  7. Pan, W. and Shen, X. (2007). Penalized model-based clustering with application to variable selection, Journal of Machine Learning Research, 8, 1145-1164.
  8. Raftery, A. E. and Dean, N. (2006). Variable selection for model-based clustering, Journal of the American Statistical Association, 101, 168-178. https://doi.org/10.1198/016214506000000113
  9. Schwarz, G. (1978). Estimating the dimension of a model, Annals of Statistics, 6, 461-464. https://doi.org/10.1214/aos/1176344136
  10. Wang, S. and Zhu, J. (2008). Variable selection for model-based high-dimensional clustering and its application to microarray data, Biometrics, 64, 440-448.
  11. Xie, B., Pan, W. and Shen, X. (2008). Variable selection in penalized model-based clustering via regularization on grouped parameters, Biometrics, 64, 921-930. https://doi.org/10.1111/j.1541-0420.2007.00955.x

Cited by

  1. A Variable Selection Procedure for K-Means Clustering vol.25, pp.3, 2012, https://doi.org/10.5351/KJAS.2012.25.3.471

Acknowledgement

Supported by: Sangji University