 Title & Authors
BCDR algorithm for network estimation based on pseudo-likelihood with parallelization using GPU
Kim, Byungsoo; Yu, Donghyeon
A graphical model represents conditional dependencies between variables as a graph with nodes and edges. It is widely used in various fields, including physics, economics, and biology, to describe complex associations. Conditional dependencies can be estimated from an inverse covariance matrix, where zero off-diagonal elements denote conditional independence of the corresponding variables. This paper proposes an efficient BCDR (block coordinate descent with random permutation) algorithm for the CONCORD (convex correlation selection method), which estimates an inverse covariance matrix based on pseudo-likelihood. The proposed algorithm accelerates the existing BCD (block coordinate descent) algorithm by randomly permuting the update order and parallelizing the block updates on graphics processing units. We conduct numerical studies on two network structures to demonstrate the efficiency of the proposed algorithm for the CONCORD in terms of computation time.
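To make the update rules concrete, here is a minimal sketch of coordinatewise CONCORD estimation in plain NumPy. The closed-form diagonal and soft-thresholded off-diagonal updates follow Khare et al. (2015); each sweep visits the entries of the estimate in a freshly drawn random permutation, which is the "random permutation" ingredient of BCDR. This is only a single-threaded CPU illustration: the function names and defaults are ours, the updates are entrywise rather than blockwise, and the paper's actual contribution, parallelizing block updates on a GPU, is not reproduced here.

```python
import numpy as np

def soft_threshold(x, lam):
    """Soft-thresholding operator: sign(x) * max(|x| - lam, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def concord_cd(X, lam, n_iter=100, tol=1e-5, rng=None):
    """Coordinate descent for the CONCORD pseudo-likelihood objective.

    X is an n-by-p data matrix (assumed column-centered); lam is the
    l1 penalty on off-diagonal entries. Each sweep updates the entries
    of Omega in a freshly drawn random permutation.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, p = X.shape
    S = X.T @ X / n                  # sample covariance matrix
    omega = np.eye(p)                # initial estimate of Omega

    # all entries to update: diagonal (i, i) and upper triangle (i, j), i < j
    pairs = [(i, i) for i in range(p)] + \
            [(i, j) for i in range(p) for j in range(i + 1, p)]

    for _ in range(n_iter):
        max_change = 0.0
        for k in rng.permutation(len(pairs)):    # random update order
            i, j = pairs[k]
            old = omega[i, j]
            if i == j:
                # diagonal: positive root of s_ii * w^2 + a * w - 1 = 0
                a = S[i] @ omega[:, i] - S[i, i] * omega[i, i]
                omega[i, i] = (-a + np.sqrt(a * a + 4.0 * S[i, i])) / (2.0 * S[i, i])
            else:
                # off-diagonal: soft-threshold the partial residual,
                # excluding the current entry's own contribution
                r = S[i] @ omega[:, j] - S[i, i] * omega[i, j] \
                    + S[j] @ omega[:, i] - S[j, j] * omega[i, j]
                omega[i, j] = omega[j, i] = soft_threshold(-r, lam) / (S[i, i] + S[j, j])
            max_change = max(max_change, abs(omega[i, j] - old))
        if max_change < tol:
            break
    return omega

# usage sketch on synthetic data
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 50))
X -= X.mean(axis=0)                  # center columns
omega_hat = concord_cd(X, lam=0.2)
```

In the GPU version proposed in the paper, the off-diagonal updates within a block are computed in parallel rather than sequentially as above, which is where the speedup over plain BCD comes from.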
Keywords: BCD algorithm; graphical model; graphics processing unit; pseudo-likelihood; random permutation
References
Banerjee, O., Ghaoui, L. E. and d'Aspremont, A. (2008). Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. Journal of Machine Learning Research, 9, 485-516.

Barabási, A.-L. and Albert, R. (1999). Emergence of scaling in random networks. Science, 286, 509-512.

Boyd, S. and Vandenberghe, L. (2004). Convex optimization, Cambridge University Press, New York.

Cai, T., Liu, W. D. and Luo, X. (2011). A constrained $\ell_1$ minimization approach to sparse precision matrix estimation. Journal of the American Statistical Association, 106, 594-607.

Candes, E. J. and Tao, T. (2007). The Dantzig selector: Statistical estimation when p is much larger than n. Annals of Statistics, 35, 2313-2351.

Candes, E. J. and Plan, Y. (2011). A probabilistic and RIPless theory of compressed sensing. IEEE Transactions on Information Theory, 57, 7235-7254.

Dong, H., Luo, L., Hong, S., Siu, H., Xiao, Y., Jin, L., Chen, R. and Xiong, M. (2010). Integrated analysis of mutations, miRNA and mRNA expression in glioblastoma. BMC Systems Biology, 4, 1-20.

Drton, M. and Perlman, M. D. (2004). Model selection for Gaussian concentration graphs. Biometrika, 91, 591-602.

Friedman, J., Hastie, T. and Tibshirani, R. (2008). Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9, 432-441.

Fu, W. (1998). Penalized regressions: The bridge vs the lasso. Journal of Computational and Graphical Statistics, 7, 397-416.

Khare, K., Oh, S.-Y. and Rajaratnam, B. (2015). A convex pseudolikelihood framework for high dimensional partial correlation estimation with convergence guarantees. Journal of the Royal Statistical Society B, 77, 803-825.

Kwon, S., Han, S. and Lee, S. (2013). A small review and further studies on the LASSO. Journal of the Korean Data & Information Science Society, 24, 1077-1088.

Lauritzen, S. (1996). Graphical Models. Oxford University Press Inc., New York.

Meinshausen, N. and Bühlmann, P. (2006). High-dimensional graphs and variable selection with the lasso. Annals of Statistics, 34, 1436-1462.

Nesterov, Y. (2012). Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM Journal on Optimization, 22, 341-362.

Pang, H., Liu, H. and Vanderbei, R. (2014). The FASTCLIME package for linear programming and large-scale precision matrix estimation in R. Journal of Machine Learning Research, 15, 489-493.

Peng, J., Wang, P., Zhou, N. and Zhu, J. (2009). Partial correlation estimation by joint sparse regression models. Journal of the American Statistical Association, 104, 735-746.

Shalev-Shwartz, S. and Tewari, A. (2011). Stochastic methods for $\ell_1$-regularized loss minimization. Journal of Machine Learning Research, 12, 1865-1892.

Tang, H., Xiao, G., Behrens, C., Schiller, J., Allen, J., Chow, C. W., Suraokar, M., Corvalan, A., Mao, J., White, M. A., Wistuba, I. I., Minna, J. D. and Xie, Y. (2013). A 12-gene set predicts survival benefits from adjuvant chemotherapy in non-small cell lung cancer patients. Clinical Cancer Research, 19, 1577-1586.

Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society B, 58, 267-288.

Vandenberghe, L., Boyd, S. and Wu, S. P. (1998). Determinant maximization with linear matrix inequality constraints. SIAM Journal on Matrix Analysis and Applications, 19, 499-533.

Witten, D., Friedman, J. and Simon, N. (2011). New insights and faster computations for the graphical lasso. Journal of Computational and Graphical Statistics, 20, 892-900.

Yu, D. and Lim, J. (2013). Introduction to general purpose GPU computing. Journal of the Korean Data & Information Science Society, 24, 1043-1061.

Yuan, M. and Lin, Y. (2007). Model selection and estimation in the Gaussian graphical model. Biometrika, 94, 19-35.