Choosing the Tuning Constant by Laplace Approximation
Ahn, Sung-Mahn; Kwon, Suhn-Beom
The evidence framework enables us to determine the tuning constant in a penalized likelihood formulation. We apply the framework to estimating the parameters of normal mixtures. Evidence, a solely data-dependent measure, can be evaluated by Laplace approximation. A simulation with synthetic data shows that proper values of the tuning constant can be obtained systematically.
Keywords: Penalized likelihood; tuning constant; evidence; Laplace approximation
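The abstract's recipe (maximize the Laplace-approximated evidence over the tuning constant) can be sketched in a toy one-parameter model. This is an illustrative assumption, not the paper's normal-mixture setting: data from N(mu, 1) with a quadratic penalty (lam/2)·mu², i.e. a Gaussian prior mu ~ N(0, 1/lam), where lam plays the role of the tuning constant.

```python
import numpy as np

def log_evidence_laplace(x, lam):
    """Laplace approximation of the log evidence for a toy model:
    x_i ~ N(mu, 1) with penalty (lam/2) * mu^2, i.e. prior mu ~ N(0, 1/lam).
    log Z ~= log p(x | mu_hat) + log p(mu_hat) + (1/2) log(2*pi) - (1/2) log H,
    where mu_hat is the posterior mode and H is the negative second derivative
    of the penalized log posterior at mu_hat.
    """
    n = len(x)
    # Penalized MLE (posterior mode): precision n + lam, mean sum(x)/(n + lam)
    mu_hat = x.sum() / (n + lam)
    # Log-likelihood at the mode
    ll = -0.5 * n * np.log(2 * np.pi) - 0.5 * np.sum((x - mu_hat) ** 2)
    # Log prior (penalty term) at the mode
    lp = 0.5 * np.log(lam / (2 * np.pi)) - 0.5 * lam * mu_hat ** 2
    # Negative Hessian of the penalized log posterior (a scalar here)
    H = n + lam
    return ll + lp + 0.5 * np.log(2 * np.pi) - 0.5 * np.log(H)

# Choose the tuning constant by maximizing the approximate evidence on a grid.
rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.0, size=50)
grid = np.linspace(0.01, 5.0, 200)
best = grid[np.argmax([log_evidence_laplace(x, lam) for lam in grid])]
print(f"tuning constant maximizing evidence: {best:.3f}")
```

For this conjugate toy model the Laplace approximation is exact, which makes it a convenient sanity check; in the paper's mixture setting the same quantity would be evaluated at the penalized-likelihood estimate with the Hessian of the penalized log posterior.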