• Title/Summary/Keyword: weighted mixture model


A Speaker Pruning Method for Real-Time Speaker Identification System

  • Kim, Min-Joung; Suk, Soo-Young; Jeong, Jong-Hyeog / IEMEK Journal of Embedded Systems and Applications / v.10 no.2 / pp.65-71 / 2015
  • It has been known that GMM (Gaussian Mixture Model) based speaker identification systems using ML (Maximum Likelihood) and WMR (Weighting Model Rank) achieve very high performance. However, such systems are not very effective in practical environments, in terms of real-time processing, because of their high computational cost. In this paper, we propose a new speaker-pruning algorithm that effectively reduces the computational cost. In this algorithm, we select the 20% of speaker models with the highest likelihoods on a portion of the input speech and apply MWMR (Modified Weighted Model Rank) to these selected models to identify the speaker. To verify the effectiveness of the proposed algorithm, we performed speaker identification experiments on the TIMIT database. The proposed method reduces processing time by more than 60% compared to the conventional GMM-based system without pruning, while maintaining recognition accuracy.
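The two-stage pruning the abstract describes can be sketched as follows. This is an illustrative reading, not the paper's implementation: the full-utterance rescoring stands in for MWMR, the GMMs use diagonal covariances, and all function names (`gmm_loglik`, `prune_then_identify`) and the 10-frame prefix are assumptions.

```python
import numpy as np

def gmm_loglik(frames, means, covs, weights):
    """Average per-frame log-likelihood under a diagonal-covariance GMM.
    frames: (T, D); means/covs: (M, D); weights: (M,)."""
    diff = frames[:, None, :] - means[None, :, :]                 # (T, M, D)
    exponent = -0.5 * np.sum(diff ** 2 / covs, axis=-1)           # (T, M)
    log_norm = -0.5 * np.sum(np.log(2 * np.pi * covs), axis=-1)   # (M,)
    log_comp = np.log(weights) + log_norm + exponent              # (T, M)
    m = log_comp.max(axis=1, keepdims=True)                       # log-sum-exp
    return float(np.mean(m.squeeze(1) + np.log(np.exp(log_comp - m).sum(axis=1))))

def prune_then_identify(frames, models, keep_frac=0.2, prefix=10):
    """Stage 1: score every speaker GMM on a short prefix of the input
    and keep only the top `keep_frac` of speakers.  Stage 2: rescore the
    survivors on the full utterance (a stand-in for MWMR rescoring)."""
    prefix_scores = {s: gmm_loglik(frames[:prefix], *m) for s, m in models.items()}
    n_keep = max(1, int(round(keep_frac * len(models))))
    survivors = sorted(prefix_scores, key=prefix_scores.get, reverse=True)[:n_keep]
    full_scores = {s: gmm_loglik(frames, *models[s]) for s in survivors}
    return max(full_scores, key=full_scores.get)
```

Because stage 2 only touches the surviving 20% of models, the expensive full-utterance scoring cost drops roughly in proportion, which is the source of the reported speed-up.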

Finding Cost-Effective Mixtures Robust to Noise Variables in Mixture-Process Experiments

  • Lim, Yong B. / Communications for Statistical Applications and Methods / v.21 no.2 / pp.161-168 / 2014
  • In mixture experiments with process variables, we consider the case in which some of the process variables are either uncontrollable or hard to control; these are called noise variables. Given such mixture experimental data with process variables, we first study how to search for candidate models. Good candidate models are screened by a sequential variable selection method and by checking the residual plots for the validity of the model assumptions. Two methods, one using the numerical optimization approach proposed by Derringer and Suich (1980) and the other minimizing a weighted expected loss, are proposed to find a cost-effective robust optimal condition in which both the mean and the variance of the response for each candidate model are well-behaved under the cost restriction on the mixture. The proposed methods are illustrated with the well-known fish patties texture example described by Cornell (2002).
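The weighted-expected-loss idea above can be sketched in a few lines: average a weighted squared-error loss on the predicted mean plus a variance penalty over the noise-variable settings, then pick the cheapest-feasible mixture minimizing it. The interface `predict(x, z) -> (mean, variance)` for a fitted candidate model, and all names and weights here, are hypothetical.

```python
def weighted_expected_loss(mixture, predict, noise_levels, target,
                           w_mean=1.0, w_var=1.0):
    """Average over noise settings z of  w_mean*(mu - target)^2 + w_var*var,
    where (mu, var) come from a fitted candidate model (assumed interface)."""
    losses = []
    for z in noise_levels:
        mu, var = predict(mixture, z)
        losses.append(w_mean * (mu - target) ** 2 + w_var * var)
    return sum(losses) / len(losses)

def best_mixture(candidates, predict, noise_levels, target, cost, budget):
    """Among candidate mixtures satisfying the cost restriction, return
    the one minimizing the weighted expected loss."""
    feasible = [x for x in candidates if cost(x) <= budget]
    return min(feasible,
               key=lambda x: weighted_expected_loss(x, predict, noise_levels, target))
```

The paper's alternative route via Derringer and Suich (1980) desirabilities would replace the loss with a product of per-response desirability scores; the feasibility-then-optimize structure is the same.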

Analytical solution to the conduction-dominated solidification of a binary mixture (열전도에 의해 지배되는 이성분혼합물의 응고문제에 대한 해석해)

  • Jeong, Jae-Dong; Yu, Ho-Seon; No, Seung-Tak; Lee, Jun-Sik / Transactions of the Korean Society of Mechanical Engineers B / v.20 no.11 / pp.3655-3665 / 1996
  • An analytical solution is presented for the conduction-dominated solidification of a binary mixture in a semi-infinite medium. The present approach differs from other solutions in four respects: (1) the solid fraction is determined from the phase diagram, (2) the thermophysical properties in the mushy zone are weighted according to the local solid fraction, (3) non-equilibrium solidification can be simulated, and (4) cooling below the eutectic temperature can be simulated. Until now, almost all analyses have assumed constant properties in the mushy zone and a solid fraction varying linearly with temperature or position. Validation of these assumptions, however, shows that serious errors arise except in a few special cases. The influence of the microscopic model on the macroscopic temperature profile is very small and can be ignored, but the solid fraction and the average solid concentration, which directly influence material quality, change drastically with the microscopic model. An approximate solution using the method of weighted residuals is also introduced and shows good agreement with the analytical solution. All calculations are performed for the NH$_{4}$Cl-H$_{2}$O and Al-Cu systems.

Speech Enhancement Based on Mixture Hidden Filter Model (HFM) Under Nonstationary Noise (혼합 은닉필터모델 (HFM)을 이용한 비정상 잡음에 오염된 음성신호의 향상)

  • 강상기; 백성준; 이기용; 성굉모 / The Journal of the Acoustical Society of Korea / v.21 no.4 / pp.387-393 / 2002
  • An enhancement technique for speech corrupted by noise using a mixture HFM (Hidden Filter Model) is proposed. Given the parameters of the clean signal and the noise, the noisy signal is modeled by a linear state-space model with Markov switching parameters. Estimating the original signal requires estimation of the state vector. The estimation procedure is based on the mixture interacting multiple model (MIMM), and the speech estimator is given by the weighted sum of parallel Kalman filters operating interactively. Simulation results show that the proposed method offers performance gains over previous results with only slightly increased complexity.
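The "weighted sum of parallel Kalman filters" can be sketched with scalar filters: each branch models one noise regime, branch weights are re-scaled by the measurement likelihoods, and the fused estimate is the weight-averaged state. This is a simplified, non-interacting stand-in for the MIMM estimator (a true IMM also mixes the branch states before prediction), and the class and parameter names are assumptions.

```python
import math

class ScalarKalman:
    """One branch: scalar random-walk state with its own noise regime
    (process noise q, measurement noise r)."""
    def __init__(self, q, r, x0=0.0, p0=1.0):
        self.q, self.r, self.x, self.p = q, r, x0, p0

    def step(self, y):
        self.p += self.q                       # predict
        s = self.p + self.r                    # innovation variance
        lik = math.exp(-0.5 * (y - self.x) ** 2 / s) / math.sqrt(2 * math.pi * s)
        k = self.p / s                         # Kalman gain
        self.x += k * (y - self.x)             # update
        self.p *= (1 - k)
        return lik

def fused_estimate(filters, weights, y):
    """Update every branch with the observation, re-weight the branches by
    their measurement likelihoods, and return the weighted-sum estimate."""
    liks = [f.step(y) for f in filters]
    w = [wi * li for wi, li in zip(weights, liks)]
    total = sum(w) or 1.0
    w = [wi / total for wi in w]
    return sum(wi * f.x for wi, f in zip(w, filters)), w
```

Over time the weight mass migrates to the branch whose noise regime best explains the observations, which is what makes the mixture robust to nonstationary noise.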

Semi-Supervised Recursive Learning of Discriminative Mixture Models for Time-Series Classification

  • Kim, Minyoung / International Journal of Fuzzy Logic and Intelligent Systems / v.13 no.3 / pp.186-199 / 2013
  • We pose pattern classification as a density estimation problem where we consider mixtures of generative models under partially labeled data setups. Unlike traditional approaches that estimate density everywhere in data space, we focus on the density along the decision boundary that can yield more discriminative models with superior classification performance. We extend our earlier work on the recursive estimation method for discriminative mixture models to semi-supervised learning setups where some of the data points lack class labels. Our model exploits the mixture structure in the functional gradient framework: it searches for the base mixture component model in a greedy fashion, maximizing the conditional class likelihoods for the labeled data and at the same time minimizing the uncertainty of class label prediction for unlabeled data points. The objective can be effectively imposed as individual mixture component learning on weighted data, hence our mixture learning typically becomes highly efficient for popular base generative models like Gaussians or hidden Markov models. Moreover, apart from the expectation-maximization algorithm, the proposed recursive estimation has several advantages including the lack of need for a pre-determined mixture order and robustness to the choice of initial parameters. We demonstrate the benefits of the proposed approach on a comprehensive set of evaluations consisting of diverse time-series classification problems in semi-supervised scenarios.
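The reduction of the objective to "mixture component learning on weighted data" can be sketched as follows. The weighting is an illustrative reading of the functional-gradient idea, not the paper's exact gradient: labeled points are weighted by how badly the current mixture predicts their class, unlabeled points by the entropy of the class posterior, and a base Gaussian (1-D here) is then fit on the weighted data.

```python
import numpy as np

def point_weights(post, labels):
    """Per-point weights for the next base component.  post: (N, C) array
    of current class posteriors; labels: class index or None per point."""
    w = np.empty(len(labels))
    for i, lab in enumerate(labels):
        if lab is None:
            p = np.clip(post[i], 1e-12, 1.0)
            w[i] = -np.sum(p * np.log(p))   # prediction uncertainty (entropy)
        else:
            w[i] = 1.0 - post[i][lab]       # misclassification margin
    return w

def weighted_gaussian_fit(X, w):
    """Fit the next base Gaussian component on the weighted 1-D data."""
    w = w / w.sum()
    mu = w @ X
    var = w @ (X - mu) ** 2
    return mu, var
```

A greedy loop would alternate these two steps, each round adding the fitted component to the mixture, which is why no mixture order has to be fixed in advance.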

Language Model Adaptation Based on Topic Probability of Latent Dirichlet Allocation

  • Jeon, Hyung-Bae; Lee, Soo-Young / ETRI Journal / v.38 no.3 / pp.487-493 / 2016
  • Two new methods are proposed for unsupervised adaptation of a language model (LM) with a single sentence for automatic transcription tasks. In the training phase, training documents are clustered by latent Dirichlet allocation (LDA), and a domain-specific LM is then trained for each cluster. In the test phase, an adapted LM is formed as a linear mixture of the trained domain-specific LMs. Unlike previous adaptation methods, the proposed methods fully utilize the trained LDA model to estimate the weight values assigned to the domain-specific LMs; therefore, the clustering and weight-estimation algorithms of the trained LDA model are reliable. On continuous speech recognition benchmark tests, the proposed methods outperform other unsupervised LM adaptation methods based on latent semantic analysis, non-negative matrix factorization, and LDA with n-gram counting.
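The linear-mixture step can be sketched at the unigram level: given per-sentence LDA topic probabilities as weights, the adapted probability of each word is the weighted sum of the domain-specific LM probabilities. A unigram sketch only; the paper applies the idea to n-gram LMs, and the function name is an assumption.

```python
def adapt_lm(domain_lms, topic_probs):
    """Adapted LM as a linear mixture of domain-specific unigram LMs,
    with LDA topic probabilities used as the mixture weights.
    domain_lms: list of {word: prob}; topic_probs: list of weights
    summing to 1, one per domain LM."""
    vocab = set().union(*(lm.keys() for lm in domain_lms))
    return {w: sum(t * lm.get(w, 0.0) for t, lm in zip(topic_probs, domain_lms))
            for w in vocab}
```

Because the weights sum to 1 and each domain LM is a proper distribution, the mixture is again a proper distribution, so it can be plugged directly into the recognizer.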

A New Distance Measure for a Variable-Sized Acoustic Model Based on MDL Technique

  • Cho, Hoon-Young; Kim, Sang-Hun / ETRI Journal / v.32 no.5 / pp.795-800 / 2010
  • Embedding a large vocabulary speech recognition system in mobile devices requires a reduced acoustic model obtained by eliminating redundant model parameters. In conventional optimization methods based on the minimum description length (MDL) criterion, a binary Gaussian tree is built at each state of a hidden Markov model by iteratively finding and merging similar mixture components. An optimal subset of the tree nodes is then selected to generate a downsized acoustic model. To obtain a better binary Gaussian tree by improving the process of finding the most similar Gaussian components, this paper proposes a new distance measure that exploits the difference in likelihood values for cases before and after two components are combined. The mixture weight of Gaussian components is also introduced in the component merging step. Experimental results show that the proposed method outperforms MDL-based optimization using either a Kullback-Leibler (KL) divergence or weighted KL divergence measure. The proposed method could also reduce the acoustic model size by 50% with less than a 1.5% increase in error rate compared to a baseline system.
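The "difference in likelihood values before and after two components are combined" admits a closed form for Gaussians: moment-match the merge, then measure the expected log-likelihood drop, which for 1-D components reduces to a log-variance expression. This is a common closed form for this kind of likelihood-difference distance; the paper's exact measure may differ in detail, and the function names are assumptions.

```python
import math

def merge(w1, mu1, var1, w2, mu2, var2):
    """Moment-matched merge of two weighted 1-D Gaussian components."""
    w = w1 + w2
    mu = (w1 * mu1 + w2 * mu2) / w
    var = (w1 * (var1 + (mu1 - mu) ** 2) + w2 * (var2 + (mu2 - mu) ** 2)) / w
    return w, mu, var

def merge_cost(w1, mu1, var1, w2, mu2, var2):
    """Expected log-likelihood decrease from replacing the two components
    with their merge: 0.5 * [(w1+w2) log var_m - w1 log var1 - w2 log var2].
    Note that the mixture weights enter the cost directly."""
    _, _, var_m = merge(w1, mu1, var1, w2, mu2, var2)
    return 0.5 * ((w1 + w2) * math.log(var_m)
                  - w1 * math.log(var1) - w2 * math.log(var2))
```

Building the binary Gaussian tree then amounts to repeatedly merging the pair with the smallest `merge_cost` at each HMM state.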

L1-norm Regularization for State Vector Adaptation of Subspace Gaussian Mixture Model (L1-norm regularization을 통한 SGMM의 state vector 적응)

  • Goo, Jahyun; Kim, Younggwan; Kim, Hoirin / Phonetics and Speech Sciences / v.7 no.3 / pp.131-138 / 2015
  • In this paper, we propose L1-norm regularization for state vector adaptation of the subspace Gaussian mixture model (SGMM). When designing a speaker adaptation system with a GMM-HMM acoustic model, MAP adaptation is the most typical technique considered. However, the MAP adaptation procedure updates a large number of parameters simultaneously. Sparse adaptation, such as L1-norm regularization or sparse MAP, can cope with this, but its performance is not as good as that of MAP adaptation. SGMM, however, suffers less from sparse adaptation than GMM-HMM, because each Gaussian mean vector in SGMM is defined as a weighted sum of basis vectors, which makes it much more robust to parameter fluctuation. Since only a few adaptation techniques are appropriate for SGMM, the proposed method can be powerful, especially when the amount of adaptation data is limited. Experimental results show that the error reduction rate of the proposed method is better than that of MAP adaptation of SGMM, even with little adaptation data.
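L1-regularized adaptation of a state vector can be sketched with proximal gradient descent (ISTA): the adapted mean is M @ v, and the L1 penalty is placed on the *change* from the speaker-independent vector v0, so only a few coordinates move. An illustrative least-squares objective, not the paper's exact SGMM formulation; all names are assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def adapt_state_vector(M, y, v0, lam=0.1, lr=0.5, n_iter=200):
    """ISTA for  min_v 0.5*||y - M v||^2 + lam*||v - v0||_1 :
    gradient step on the quadratic fit term, then soft-threshold the
    deviation from the speaker-independent vector v0."""
    v = v0.copy()
    for _ in range(n_iter):
        grad = M.T @ (M @ v - y)             # gradient of 0.5*||y - M v||^2
        v = v0 + soft_threshold(v - lr * grad - v0, lr * lam)
    return v
```

With little adaptation data the penalty freezes most coordinates at their speaker-independent values, which is the behavior the abstract argues SGMM tolerates well.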

Optimization of Gaussian Mixture in CDHMM Training for Improved Speech Recognition

  • Lee, Seo-Gu; Kim, Sung-Gil; Kang, Sun-Mee; Ko, Han-Seok / Speech Sciences / v.5 no.1 / pp.7-21 / 1999
  • This paper proposes an improved training procedure for speech recognition based on the continuous-density hidden Markov model (CDHMM). Of the three parameters governing the CDHMM (initial state distribution probability, state transition probability, and the output probability density function (p.d.f.) of each state), we focus on the third and propose an efficient algorithm that determines the p.d.f. of each state. It is known that the CDHMM converges to a local maximum of the parameter estimate via the iterative Expectation-Maximization procedure. Specifically, we propose two independent algorithms that can be embedded in the segmental K-means training procedure by replacing relevant key steps: adaptation of the number of Gaussian mixture components, and initialization using previously estimated CDHMM parameters. The proposed adaptation algorithm searches for the optimal number of Gaussian mixture components so that the p.d.f. is consistently re-estimated, enabling the model to converge toward the global maximum. By applying an appropriate threshold to the amount of collective change in the weighted variances, the optimal number of mixture components is determined. The initialization algorithm exploits the previously estimated CDHMM parameters as the basis for the current initial segmentation subroutine; it captures the trend of the previous training history, whereas uniform segmentation discards it. The recognition performance of the proposed adaptation procedure, together with the suggested initialization, is verified to be consistently better than that of the existing training procedure using a fixed number of Gaussian mixture components.
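The stopping criterion based on "collective change of weighted variances" can be sketched as follows. An illustrative reading of the criterion, not the exact published algorithm: each re-estimation pass yields (weight, variance) pairs per component, and the mixture count is frozen once the summed change of the weight-scaled variances drops below a threshold.

```python
def collective_variance_change(prev, curr):
    """Sum of absolute changes in the weight-scaled component variances
    between two re-estimation passes; prev/curr are lists of
    (weight, variance) pairs."""
    return sum(abs(w2 * v2 - w1 * v1)
               for (w1, v1), (w2, v2) in zip(prev, curr))

def adapt_mixture_count(history, threshold=0.05):
    """Walk per-iteration (weight, variance) snapshots and report the
    iteration at which the collective change first falls below the
    threshold -- the point at which the number of mixture components
    would be frozen."""
    for i in range(1, len(history)):
        if collective_variance_change(history[i - 1], history[i]) < threshold:
            return i
    return len(history) - 1
```

Until that point, the training loop would keep splitting or adding Gaussian components at each pass.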


Weighted zero-inflated Poisson mixed model with an application to Medicaid utilization data

  • Lee, Sang Mee; Karrison, Theodore; Nocon, Robert S.; Huang, Elbert / Communications for Statistical Applications and Methods / v.25 no.2 / pp.173-184 / 2018
  • In medical and public health research, it is common to encounter clustered or longitudinal count data that exhibit excess zeros. For example, health care utilization data often have a multi-modal distribution with excess zeros as well as a multilevel structure in which patients are nested within physicians and hospitals. To analyze this type of data, zero-inflated count models with mixed effects have been developed, in which a count response variable is assumed to be distributed as a mixture of a Poisson or negative binomial distribution and a distribution with a point mass at zero, including random effects. However, no study has considered a situation in which data are also censored due to the finite observation or follow-up period. In this paper, we present a weighted version of the zero-inflated Poisson model with random effects that accounts for variable individual follow-up times. We suggest two different types of weight functions. The performance of the proposed model is evaluated and compared to a standard zero-inflated mixed model through simulation studies. The approach is then applied to Medicaid data analysis.
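The weighted zero-inflated Poisson likelihood above can be sketched without the random-effects machinery: each count contributes a ZIP log-likelihood term scaled by an individual weight (e.g. proportional to follow-up time, one of the roles suggested in the paper). The function names and the fixed-effects-only simplification are assumptions.

```python
import math

def zip_loglik(y, pi, lam, w=1.0):
    """Weighted log-likelihood of one count under a zero-inflated Poisson:
    with probability pi the count is a structural zero, otherwise it is
    Poisson(lam).  The weight w scales the contribution."""
    if y == 0:
        ll = math.log(pi + (1 - pi) * math.exp(-lam))
    else:
        ll = math.log(1 - pi) - lam + y * math.log(lam) - math.lgamma(y + 1)
    return w * ll

def total_loglik(data, pi, lam):
    """Sum of weighted contributions over (count, weight) pairs."""
    return sum(zip_loglik(y, pi, lam, w) for y, w in data)
```

Maximizing `total_loglik` over (pi, lam) gives the weighted fixed-effects fit; the paper's model additionally puts random effects inside both the zero-inflation and Poisson parts.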