• Title/Summary/Keyword: maximum likelihood criterion

Likelihood Ratio Criterion for Testing Sphericity from a Multivariate Normal Sample with 2-step Monotone Missing Data Pattern

  • Choi, Byung-Jin
    • Communications for Statistical Applications and Methods / v.12 no.2 / pp.473-481 / 2005
  • The problem of testing the sphericity structure of the covariance matrix of a multivariate normal distribution is introduced for a sample with a 2-step monotone missing data pattern. The maximum likelihood method is described for estimating the parameters on the basis of this sample. Using these estimates, the likelihood ratio criterion for testing sphericity is derived.
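For reference, the sketch below shows the classical complete-data likelihood ratio criterion for sphericity ($H_0$: ${\Sigma}={\sigma}^2 I$). It is an illustrative reconstruction using numpy/scipy; the paper's adjustment of the ML estimates for the 2-step monotone missing data pattern is not reproduced here.

```python
# Illustrative sketch only: the classical complete-data LRT for sphericity.
# The 2-step monotone-missing adjustment from the paper is not shown.
import numpy as np
from scipy.stats import chi2

def sphericity_lrt(X):
    """X: (n, p) sample assumed drawn from a multivariate normal."""
    n, p = X.shape
    S = np.cov(X, rowvar=False, bias=True)               # ML estimate of Sigma
    log_lam = (n / 2.0) * (np.log(np.linalg.det(S))
                           - p * np.log(np.trace(S) / p))
    stat = -2.0 * log_lam                                 # asymptotic chi-square
    df = p * (p + 1) / 2 - 1
    return stat, chi2.sf(stat, df)

rng = np.random.default_rng(0)
stat, pval = sphericity_lrt(rng.normal(size=(200, 4)))
print(f"LRT statistic = {stat:.2f}, p-value = {pval:.3f}")
```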

Avoiding Indefiniteness in Criteria for Maximum Likelihood Bearing Estimation with Arbitrary Array Configuration

  • Suzuki, Masakiyo
    • Proceedings of the IEEK Conference / 2002.07c / pp.1807-1810 / 2002
  • This paper presents a technique for avoiding indefiniteness in Maximum Likelihood (ML) criteria for Direction-of-Arrival (DOA) finding using a sensor array with an arbitrary configuration. The ML criterion has singular points in the solution space where the criterion becomes indefinite. Iterative techniques for ML bearing estimation may oscillate because of the numerical instability caused by this indefiniteness when more than one bearing approaches the same value. The oscillation complicates the condition for terminating the iterations. This paper proposes a technique for avoiding the indefiniteness in ML criteria.
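As a point of reference for the criterion discussed above, here is a hedged sketch of the deterministic ML DOA objective $tr(P_A R)$ for an arbitrary planar array. The array geometry, names, and grid search are illustrative assumptions; the paper's anti-indefiniteness technique itself is not implemented. When two bearings approach the same value, the steering matrix loses rank and the pseudo-inverse, and hence the criterion, becomes numerically indefinite.

```python
# Hedged sketch of the deterministic ML criterion for DOA estimation with an
# arbitrary planar array.  Illustrative only.
import numpy as np

def steering_matrix(thetas, sensor_xy, wavelength=1.0):
    """Steering vectors for plane waves arriving from bearings `thetas` (rad)."""
    k = 2.0 * np.pi / wavelength
    directions = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)   # (d, 2)
    return np.exp(1j * k * (sensor_xy @ directions.T))                # (m, d)

def ml_criterion(thetas, R, sensor_xy):
    """tr(P_A R): maximised over the bearings in deterministic ML DOA finding.

    When two bearings coincide, A loses rank and the pseudo-inverse (and so
    the criterion) becomes numerically indefinite."""
    A = steering_matrix(np.asarray(thetas), sensor_xy)
    P = A @ np.linalg.pinv(A)                 # projection onto signal subspace
    return np.real(np.trace(P @ R))

# Toy usage: coarse grid search for one source seen by a 4-element array.
xy = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5], [0.5, 0.5]])
rng = np.random.default_rng(1)
snapshots = (steering_matrix(np.array([0.7]), xy) @ rng.normal(size=(1, 200))
             + 0.1 * (rng.normal(size=(4, 200)) + 1j * rng.normal(size=(4, 200))))
R = snapshots @ snapshots.conj().T / snapshots.shape[1]
grid = np.linspace(0.0, np.pi, 181)
print("estimated bearing:",
      grid[np.argmax([ml_criterion([t], R, xy) for t in grid])])
```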

Time-Delay Estimation in the Multi-Path Channel based on Maximum Likelihood Criterion

  • Xie, Shengdong;Hu, Aiqun;Huang, Yi
    • KSII Transactions on Internet and Information Systems (TIIS) / v.6 no.4 / pp.1063-1075 / 2012
  • To locate an object accurately in wireless sensor networks, distance measurement based on time-delay plays an important role. In this paper, we propose a maximum likelihood (ML) time-delay estimation algorithm for the multi-path wireless propagation channel. We obtain the joint probability density function by sampling the frequency-domain response of the multi-path channel, which can be measured with a vector network analyzer. Based on the ML criterion, the time-delay values of the different paths are estimated. Because the ML function is non-linear with respect to the multi-path time-delays, we first obtain coarse values for the different paths using a subspace fitting algorithm, take them as the initial point, and then obtain the ML time-delay estimates with a pattern search optimization method. The simulation results show that although the variance of the ML estimates does not reach the Cramer-Rao lower bound (CRLB), the performance is superior to that of the subspace fitting algorithm, so the method can be regarded as a fine (refinement) algorithm.
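A hedged sketch of this kind of ML fit is given below: the frequency-response model $H(f)\approx\sum_m a_m e^{-j2\pi f \tau_m}$ is fitted by minimizing the residual energy over the delays, with a crude single-path grid search plus a Nelder-Mead refinement standing in for the paper's subspace-fitting initialisation and pattern-search optimisation. Function names and the initialisation strategy are illustrative assumptions.

```python
# Hedged sketch of ML multi-path time-delay estimation from a sampled
# frequency response.  Illustrative only.
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(taus, freqs, H):
    """Up to constants, the ML cost for Gaussian noise: residual energy after
    projecting H onto the delay steering vectors exp(-j*2*pi*f*tau)."""
    E = np.exp(-2j * np.pi * np.outer(freqs, taus))       # (K, M) delay basis
    amps, *_ = np.linalg.lstsq(E, H, rcond=None)          # LS path amplitudes
    return np.sum(np.abs(H - E @ amps) ** 2)

def estimate_delays(freqs, H, n_paths, tau_grid):
    tau_grid = np.asarray(tau_grid)
    # Crude coarse initialisation: best single-path delays on a grid (the
    # paper instead uses a subspace fitting algorithm for this step).
    scores = np.array([neg_log_likelihood([t], freqs, H) for t in tau_grid])
    init = np.sort(tau_grid[np.argsort(scores)[:n_paths]])
    # Refinement with a derivative-free search (Nelder-Mead here, in place of
    # the paper's pattern search method).
    res = minimize(neg_log_likelihood, init, args=(freqs, H),
                   method="Nelder-Mead")
    return np.sort(res.x)
```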

Classical and Bayesian methods of estimation for power Lindley distribution with application to waiting time data

  • Sharma, Vikas Kumar;Singh, Sanjay Kumar;Singh, Umesh
    • Communications for Statistical Applications and Methods / v.24 no.3 / pp.193-209 / 2017
  • The power Lindley distribution and some of its properties are considered in this article. Maximum likelihood, least squares, maximum product spacings, and Bayes estimators are proposed to estimate all the unknown parameters of the power Lindley distribution. Lindley's approximation and Markov chain Monte Carlo techniques are utilized for the Bayesian calculations since the posterior distribution cannot be reduced to a standard distribution. The performances of the proposed estimators are compared on simulated samples. The waiting times of research articles to be accepted in statistical journals are fitted to the power Lindley distribution along with other competing distributions. The chi-square statistic, Kolmogorov-Smirnov statistic, Akaike information criterion, and Bayesian information criterion are used to assess goodness of fit. The power Lindley distribution was found to give a better fit for the data than the other distributions.
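As a small illustration of the maximum likelihood part, the sketch below fits the power Lindley distribution by numerical likelihood maximisation, assuming the usual parametrisation $f(x;{\alpha},{\beta})=\frac{{\alpha}{\beta}^2}{{\beta}+1}x^{{\alpha}-1}(1+x^{\alpha})e^{-{\beta}x^{\alpha}}$ for $x>0$. The least squares, maximum product spacings, and Bayes estimators studied in the paper are not shown, and the data name is a placeholder.

```python
# Minimal sketch of ML fitting for the power Lindley distribution under the
# usual two-parameter form; illustrative assumptions only.
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, x):
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    return -np.sum(np.log(a) + 2 * np.log(b) - np.log(b + 1)
                   + (a - 1) * np.log(x) + np.log1p(x ** a) - b * x ** a)

def fit_power_lindley(x):
    res = minimize(neg_log_lik, x0=np.array([1.0, 1.0]), args=(x,),
                   method="Nelder-Mead")
    return res.x    # ML estimates of (alpha, beta)

# Usage (waiting_times is a hypothetical array of positive observations):
# alpha_hat, beta_hat = fit_power_lindley(waiting_times)
```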

Application of the Weibull-Poisson long-term survival model

  • Vigas, Valdemiro Piedade;Mazucheli, Josmar;Louzada, Francisco
    • Communications for Statistical Applications and Methods / v.24 no.4 / pp.325-337 / 2017
  • In this paper, we propose a new four-parameter long-term lifetime distribution, the Weibull-Poisson long-term distribution, set in a competing-risks scenario and allowing decreasing, increasing, and unimodal hazard rate functions. The new distribution arises from a scenario of competing latent risks, in which the lifetime associated with a particular risk is not observable and only the minimum lifetime value among all risks is observed, in a long-term context. However, it can also be used in any other situation as long as it fits the data well. The exponential-Poisson long-term distribution and the Weibull long-term distribution are obtained as particular cases of the Weibull-Poisson long-term distribution. The properties of the proposed distribution are discussed, including its probability density, survival, and hazard functions and explicit algebraic formulas for its order statistics. Assuming censored data, we consider the maximum likelihood approach for parameter estimation. For different parameter settings, sample sizes, and censoring percentages, simulation studies were performed to study the mean square error of the maximum likelihood estimates and to compare the performance of the proposed model with that of its particular cases. The Akaike information criterion, Bayesian information criterion, and likelihood ratio test were used for model selection. The relevance of the approach is illustrated on two real datasets, where the new model is compared with its particular cases to show its potential and competitiveness.
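For the censored maximum likelihood step, a heavily hedged sketch follows. It assumes one plausible four-parameter formulation: a cure probability $p$ mixed with the minimum of a zero-truncated Poisson number of Weibull lifetimes, giving the population survival $S_{pop}(t)=p+(1-p)\,\frac{e^{{\lambda}S_W(t)}-1}{e^{\lambda}-1}$. The paper's exact parametrisation may differ, and all names below are illustrative.

```python
# Hedged sketch of censored ML estimation for a long-term (cure fraction)
# Weibull-Poisson model under one plausible parametrisation; the paper's
# exact formulation may differ.
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, t, delta):
    """t: observed times; delta: 1 = event observed, 0 = right censored."""
    p, lam, shape, scale = params
    if not (0.0 < p < 1.0) or lam <= 0.0 or shape <= 0.0 or scale <= 0.0:
        return np.inf
    Sw = np.exp(-(t / scale) ** shape)                        # Weibull survival
    fw = (shape / scale) * (t / scale) ** (shape - 1) * Sw    # Weibull density
    denom = np.expm1(lam)
    S_wp = np.expm1(lam * Sw) / denom         # survival of the minimum lifetime
    f_wp = lam * fw * np.exp(lam * Sw) / denom
    S_pop = p + (1.0 - p) * S_wp              # improper (long-term) survival
    f_pop = (1.0 - p) * f_wp
    return -np.sum(delta * np.log(f_pop) + (1 - delta) * np.log(S_pop))

def fit_weibull_poisson_lt(t, delta):
    x0 = np.array([0.3, 1.0, 1.0, np.median(t)])   # p, lambda, shape, scale
    res = minimize(neg_log_lik, x0, args=(t, delta), method="Nelder-Mead")
    return res.x
```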

Bootstrap Confidence Intervals for a One Parameter Model using Multinomial Sampling

  • Jeong, Hyeong-Chul;Kim, Dae-Hak
    • Journal of the Korean Data and Information Science Society / v.10 no.2 / pp.465-472 / 1999
  • We consider a bootstrap method for constructing confidence intervals for a one-parameter model using multinomial sampling. The convergence rates of the proposed bootstrap method are calculated for model-based maximum likelihood estimators (MLE) under multinomial sampling. Monte Carlo simulation was used to compare the performance of the bootstrap methods with normal approximations in terms of the average coverage probability criterion.
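A minimal sketch of a parametric (model-based) percentile bootstrap interval for a one-parameter multinomial model is shown below, using a Hardy-Weinberg trinomial with cell probabilities $({\theta}^2,\,2{\theta}(1-{\theta}),\,(1-{\theta})^2)$ as the illustrative model. This particular model and the percentile interval are assumptions for the example, not necessarily the choices made in the paper.

```python
# Percentile bootstrap CI for a one-parameter multinomial model, using the
# Hardy-Weinberg trinomial as an illustrative example.
import numpy as np

def mle(counts):
    """Closed-form MLE of theta for the Hardy-Weinberg trinomial."""
    n = counts.sum()
    return (2 * counts[0] + counts[1]) / (2 * n)

def bootstrap_ci(counts, level=0.95, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    n = counts.sum()
    th = mle(counts)
    probs = np.array([th ** 2, 2 * th * (1 - th), (1 - th) ** 2])
    boot = np.array([mle(rng.multinomial(n, probs)) for _ in range(n_boot)])
    alpha = 1 - level
    return np.quantile(boot, [alpha / 2, 1 - alpha / 2])

print(bootstrap_ci(np.array([42, 45, 13])))   # toy cell counts
```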

Inversion of Geophysical Data with Robust Estimation (로버스트추정에 의한 지구물리자료의 역산)

  • Kim, Hee Joon
    • Economic and Environmental Geology / v.28 no.4 / pp.433-438 / 1995
  • The most popular minimization method is based on the least-squares criterion, which uses the $L_2$ norm to quantify the misfit between observed and synthetic data. The solution of the least-squares problem is the maximum likelihood point of a probability density containing data with Gaussian uncertainties. The distribution of errors in geophysical data is, however, seldom Gaussian. Using the $L_2$ norm, large and sparsely distributed errors adversely affect the solution, and the estimated model parameters may even be completely unphysical. On the other hand, least-absolute-deviation optimization, which is based on the $L_1$ norm, has much more robust statistical properties in the presence of noise. The solution of the $L_1$ problem is the maximum likelihood point of a probability density containing data with longer-tailed errors than the Gaussian distribution. Thus, the $L_1$ norm gives more reliable estimates when a small number of large errors contaminate the data. The effect of outliers is further reduced by the M-fitting method with the Cauchy error criterion, which can be performed by an iteratively reweighted least-squares method.
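The last point, M-fitting with the Cauchy error criterion via iteratively reweighted least squares, can be sketched for a linear inverse problem $Gm=d$ as follows. The scale parameter, iteration count, and the toy straight-line example are illustrative assumptions, not the paper's setup.

```python
# Robust linear inversion by iteratively reweighted least squares (IRLS)
# with Cauchy weights, compared against plain L2 least squares.
import numpy as np

def irls_cauchy(G, d, scale, n_iter=20):
    """Approximately minimise sum(log(1 + (r_i/scale)^2)) for r = d - G @ m."""
    m = np.linalg.lstsq(G, d, rcond=None)[0]          # L2 starting model
    for _ in range(n_iter):
        r = d - G @ m
        w = 1.0 / (1.0 + (r / scale) ** 2)            # Cauchy weights
        m = np.linalg.lstsq(np.sqrt(w)[:, None] * G, np.sqrt(w) * d,
                            rcond=None)[0]
    return m

# Toy example: a straight-line fit where sparse large errors pull the L2 answer.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
d = 2.0 + 0.5 * x + 0.1 * rng.normal(size=x.size)
d[::10] += 5.0                                        # sparse large errors
G = np.column_stack([np.ones_like(x), x])
print("L2         :", np.linalg.lstsq(G, d, rcond=None)[0])
print("IRLS-Cauchy:", irls_cauchy(G, d, scale=0.1))
```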

The Efficiency of Conditional MLE for Pure Birth Processes

  • Yoon, Jong-Ook;Kim, Joo-Hwan
    • Proceedings of the Korean Reliability Society Conference / 2002.06a / pp.367-386 / 2002
  • The present paper is devoted to a study of the large-sample performance of a conditional maximum likelihood estimator (CMLE) for the parameter ${\lambda}$ in pure birth processes (PBP). To conduct conditional inference for the PBP, we derive the likelihood function of time-inhomogeneous Poisson processes. The limiting distributions of the CMLE under the likelihoods $L_{t}$ and $\overline{L_{t}}$ are investigated. We find that the CMLE is asymptotically efficient with respect to both $L_{t}$ and $\overline{L_{t}}$ under the efficiency criterion of Weiss & Wolfowitz (1974).
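As a simple point of reference, the sketch below simulates a linear pure birth process and computes the standard (unconditional) ML estimate $\hat{\lambda}=B/\int_0^t N(s)\,ds$, where $B$ is the number of births. The conditional MLE and the Weiss-Wolfowitz efficiency analysis studied in the paper are not reproduced, and the parameter values are made up.

```python
# Simulate a linear pure birth process (rate lambda per individual) and
# compute the standard ML estimate lambda_hat = births / total exposure.
import numpy as np

def simulate_pure_birth(lam, n0, t_end, seed=0):
    rng = np.random.default_rng(seed)
    t, n = 0.0, n0
    births, exposure = 0, 0.0
    while True:
        wait = rng.exponential(1.0 / (lam * n))   # time to the next birth
        if t + wait > t_end:
            exposure += (t_end - t) * n           # exposure up to t_end
            break
        exposure += wait * n
        t += wait
        n += 1
        births += 1
    return births, exposure

births, exposure = simulate_pure_birth(lam=0.5, n0=10, t_end=5.0)
print("lambda_hat =", births / exposure)
```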

MCE Training Algorithm for a Speech Recognizer Detecting Mispronunciation of a Foreign Language (외국어 발음오류 검출 음성인식기를 위한 MCE 학습 알고리즘)

  • Bae, Min-Young;Chung, Yong-Joo;Kwon, Chul-Hong
    • Speech Sciences / v.11 no.4 / pp.43-52 / 2004
  • Model parameters in HMM-based speech recognition systems are normally estimated using Maximum Likelihood Estimation (MLE). The MLE method is based mainly on the principle of statistical data fitting in terms of increasing the HMM likelihood. The optimality of this training criterion is conditioned on the availability of an infinite amount of training data and the correct choice of model. In practice, however, neither of these conditions is satisfied. In this paper, we propose a training algorithm, MCE (Minimum Classification Error), to improve the performance of a speech recognizer that detects mispronunciation of a foreign language. During conventional MLE training, the model parameters are adjusted to increase the likelihood of the word strings corresponding to the training utterances without taking into account the probability of other possible word strings. In contrast to MLE, the MCE training scheme takes account of possible competing word hypotheses and tries to reduce the probability of incorrect hypotheses. The discriminative training method using MCE shows better recognition results than the MLE method.
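For concreteness, the sketch below evaluates the smoothed MCE loss from per-hypothesis discriminant scores (e.g., HMM log-likelihoods of the correct and competing word strings), following the usual Juang-Katagiri form with a soft-max anti-discriminant and a sigmoid loss. The gradient-based update of the HMM parameters used in actual MCE training is not shown, and the score values are made up for illustration.

```python
# Hedged sketch of the Minimum Classification Error (MCE) loss computed from
# discriminant scores; the parameter update step is not shown.
import numpy as np

def mce_loss(correct_score, competing_scores, eta=1.0, gamma=1.0):
    competing_scores = np.asarray(competing_scores, dtype=float)
    # Anti-discriminant: smoothed maximum of the competing scores.
    anti = (1.0 / eta) * np.log(np.mean(np.exp(eta * competing_scores)))
    d = -correct_score + anti                  # > 0 suggests misclassification
    return 1.0 / (1.0 + np.exp(-gamma * d))    # sigmoid loss in [0, 1]

print(mce_loss(-100.0, [-105.0, -120.0]))   # correct hypothesis wins: small loss
print(mce_loss(-100.0, [-95.0, -120.0]))    # a competitor wins: loss near 1
```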
