• Title/Summary/Keyword: most a posteriori predictive probability


Prediction of extreme rainfall with a generalized extreme value distribution (일반화 극단 분포를 이용한 강우량 예측)

  • Sung, Yong Kyu; Sohn, Joong K.
    • Journal of the Korean Data and Information Science Society, v.24 no.4, pp.857-865, 2013
  • Extreme rainfall causes heavy losses of human life and property, and many studies have therefore tried to predict extreme rainfall using extreme value distributions. In this study, we use a generalized extreme value distribution to derive the posterior predictive density with a hierarchical Bayesian approach, based on data from the Seoul area from 1973 to 2010. The results show that the probability of extreme rainfall has been increasing over the last 20 years in the Seoul area, and that the proposed model works relatively well for both point prediction and predictive intervals.
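As a minimal sketch of the extreme-value idea in this abstract (a classical maximum-likelihood GEV fit rather than the paper's hierarchical Bayesian posterior predictive), one can fit SciPy's `genextreme` to annual rainfall maxima and read off exceedance probabilities and return levels. The data below are synthetic stand-ins for the Seoul series, and the 400 mm threshold is purely illustrative:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
# Hypothetical annual-maximum daily rainfall (mm), one value per year 1973-2010;
# drawn from a GEV so the fit below has something reasonable to recover.
annual_max = genextreme.rvs(c=-0.1, loc=250, scale=60, size=38, random_state=rng)

# Maximum-likelihood GEV fit (the paper instead uses a hierarchical Bayesian model).
shape, loc, scale = genextreme.fit(annual_max)

# Probability that a future year's maximum exceeds 400 mm (survival function),
# and the 100-year return level (upper 1% quantile).
p_exceed = genextreme.sf(400, shape, loc=loc, scale=scale)
level_100yr = genextreme.isf(1 / 100, shape, loc=loc, scale=scale)
print(f"P(annual max > 400 mm) = {p_exceed:.3f}")
print(f"100-year return level  = {level_100yr:.1f} mm")
```

A Bayesian treatment like the paper's would place priors on `(shape, loc, scale)` and average the GEV density over the posterior instead of plugging in point estimates.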

A Minimum-Error-Rate Training Algorithm for Pattern Classifiers and Its Application to the Predictive Neural Network Models (패턴분류기를 위한 최소오차율 학습알고리즘과 예측신경회로망모델에의 적용)

  • 나경민; 임재열; 안수길
    • Journal of the Korean Institute of Telematics and Electronics B, v.31B no.12, pp.108-115, 1994
  • Most pattern classifiers have been designed around the ML (Maximum Likelihood) training algorithm, which is simple and relatively powerful. ML training efficiently estimates the model parameters of each class individually, under the assumption that all class models in a classifier are statistically independent. That assumption, however, is not valid in many real situations, which degrades the performance of the classifier. In this paper, we propose a minimum-error-rate training algorithm based on the MAP (Maximum A Posteriori) approach. The algorithm regards the normalized outputs of the classifier as estimates of the a posteriori probabilities and tries to maximize those estimates. According to Bayes decision theory, the proposed algorithm satisfies the condition for minimum-error-rate classification. We apply this algorithm to the NPM (Neural Prediction Model) for speech recognition and derive new discriminative training algorithms. Experiments on the recognition of ten Korean digits showed a 37.5% reduction in the number of recognition errors.
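The MAP criterion described in this abstract, maximizing the classifier's normalized outputs as posterior estimates for the correct class, can be sketched with a plain softmax classifier trained by gradient descent on cross-entropy (which is equivalent to maximizing the estimated log posterior). This is not the paper's neural prediction model; the two-class toy data below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy 2-D, 2-class data standing in for speech feature vectors.
X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)), rng.normal(1.0, 1.0, (50, 2))])
y = np.repeat([0, 1], 50)

W = np.zeros((2, 2))
b = np.zeros(2)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Gradient descent on cross-entropy: each step raises the normalized output
# (posterior estimate) of the correct class, the MAP criterion the paper uses,
# here on a linear model rather than a neural prediction model.
for _ in range(200):
    P = softmax(X @ W + b)                  # normalized outputs = posterior estimates
    G = P.copy()
    G[np.arange(len(y)), y] -= 1.0          # gradient of -log P(correct class)
    W -= 0.1 * (X.T @ G) / len(y)
    b -= 0.1 * G.mean(axis=0)

acc = (softmax(X @ W + b).argmax(axis=1) == y).mean()
print(f"training accuracy = {acc:.2f}")
```

By Bayes decision theory, picking the class with the largest true posterior minimizes error rate, which is why driving the classifier's outputs toward the posteriors (rather than fitting each class density independently, as ML does) targets minimum-error-rate classification directly.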
