• Title, Summary, Keyword: Markov


Uncertainty Assessment of Single Event Rainfall-Runoff Model Using Bayesian Model (Bayesian 모형을 이용한 단일사상 강우-유출 모형의 불확실성 분석)

  • Kwon, Hyun-Han; Kim, Jang-Gyeong; Lee, Jong-Seok; Na, Bong-Kil
    • Journal of Korea Water Resources Association / v.45 no.5 / pp.505-516 / 2012
  • The study applies the HEC-1 hydrologic simulation model, developed by the Hydrologic Engineering Center, to the Daecheong dam watershed to model hourly dam inflows. Although the HEC-1 model provides an automatic optimization technique for some of the parameters, the built-in optimization model is not sufficient for estimating reliable parameters; in particular, it often fails when a large number of parameters must be estimated. The main objective of this study is therefore to develop a Bayesian Markov chain Monte Carlo simulation based HEC-1 model (BHEC-1). The Clark IUH method for transforming precipitation excess to runoff and the Soil Conservation Service runoff curve number method for abstractions were used in the Bayesian Monte Carlo simulation. Simulating runoff at the Daecheong station in the HEC-1 model under the Bayesian optimization scheme yields posterior probability distributions of the hydrograph, thereby quantifying the uncertainty in the rainfall-runoff process. The proposed model showed strong performance in estimating model parameters and deriving full uncertainties, so it can be applied to various hydrologic problems such as frequency curve derivation, dam risk analysis, and climate change studies.
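
A minimal sketch of the kind of Markov chain Monte Carlo parameter calibration the abstract describes, assuming a toy one-parameter linear-reservoir runoff model, a uniform prior, and a Gaussian likelihood; it is not the BHEC-1 coupling itself, and every name and value in it is illustrative.

```python
# Metropolis-Hastings calibration of a toy runoff-model parameter against
# observed runoff. NOT the paper's BHEC-1 model; all values are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def simulate_runoff(rain, k):
    """Toy linear reservoir: a fraction k of storage drains each time step."""
    storage, out = 0.0, []
    for r in rain:
        storage += r
        q = k * storage
        storage -= q
        out.append(q)
    return np.array(out)

def log_posterior(k, rain, obs, sigma=1.0):
    if not 0.0 < k < 1.0:                      # uniform(0, 1) prior on k
        return -np.inf
    resid = obs - simulate_runoff(rain, k)
    return -0.5 * np.sum((resid / sigma) ** 2)  # Gaussian likelihood

rain = rng.exponential(2.0, size=48)            # synthetic hourly rainfall
obs = simulate_runoff(rain, 0.3) + rng.normal(0, 1.0, size=48)

samples, k = [], 0.5
lp = log_posterior(k, rain, obs)
for _ in range(5000):
    k_new = k + rng.normal(0, 0.05)             # random-walk proposal
    lp_new = log_posterior(k_new, rain, obs)
    if np.log(rng.uniform()) < lp_new - lp:     # Metropolis acceptance step
        k, lp = k_new, lp_new
    samples.append(k)

print("posterior mean of k:", np.mean(samples[1000:]))  # discard burn-in
```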

Genetic Variability Comparison of Wild Populations and Cultured Stocks of Flounder Paralichthys olivaceus Based on Microsatellite DNA Markers (넙치, Paralichthys olivaceus 자연 집단과 양식 집단의 유전학적 다양성 비교)

  • Jeong, Dal Sang; Noh, Jae Koo; Myeong, Jeong In; Lee, Jeong Ho; Kim, Hyun Choul; Park, Chul Ji; Min, Byung Hwa; Ha, Dong Soo; Jeon, Chang Young
    • Korean Journal of Ichthyology / v.21 no.4 / pp.221-226 / 2009
  • Six microsatellite DNA markers were used to investigate genetic variability in wild populations and cultured stocks of olive flounder Paralichthys olivaceus. The average observed (Ho) and expected (He) heterozygosities ranged from 0.722 to 0.959 and from 0.735 to 0.937, respectively. There was no distinguishable difference between the wild populations and cultured stocks in terms of observed and expected heterozygosity. However, the number of alleles per locus differed markedly between the two groups: 19.7 to 21.8 for the wild populations versus 12.0 to 14.7 for the cultured stocks. These results provide important information for seed production aimed at improving the genetic diversity of this species.
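
For orientation, a small sketch of how observed heterozygosity (Ho), expected heterozygosity (He), and allele counts per locus can be computed from genotype data; the genotypes below are invented and the calculation is generic, not the paper's analysis pipeline.

```python
# Per-locus diversity statistics from a toy set of diploid genotypes.
from collections import Counter

# each genotype is an (allele_a, allele_b) pair for one individual at one locus
genotypes = [(152, 156), (152, 152), (156, 160), (160, 160), (152, 156)]

n = len(genotypes)
ho = sum(a != b for a, b in genotypes) / n            # observed heterozygosity

allele_counts = Counter(a for g in genotypes for a in g)
total = sum(allele_counts.values())
freqs = [c / total for c in allele_counts.values()]
he = 1.0 - sum(p * p for p in freqs)                  # expected heterozygosity

print(f"alleles per locus={len(allele_counts)}  Ho={ho:.3f}  He={he:.3f}")
```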

Economic Evaluation and Budget Impact Analysis of the Surveillance Program for Hepatocellular Carcinoma in Thai Chronic Hepatitis B Patients

  • Sangmala, Pannapa; Chaikledkaew, Usa; Tanwandee, Tawesak; Pongchareonsuk, Petcharat
    • Asian Pacific Journal of Cancer Prevention / v.15 no.20 / pp.8993-9004 / 2014
  • Background: The incidence rate and the treatment costs of hepatocellular carcinoma (HCC) are high, especially in Thailand. Previous studies indicated that early detection through a surveillance program could help by down-staging the disease. This study aimed to compare the costs and health outcomes of introducing an HCC surveillance program versus no program, and to estimate the budget impact if the HCC surveillance program were implemented. Materials and Methods: A cost-utility analysis using a decision tree and Markov models was performed from a societal perspective to compare lifetime costs and outcomes of alternative HCC surveillance strategies with no program. Costs included direct medical, direct non-medical, and indirect costs. Health outcomes were measured as life years (LYs) and quality-adjusted life years (QALYs). The results were presented as the incremental cost-effectiveness ratio (ICER) in Thai baht (THB) per QALY gained. One-way and probabilistic sensitivity analyses were applied to investigate parameter uncertainties. A budget impact analysis (BIA) was performed from the governmental perspective. Results: Semi-annual ultrasonography (US) and semi-annual ultrasonography plus alpha-fetoprotein (US plus AFP) as the first screening for HCC surveillance would be cost-effective options at the willingness-to-pay (WTP) threshold of 160,000 THB per QALY gained compared with no surveillance program (ICER = 118,796 and 123,451 THB/QALY, respectively). Semi-annual US plus AFP yielded more net monetary benefit but required a substantially larger budget (237 to 502 million THB) than semi-annual US (81 to 201 million THB) over the next ten fiscal years. Conclusions: Our results suggest that a semi-annual US program should be used as the first screening for HCC surveillance and included in the benefit package of Thai health insurance schemes for both male and female chronic hepatitis B patients aged 40-50 years. Policy makers considered the program feasible, but additional evidence on the whole prevention system is needed before a strategic plan is implemented.
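
A minimal sketch of a Markov cohort cost-utility comparison of the kind the abstract describes, ending in an ICER; the three health states, transition probabilities, costs, utilities, and discount rate are all assumed for illustration and are not the parameters of the Thai HCC model.

```python
# Two-strategy Markov cohort model with discounted costs and QALYs.
import numpy as np

states = ["well", "HCC", "dead"]
# annual transition matrices: without and with a surveillance programme (assumed)
P_no   = np.array([[0.96, 0.03, 0.01],
                   [0.00, 0.70, 0.30],
                   [0.00, 0.00, 1.00]])
P_surv = np.array([[0.96, 0.03, 0.01],
                   [0.00, 0.80, 0.20],   # earlier detection -> better survival (assumed)
                   [0.00, 0.00, 1.00]])

cost = {"no":   np.array([0.0,   90_000, 0.0]),   # THB per state-year (assumed)
        "surv": np.array([2_000, 70_000, 0.0])}
utility = np.array([0.85, 0.60, 0.0])             # QALY weights (assumed)

def run(P, c, years=30, discount=0.03):
    dist = np.array([1.0, 0.0, 0.0])              # cohort starts in "well"
    total_cost = total_qaly = 0.0
    for t in range(years):
        d = 1.0 / (1.0 + discount) ** t
        total_cost += d * dist @ c
        total_qaly += d * dist @ utility
        dist = dist @ P                           # one Markov cycle
    return total_cost, total_qaly

c0, q0 = run(P_no, cost["no"])
c1, q1 = run(P_surv, cost["surv"])
print("ICER (THB/QALY):", (c1 - c0) / (q1 - q0))
```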

Analysis of an M/G/1/K Queueing System with Queue-Length Dependent Service and Arrival Rates (시스템 내 고객 수에 따라 서비스율과 도착율을 조절하는 M/G/1/K 대기행렬의 분석)

  • Choi, Doo-Il; Lim, Dae-Eun
    • Journal of the Korea Society for Simulation / v.24 no.3 / pp.27-35 / 2015
  • We analyze an M/G/1/K queueing system with queue-length dependent service and arrival rates. There is a single server and a buffer with finite capacity K, including the customer in service. Customers are served on a first-come, first-served basis. We place two thresholds $L_1$ and $L_2$ ($\geq L_1$) on the buffer. If the queue length at a service initiation epoch is less than $L_1$, the service time of customers follows $S_1$ with mean ${\mu}_1$ and customers arrive according to a Poisson process with rate ${\lambda}_1$. When the queue length at a service initiation epoch is at least $L_1$ and less than $L_2$, the service time is changed to $S_2$ with mean ${\mu}_2 \geq {\mu}_1$, while the arrival rate remains ${\lambda}_1$. Finally, if the queue length at a service initiation epoch is $L_2$ or greater, the arrival rate is also changed, to ${\lambda}_2$ ($\leq {\lambda}_1$), and the mean service time remains ${\mu}_2$. Using the embedded Markov chain method, we derive the queue length distribution at departure epochs. We also obtain the queue length distribution at an arbitrary time by the supplementary variable method. Finally, performance measures such as the loss probability and the mean waiting time are presented.
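
A rough discrete-event simulation of the threshold-controlled M/G/1/K system described above, assuming exponential service times and illustrative values for K, $L_1$, $L_2$, the arrival rates, and the mean service times; the paper's analysis uses an embedded Markov chain and the supplementary variable method, which this sketch does not reproduce, and the arrival-rate regime here is refreshed only at arrival epochs as a simplification.

```python
# Threshold-controlled finite-buffer single-server queue: estimate the loss
# probability and mean waiting time by simulation. All numbers are assumed.
import random

random.seed(1)
K, L1, L2 = 10, 3, 6
lam1, lam2 = 1.0, 0.6          # arrival rates, lambda2 <= lambda1
mu1, mu2 = 0.8, 1.1            # mean service times, mu2 >= mu1 (as in the abstract)

def service_time(n):           # regime chosen by queue length at service start
    return random.expovariate(1.0 / (mu1 if n < L1 else mu2))

def arrival_rate(n):
    return lam1 if n < L2 else lam2

t, horizon = 0.0, 200_000.0
queue = []                     # arrival times of customers in the system
next_arrival = random.expovariate(arrival_rate(0))
next_departure = float("inf")
lost = served = waits = 0
total_wait = 0.0

while t < horizon:
    if next_arrival <= next_departure:          # arrival event
        t = next_arrival
        if len(queue) < K:
            queue.append(t)
            if next_departure == float("inf"):  # idle server: service starts now
                waits += 1                      # zero waiting time
                next_departure = t + service_time(len(queue))
        else:
            lost += 1                           # buffer full: customer is lost
        next_arrival = t + random.expovariate(arrival_rate(len(queue)))
    else:                                       # departure event
        t = next_departure
        queue.pop(0)
        served += 1
        if queue:                               # next customer starts service
            total_wait += t - queue[0]
            waits += 1
            next_departure = t + service_time(len(queue))
        else:
            next_departure = float("inf")

print("loss probability  ~", lost / (lost + served + len(queue)))
print("mean waiting time ~", total_wait / waits)
```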

CRNN-Based Korean Phoneme Recognition Model with CTC Algorithm (CTC를 적용한 CRNN 기반 한국어 음소인식 모델 연구)

  • Hong, Yoonseok; Ki, Kyungseo; Gweon, Gahgene
    • KIPS Transactions on Software and Data Engineering / v.8 no.3 / pp.115-122 / 2019
  • For Korean phoneme recognition, hidden Markov model-Gaussian mixture model (HMM-GMM) systems or hybrid models that combine artificial neural networks with HMMs have mainly been used. However, these approaches have the limitation that they require force-aligned corpus training data manually annotated by experts. Recently, researchers have used neural-network-based phoneme recognition models that combine a recurrent neural network (RNN) structure with the connectionist temporal classification (CTC) algorithm to avoid the need for manually annotated training data. Yet these RNN-based models have another practical difficulty: the amount of data required grows as the structure becomes more sophisticated. This is particularly problematic for Korean, which lacks large refined corpora. In this study, we apply the CTC algorithm, which does not require forced alignment, to build a Korean phoneme recognition model. Specifically, the phoneme recognition model is based on a convolutional neural network (CNN), which requires relatively little data and can be trained faster than RNN-based models. We present the results of two experiments and the resulting best-performing phoneme recognition model, which distinguishes 49 Korean phonemes. The best-performing model combines a CNN with a 3-hop bidirectional LSTM and achieves a final phoneme error rate (PER) of 3.26, a considerable improvement over existing Korean phoneme recognition models, which report PERs ranging from 10 to 12.
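
A hedged PyTorch sketch of a CNN plus bidirectional LSTM acoustic model trained with CTC loss, in the spirit of the CRNN architecture described above; the layer sizes, feature dimensions, and the fake batch are assumptions, not the authors' configuration.

```python
# CRNN-style acoustic model with CTC loss over a 49-phoneme inventory.
import torch
import torch.nn as nn

NUM_PHONEMES = 49                      # index 0 is reserved for the CTC blank
N_CLASSES = NUM_PHONEMES + 1
N_MELS = 40

class CRNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.rnn = nn.LSTM(32 * N_MELS, 256, num_layers=3,
                           bidirectional=True, batch_first=True)
        self.fc = nn.Linear(512, N_CLASSES)

    def forward(self, x):              # x: (batch, time, n_mels)
        x = x.unsqueeze(1)             # (batch, 1, time, n_mels)
        x = self.conv(x)               # (batch, 32, time, n_mels)
        b, c, t, f = x.shape
        x = x.permute(0, 2, 1, 3).reshape(b, t, c * f)
        x, _ = self.rnn(x)             # (batch, time, 512)
        return self.fc(x)              # per-frame phoneme logits

model = CRNN()
ctc = nn.CTCLoss(blank=0, zero_infinity=True)

feats = torch.randn(4, 100, N_MELS)                       # fake log-mel batch
targets = torch.randint(1, N_CLASSES, (4, 20))            # fake phoneme labels
logits = model(feats).log_softmax(-1).transpose(0, 1)     # (time, batch, classes)
loss = ctc(logits, targets,
           input_lengths=torch.full((4,), 100),
           target_lengths=torch.full((4,), 20))
loss.backward()
print(float(loss))
```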

Relationship between Characteristics of Accounting Firms and Audit Engagement Risks based on Bayesian Network (베이지안 네트워크를 기반으로 한 회계법인의 속성과 감사계약체결위험간의 관계)

  • Sun, Eun-Jung; Park, Sung-Jin
    • Management & Information Systems Review / v.36 no.1 / pp.1-19 / 2017
  • One way of securing the reliability of accounting information is maintaining high audit quality, and the first step in improving audit quality is lowering audit engagement risk. This study therefore analyzed the relationship between the characteristics of accounting firms and audit engagement risk using a Bayesian network. The Markov blanket, the minimal set of explanatory variables affecting audit engagement risk, was identified, and based on the derived causal relationships a sensitivity analysis was conducted to verify which characteristics of accounting firms affect audit engagement risk. Previous research based on multiple regression analysis presumes a linear relationship between explanatory and dependent variables, which limits its ability to capture the relationships among the explanatory variables themselves. This study instead identified the interdependence among variables using a general Bayesian network and examined the ultimate impact of each variable on audit engagement risk, and hence on audit quality. The results should help improve the efficiency of supervision by allowing a supervisory institution to identify accounting firms that do not manage audit engagement risk properly and to strengthen its oversight of such firms in advance. In addition, by presenting the characteristics of accounting firms related to audit quality, this study can serve as a reference when a supervisory institution improves systems related to audit quality.
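
A small sketch of the Markov blanket idea used in the study: in a Bayesian network, the Markov blanket of a target node (its parents, children, and the children's other parents) is the minimal variable set that shields the target from the rest of the network. The toy network and variable names below are invented for illustration and are not the accounting-firm model.

```python
# Compute a node's Markov blanket from a directed acyclic graph.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("firm_size", "engagement_risk"),
    ("audit_hours", "engagement_risk"),
    ("engagement_risk", "audit_quality"),
    ("partner_experience", "audit_quality"),
])

def markov_blanket(dag, node):
    parents = set(dag.predecessors(node))
    children = set(dag.successors(node))
    spouses = {p for c in children for p in dag.predecessors(c)} - {node}
    return parents | children | spouses

print(markov_blanket(g, "engagement_risk"))
# {'firm_size', 'audit_hours', 'audit_quality', 'partner_experience'}
```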


Bayesian ordinal probit semiparametric regression models: KNHANES 2016 data analysis of the relationship between smoking behavior and coffee intake (베이지안 순서형 프로빗 준모수 회귀 모형 : 국민건강영양조사 2016 자료를 통한 흡연양태와 커피섭취 간의 관계 분석)

  • Lee, Dasom; Lee, Eunji; Jo, Seogil; Choi, Taeryeon
    • The Korean Journal of Applied Statistics / v.33 no.1 / pp.25-46 / 2020
  • This paper presents ordinal probit semiparametric regression models using the Bayesian spectral analysis regression (BSAR) method. Ordinal probit regression models ordinal responses (usually more than two categories) by linking the probability of falling into each category to a combination of available covariates through a probit link, the inverse of the normal cumulative distribution function. The Bayesian probit model facilitates posterior sampling by introducing a normally distributed latent variable, so the responses are categorized by cut-off points according to the values of the latent variable. In this paper, we extend the latent-variable approach to a semiparametric model for Bayesian ordinal probit regression with nonparametric functions, using the BSAR method based on a spectral representation of Gaussian processes. The latent variable is decomposed into a parametric component and a nonparametric component, with or without a shape constraint, to model ordinal responses and predict outcomes more flexibly. We illustrate the proposed methods with simulation studies comparing them with existing methods, and with a real data analysis of the Korean National Health and Nutrition Examination Survey (KNHANES) 2016 investigating the nonparametric relationship between smoking behavior and coffee intake.
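
A minimal generative sketch of the ordinal-probit latent-variable idea the abstract relies on: a latent value composed of a parametric part plus a nonparametric part plus standard normal noise is mapped to an ordered category by cut-points. The cut-points, the cosine stand-in for the Gaussian-process term, and all coefficients are illustrative assumptions, not the fitted BSAR model.

```python
# Latent-variable view of ordinal probit: cut-points turn a continuous latent
# response into ordered categories.
import numpy as np

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)                  # covariate entering parametrically
x2 = rng.uniform(-2, 2, size=n)          # covariate entering nonparametrically

beta = 0.8
f = np.cos(np.pi * x2 / 2)               # stand-in for the Gaussian-process term
z = beta * x1 + f + rng.normal(size=n)   # latent continuous response

cutpoints = [-0.5, 0.7]                  # interior cut-points -> 3 ordered categories
y = np.digitize(z, cutpoints)            # category labels 0, 1, 2

print(np.bincount(y))                    # how many observations fall in each category
```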

Sequential Bayesian Updating Module of Input Parameter Distributions for More Reliable Probabilistic Safety Assessment of HLW Radioactive Repository (고준위 방사성 폐기물 처분장 확률론적 안전성평가 신뢰도 제고를 위한 입력 파라미터 연속 베이지안 업데이팅 모듈 개발)

  • Lee, Youn-Myoung; Cho, Dong-Keun
    • Journal of Nuclear Fuel Cycle and Waste Technology (JNFCWT) / v.18 no.2 / pp.179-194 / 2020
  • A Bayesian approach was introduced to strengthen the belief in the prior distributions of input parameters for the probabilistic safety assessment of a radioactive waste repository. A GoldSim-based module was developed using the Markov chain Monte Carlo algorithm and implemented through GSTSPA (GoldSim Total System Performance Assessment), a GoldSim template for generic and site-specific safety assessment of radioactive repository systems. In this study, sequential Bayesian updating of prior distributions is explained in detail and used as the basis for a reliable safety assessment of the repository. For several selected parameters associated with nuclide transport in the fractured rock medium, the prior distribution was updated to three successive posterior distributions using assumed likelihood functions. The process was demonstrated through a probabilistic safety assessment of a conceptual repository for illustrative purposes. This study showed that even limited observed data can strengthen the belief in the prior distributions of input parameter values that are commonly available but usually uncertain. This is particularly applicable to nuclide behavior in and around the repository system, which typically involves a long time span and a wide modeling domain.
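
A hedged sketch of sequential Bayesian updating on a parameter grid, mirroring the prior followed by three successive posteriors described above: the posterior from one batch of observations becomes the prior for the next. The lognormal-shaped prior, the Gaussian likelihood, and the synthetic data are assumptions, not GSTSPA inputs.

```python
# Sequential (batch-by-batch) Bayesian updating of a single uncertain parameter.
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(0.1, 5.0, 500)                            # candidate parameter values
prior = np.exp(-0.5 * (np.log(grid) - np.log(1.0)) ** 2)     # lognormal-shaped prior
prior /= prior.sum()

true_value = 2.0
batches = [rng.normal(true_value, 0.5, size=5) for _ in range(3)]  # three data batches

belief = prior
for i, batch in enumerate(batches, start=1):
    like = np.ones_like(grid)
    for obs in batch:                                        # Gaussian likelihood
        like *= np.exp(-0.5 * ((obs - grid) / 0.5) ** 2)
    belief = belief * like
    belief /= belief.sum()                                   # posterior becomes next prior
    mean = (grid * belief).sum()
    print(f"posterior {i}: mean = {mean:.3f}")
```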

Visualization of Korean Speech Based on the Distance of Acoustic Features (음성특징의 거리에 기반한 한국어 발음의 시각화)

  • Pok, Gou-Chol
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.13 no.3 / pp.197-205 / 2020
  • Korean has the characteristic that the pronunciation of phoneme units such as vowels and consonants is fixed and the pronunciation associated with a written form does not change, so foreign learners can approach the language relatively easily. However, when words, phrases, or sentences are pronounced, the pronunciation varies widely and in complex ways at syllable boundaries, and the correspondence between spelling and pronunciation no longer holds. Consequently, it is very difficult for foreign learners to master standard Korean pronunciation. Despite these difficulties, a systematic analysis of pronunciation errors for Korean words is believed to be possible, because, unlike other languages including English, the relationship between Korean spelling and pronunciation can be described by a set of firm rules without exceptions. In this paper, we propose a visualization framework that displays the differences between standard and erroneous pronunciations as quantitative measures on the computer screen. Previous research only showed color representations or 3D graphics of speech properties, or animated views of the changing shapes of the lips and mouth cavity; moreover, the features used in such analyses were only point data, such as the average over a speech range. In this study, we propose a method that uses the time-series data directly instead of summarized or distorted data. This was realized with a deep-learning-based technique that combines a self-organizing map, a variational autoencoder, and a Markov model, and it achieved a considerable performance improvement over the method based on point data.
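
As a simple stand-in for comparing two pronunciations as whole time series rather than as point summaries, the sketch below computes a dynamic time warping (DTW) distance between two feature sequences; this is not the paper's SOM plus variational autoencoder plus Markov model pipeline, and the random feature matrices are fake data.

```python
# DTW distance between two acoustic feature sequences of different lengths.
import numpy as np

rng = np.random.default_rng(0)
ref = rng.normal(size=(60, 13))    # reference (standard) pronunciation, 13-dim frames
test = rng.normal(size=(75, 13))   # learner's pronunciation

def dtw_distance(a, b):
    n, m = len(a), len(b)
    d = np.full((n + 1, m + 1), np.inf)
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])      # frame-wise distance
            d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
    return d[n, m]

print("pronunciation distance:", dtw_distance(ref, test))
```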

On correlation and causality in the analysis of big data (빅 데이터 분석에서 상관성과 인과성)

  • Kim, Joonsung
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.8 no.8 / pp.845-852 / 2018
  • Mayer-Schönberger and Cukier (2013) explain why big data matters for our lives, presenting many cases in which the analysis of big data has great significance and raising intriguing issues about such analysis. The two authors claim that correlation is in many ways practically far more efficient and versatile than causality in the analysis of big data. Moreover, they claim that causality could be abandoned, since analysis and prediction founded on correlation must prevail. I critically examine the two authors' accounts of causality and correlation. First, I criticize the claim that correlation is sufficient for the analysis of data and for predictions founded on that analysis. I point out their misunderstanding of the distinction between correlation and causality, and show, through an analysis of Simpson's paradox, that spurious correlation can mislead our decisions. Second, I criticize both the claim that causality is less efficient than correlation in the analysis of big data and the claim that there is no mathematical theory of causality. I introduce the mathematical theories of causality founded on structural equation theory and show that causality has great significance for the analysis of big data.
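
A toy numerical illustration of Simpson's paradox, the phenomenon invoked above: an association that favors the same option in every subgroup reverses when the subgroups are pooled. The numbers follow the structure of the classic kidney-stone treatment example and are used purely for illustration.

```python
# Within each subgroup "treated" outperforms "control", yet the pooled data
# suggest the opposite: a reminder that raw correlation can mislead decisions.
group_a = {"treated": (81, 87),   "control": (234, 270)}   # subgroup A: successes/total
group_b = {"treated": (192, 263), "control": (55, 80)}     # subgroup B: successes/total

def rate(pair):
    successes, total = pair
    return successes / total

for name, g in [("A", group_a), ("B", group_b)]:
    print(f"group {name}: treated {rate(g['treated']):.2f} vs control {rate(g['control']):.2f}")

pooled_treated = tuple(map(sum, zip(group_a["treated"], group_b["treated"])))
pooled_control = tuple(map(sum, zip(group_a["control"], group_b["control"])))
print(f"pooled:  treated {rate(pooled_treated):.2f} vs control {rate(pooled_control):.2f}")
```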