• Title, Summary, Keyword: Markov

Prediction of Mobile Phone Menu Selection with Markov Chains (Markov Chain을 이용한 핸드폰 메뉴 선택 예측)

  • Lee, Suk Won;Myung, Rohae
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.33 no.4
    • /
    • pp.402-409
    • /
    • 2007
  • Markov chains have proven effective in predicting human behavior in areas such as web site access, multimedia educational systems, and driving environments. To extend the range of applications of predicting human behavior with Markov chains, this study investigated whether Markov chains could be used to predict human behavior in selecting mobile phone menu items. Compared with the aforementioned application areas, this study used Markov chains differently in two respects: an m-order 1-step Markov model and the concept of the Power Law of Learning. The results showed that human behavior in mobile phone menu selection was well fitted by the m-order 1-step Markov model with the Power Law of Learning used to allocate the history path vector weights. In other words, prediction of mobile phone menu selection with Markov chains was able to capture users' actual menu selections.
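
A minimal sketch of the kind of history-based prediction the abstract describes, assuming a simple interpretation in which the last m selected items form the history path and context weights follow a power law; the class name, menu labels, and parameter values are hypothetical, and the paper's exact weighting scheme is not reproduced here.

```python
from collections import defaultdict

class MenuPredictor:
    """Order-m, 1-step Markov predictor over menu selections.

    Histories (the last m selections) vote for the next item, with order-k
    contexts weighted by a power law (an assumed weighting, for illustration).
    """

    def __init__(self, m=3, alpha=0.5):
        self.m = m                       # maximum history order
        self.alpha = alpha               # power-law exponent (assumed value)
        # counts[history_tuple][next_item] -> observed frequency
        self.counts = defaultdict(lambda: defaultdict(int))

    def update(self, history, next_item):
        """Record one observed transition for every sub-history up to order m."""
        for k in range(1, self.m + 1):
            if len(history) >= k:
                self.counts[tuple(history[-k:])][next_item] += 1

    def predict(self, history):
        """Return the most likely next menu item, combining sub-histories."""
        scores = defaultdict(float)
        for k in range(1, self.m + 1):
            if len(history) < k:
                break
            weight = k ** self.alpha     # power-law weight for an order-k context
            ctx = tuple(history[-k:])
            total = sum(self.counts[ctx].values())
            for item, c in self.counts[ctx].items():
                scores[item] += weight * c / total
        return max(scores, key=scores.get) if scores else None


# Hypothetical usage with made-up menu items.
p = MenuPredictor(m=2)
session = ["Messages", "Inbox", "Messages", "Inbox", "Compose"]
for i in range(1, len(session)):
    p.update(session[:i], session[i])
print(p.predict(["Messages", "Inbox"]))   # prints a predicted next item, e.g. "Messages"
```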

Average run length calculation of the EWMA control chart using the first passage time of the Markov process (Markov 과정의 최초통과시간을 이용한 지수가중 이동평균 관리도의 평균런길이의 계산)

  • Park, Changsoon
    • The Korean Journal of Applied Statistics
    • /
    • v.30 no.1
    • /
    • pp.1-12
    • /
    • 2017
  • Many stochastic processes satisfy the Markov property exactly or at least approximately. A property of particular interest in a Markov process is the first passage time. Since Wald's sequential analysis, approximation of the first passage time has been studied extensively, and statistical computing techniques enabled by high-speed computers have made it possible to calculate property values close to the true ones. This article introduces the exponentially weighted moving average (EWMA) control chart as an example of a Markov process and studies how to calculate its average run length, highlighting issues that require caution for correct calculation. The approximation results for the first passage time derived in this research can be applied to any Markov process. In particular, approximating a continuous-time Markov process by a discrete-time Markov chain is useful for studying the properties of the stochastic process and makes computational approaches easy.
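
As general background on the Markov chain approximation of a first passage time that the abstract refers to, the sketch below computes the in-control average run length of a two-sided EWMA chart by discretizing the EWMA statistic into a finite set of states, in the spirit of the classical Brook-Evans construction; the smoothing constant, limit multiplier, and number of states are assumed values, not the settings analyzed in the paper.

```python
import numpy as np
from scipy.stats import norm

def ewma_arl(lam=0.1, L=2.703, n_states=201):
    """In-control ARL of a two-sided EWMA chart via a Markov chain approximation.

    The EWMA statistic Z_t = (1 - lam) * Z_{t-1} + lam * X_t, with X_t ~ N(0, 1)
    in control, is discretized into n_states cells on (-h, h), where
    h = L * sqrt(lam / (2 - lam)) is the asymptotic control limit.  The signal
    region |Z| >= h acts as an absorbing state, so the run length is the first
    passage time into it and ARL = p0' (I - Q)^{-1} 1, starting from Z_0 = 0.
    """
    h = L * np.sqrt(lam / (2.0 - lam))
    w = 2.0 * h / n_states                     # width of each discretization cell
    centers = -h + w * (np.arange(n_states) + 0.5)
    Q = np.empty((n_states, n_states))
    for i, zi in enumerate(centers):
        # P(next Z falls in cell j | current Z at the center of cell i)
        hi = (centers + w / 2 - (1.0 - lam) * zi) / lam
        lo = (centers - w / 2 - (1.0 - lam) * zi) / lam
        Q[i, :] = norm.cdf(hi) - norm.cdf(lo)
    p0 = np.zeros(n_states)
    p0[n_states // 2] = 1.0                    # chart starts at Z_0 = 0 (middle cell)
    return p0 @ np.linalg.solve(np.eye(n_states) - Q, np.ones(n_states))

print(round(ewma_arl(), 1))   # in-control ARL, several hundred for these assumed parameters
```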

A Markov Chain Representation of Statistical Process Monitoring Procedure under an ARIMA(0,1,1) Model (ARIMA(0,1,1)모형에서 통계적 공정탐색절차의 MARKOV연쇄 표현)

  • Park, Chang-Soon (박창순)
    • The Korean Journal of Applied Statistics
    • /
    • v.16 no.1
    • /
    • pp.71-85
    • /
    • 2003
  • In the economic design of a process control procedure in which quality is measured at fixed time intervals, the properties of the procedure are difficult to derive because of the discreteness of the measurement intervals. In this paper a Markov chain representation of the process monitoring procedure is developed and used to derive its properties when the process follows an ARIMA(0,1,1) model, which is designed to describe the effects of noise and special causes over the process cycle. The properties of the Markov chain depend on the transition matrix, which is determined by the control procedure and the process distribution. The derived Markov chain representation can be adapted to most types of control procedures and process distributions by obtaining the corresponding transition matrix.
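
As general background (not the paper's specific derivation): if $Q$ denotes the sub-matrix of the transition matrix restricted to the transient (non-signal) states and $\mathbf{p}_0$ the initial distribution over those states, then the run length $RL$ of the monitoring procedure satisfies $P(RL > n) = \mathbf{p}_0^{\top} Q^{n} \mathbf{1}$ and $ARL = E[RL] = \mathbf{p}_0^{\top} (I - Q)^{-1} \mathbf{1}$, where $\mathbf{1}$ is a vector of ones; run-length properties of this kind follow directly from the transition matrix.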

Implementation of Markov Chain: Review and New Application (관리도에서 Markov연쇄의 적용: 복습 및 새로운 응용)

  • Park, Chang-Soon
    • The Korean Journal of Applied Statistics
    • /
    • v.24 no.4
    • /
    • pp.657-676
    • /
    • 2011
  • Properties of statistical process control procedures often cannot be derived analytically; in many such cases, applying a Markov chain can solve the problem. This article shows how to derive the properties of process control procedures from the generated Markov chain when the control statistic satisfies the Markov property. Markov chain approaches that appear in the literature (such as the statistical design and economic design of control charts, as well as variable sampling rate designs) are reviewed, along with research results on their application to a new control procedure, the reset chart. The joint application of a Markov chain approach and analytical solutions (when available) can guarantee the correct derivation of the properties. A Markov chain approach is recommended over simulation studies because of its precise derivation of properties and short computation times.

ON STATIONARY GAUSSIAN SECOND ORDER MARKOV PROCESSES

  • Park, W.J.;Hsu, Y.S.
    • Kyungpook Mathematical Journal
    • /
    • v.19 no.2
    • /
    • pp.249-255
    • /
    • 1979
  • In this paper we give a characterization of stationary Gaussian second-order Markov processes in terms of the covariance function $R({\tau})=E[X(t)X(t+{\tau})]$ and also give some relationships among quasi-Markov, Markov, and second-order Markov processes.
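
For orientation only (a standard textbook example, not necessarily the characterization obtained in the paper): a stationary Gaussian process generated by a second-order linear stochastic differential equation driven by white noise is Markov of order two in the state-space sense, and in the critically damped case its covariance function takes the form $R(\tau) = \sigma^{2} e^{-\alpha|\tau|}(1 + \alpha|\tau|)$ with $\alpha > 0$; in the underdamped case the factor $(1 + \alpha|\tau|)$ is replaced by $\cos\beta\tau + (\alpha/\beta)\sin\beta|\tau|$.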

Equivalent Transformations of Undiscounted Nonhomogeneous Markov Decision Processes

  • Park, Yun-Sun
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.17 no.2
    • /
    • pp.131-144
    • /
    • 1992
  • Even though nonhomogeneous Markov decision processes subsume homogeneous Markov decision processes and are more practical in the real world, there are not many results for them. In this paper we address the nonhomogeneous Markov decision process with the objective of maximizing average reward. By extending the work of Ross [17] for the homogeneous case and adopting the result of Bean and Smith [3] for the discounted deterministic problem, we first transform the original problem into a discounted nonhomogeneous Markov decision process and then into a discounted deterministic problem. This approach not only shows the interrelationships between the various problems but also yields an approach to solving the undiscounted nonhomogeneous Markov decision process.

Development of Daily Rainfall Simulation Model Based on Homogeneous Hidden Markov Chain (동질성 Hidden Markov Chain 모형을 이용한 일강수량 모의기법 개발)

  • Kwon, Hyun-Han;Kim, Tae Jeong;Hwang, Seok-Hwan;Kim, Tae-Woong
    • Journal of The Korean Society of Civil Engineers
    • /
    • v.33 no.5
    • /
    • pp.1861-1870
    • /
    • 2013
  • Increased hydrological variability driven by climate change has been widely acknowledged over the past decades. In this regard, rainfall simulation techniques are being applied in many countries to account for the increased variability. This study proposes a homogeneous hidden Markov chain (HMM) model designed to recognize rather complex rainfall patterns through discrete hidden states and underlying distributional characteristics described by mixture probability density functions. The proposed approach was applied to the Seoul and Jeonju stations to verify the model's performance. Statistical moments (e.g., mean, variance, skewness, and kurtosis) derived from daily and seasonal rainfall were compared with observations. The proposed HMM showed better performance in reproducing the underlying distributional characteristics and, in particular, was much better than the existing Markov chain model at reproducing extremes. In this regard, the proposed HMM could be used to provide inputs for evaluating long-term runoff and design floods.
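
A minimal sketch of a homogeneous hidden Markov chain rainfall generator in the spirit of the abstract, assuming two hidden states (dry-prone and wet-prone) and exponential wet-day amounts; the number of states, the emission distributions, and all parameter values are illustrative, not those fitted in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-state homogeneous hidden Markov chain for daily rainfall.
P = np.array([[0.8, 0.2],        # transition matrix between hidden states
              [0.4, 0.6]])
p_wet = np.array([0.2, 0.9])     # P(rain > 0 | hidden state)
scale = np.array([3.0, 15.0])    # mean wet-day amount (mm) per state, exponential

def simulate_rainfall(n_days, state=0):
    """Simulate n_days of daily rainfall from the hidden Markov chain."""
    rain = np.zeros(n_days)
    for t in range(n_days):
        state = rng.choice(2, p=P[state])            # hidden-state transition
        if rng.random() < p_wet[state]:              # wet/dry occurrence
            rain[t] = rng.exponential(scale[state])  # wet-day amount
    return rain

x = simulate_rainfall(365 * 30)
print(x.mean(), x.var(), (x > 0).mean())   # moments and wet-day fraction to compare with data
```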

A Study of an Efficient Implementation for the m-th order Markov Hangul Information Source (m차 Markov 한글 정보원의 효율적인 구현에 관한 연구)

  • Nam, Ki-Dong;Hong, Jong-Joon;Kim, En-Dae;Lee, Kyoon-Ha
    • Annual Conference on Human and Language Technology
    • /
    • /
    • pp.267-278
    • /
    • 1991
  • To address the enormous memory requirement, on the order of a terabyte or more, that arises when a Hangul (Korean) information source is implemented as a Markov source, this paper surveys the relevant statistics and, based on them, proposes a way to reduce the required memory. With the proposed method, implementing the Hangul information source by transition probabilities in a paged list structure, a Markov Korean information source of order 7 or higher could be realized in a few hundred kilobytes of memory. In addition, a backward Markov information source is proposed to broaden the applicability of the Markov Korean information source. The proposed method is expected to serve as basic material not only for correcting missing words in Hangul sentences but also for any field that applies Markov sources to Hangul.
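
A minimal sketch of the memory issue described above, assuming a sparse nested-dictionary store in place of the paper's paged list structure: only contexts that actually occur in the training text are stored, so an order-m model stays far below the size of the full transition table. The class name, parameters, and training string are illustrative.

```python
from collections import defaultdict, Counter
import random

class OrderMMarkovSource:
    """Order-m character-level Markov source with sparse storage.

    A full transition table over an alphabet of size K would need K**m rows;
    storing only observed contexts keeps memory proportional to the number of
    distinct m-grams in the training text.
    """

    def __init__(self, m=3):
        self.m = m
        self.table = defaultdict(Counter)   # context (tuple of m chars) -> next-char counts

    def train(self, text):
        for i in range(len(text) - self.m):
            ctx = tuple(text[i:i + self.m])
            self.table[ctx][text[i + self.m]] += 1

    def generate(self, seed, n):
        """Generate up to n characters following the transition probabilities."""
        out = list(seed)
        for _ in range(n):
            counts = self.table.get(tuple(out[-self.m:]))
            if not counts:
                break                        # unseen context: stop (or back off)
            chars, weights = zip(*counts.items())
            out.append(random.choices(chars, weights=weights)[0])
        return "".join(out)


# Illustrative usage; a real Hangul corpus would be used in place of this string.
src = OrderMMarkovSource(m=2)
src.train("마르코프 정보원 마르코프 연쇄 마르코프 모형")
print(src.generate("마르", 10))
print(len(src.table), "distinct contexts stored")
```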

Korean Phoneme Recognition Using duration-dependent 3-State Hidden Markov Model (음소길이를 고려한 3-State Hidden Markov Model 에 의한 한국어 음소인식)

  • Yoo, H.-C.;Lee, H.-J.;Park, B.-C.
    • The Journal of the Acoustical Society of Korea
    • /
    • v.8 no.1
    • /
    • pp.81-87
    • /
    • 1989
  • This paper describes a method for modeling Korean phonemes. Hidden Markov models (HMMs) can be viewed as an effective technique for modeling the inherent nonstationarity of speech signals. We propose a 3-state phoneme model to represent the sequentially changing characteristics of phonemes, i.e., transition-to-stationary-to-transition. We also show that phoneme duration is an important factor affecting recognition accuracy and that an improvement in recognition rate can be obtained by using duration-dependent 3-state hidden Markov models.
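
A minimal sketch of how a duration term can be combined with a left-to-right 3-state HMM score, assuming scalar Gaussian emissions and a Poisson duration model; this is a generic illustration of duration-dependent scoring, not the specific recognizer of the paper, and all parameter values are made up.

```python
import numpy as np
from scipy.stats import norm, poisson

# Hypothetical left-to-right 3-state phoneme model: transition - stationary - transition.
means = np.array([0.0, 1.0, 0.0])        # Gaussian emission means (1-D features)
sds = np.array([1.0, 0.5, 1.0])          # Gaussian emission standard deviations
stay = np.array([0.6, 0.8, 0.6])         # self-transition probabilities
dur_mean = np.array([3.0, 8.0, 3.0])     # expected state durations (frames), Poisson model

def duration_score(obs):
    """Viterbi log-score of a frame sequence plus a log duration-probability term."""
    T, S = len(obs), 3
    logb = norm.logpdf(obs[:, None], means, sds)          # (T, S) emission log-likelihoods
    delta = np.full((T, S), -np.inf)
    psi = np.zeros((T, S), dtype=int)
    delta[0, 0] = logb[0, 0]                              # must start in the first state
    for t in range(1, T):
        for s in range(S):
            best_prev, best = s, delta[t - 1, s] + np.log(stay[s])        # stay in s
            if s > 0:
                alt = delta[t - 1, s - 1] + np.log(1.0 - stay[s - 1])     # advance s-1 -> s
                if alt > best:
                    best_prev, best = s - 1, alt
            delta[t, s] = best + logb[t, s]
            psi[t, s] = best_prev
    path = [S - 1]                                        # require ending in the last state
    for t in range(T - 1, 0, -1):
        path.append(psi[t, path[-1]])
    path.reverse()
    durations = np.bincount(path, minlength=S)            # frames spent in each state
    return delta[T - 1, S - 1] + poisson.logpmf(durations, dur_mean).sum()

# Hypothetical frame sequence of 1-D features for one phoneme segment.
frames = np.array([0.1, -0.2, 0.9, 1.1, 1.0, 0.8, 1.2, 0.2, -0.1])
print(duration_score(frames))
```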

Bayesian Analysis of Binary Non-homogeneous Markov Chain with Two Different Time Dependent Structures

  • Sung, Min-Je
    • Management Science and Financial Engineering
    • /
    • v.12 no.2
    • /
    • pp.19-35
    • /
    • 2006
  • We use a hierarchical Bayesian approach to describe the transition probabilities of a binary nonhomogeneous Markov chain, which is used to model the transition behavior of emotionally disturbed children in a treatment program. The effects of covariates on the transition probabilities are assessed using a logit link function. To describe the time evolution of the transition probabilities, we consider two modeling strategies: the first is based on the concept of exchangeability, whereas the second is based on a first-order Markov property. The deviance information criterion (DIC) is used to compare the models with the two different time-dependent structures. Inferences are made using the Markov chain Monte Carlo technique. The developed methodology is applied to real data.
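
A minimal sketch of the data-generating side of such a model, assuming transition probabilities that depend on a covariate through a logit link; the hierarchical prior structure and the MCMC fitting step of the paper are not reproduced here, and the coefficients and covariate sequence are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

# Logit-linear transition probabilities of a binary nonhomogeneous Markov chain:
# logit P(y_t = 1 | y_{t-1} = j, x_t) = beta0[j] + beta1[j] * x_t,  j in {0, 1}.
beta0 = np.array([-1.0, 0.5])     # intercepts, one per previous state (illustrative)
beta1 = np.array([0.8, -0.3])     # covariate effects, one per previous state

def simulate_chain(x, y0=0):
    """Simulate the binary chain given a covariate sequence x_1, ..., x_T."""
    y = [y0]
    for xt in x:
        p1 = logistic(beta0[y[-1]] + beta1[y[-1]] * xt)   # P(next state = 1)
        y.append(int(rng.random() < p1))
    return np.array(y[1:])

T = 200
x = np.linspace(-2, 2, T)          # hypothetical time-varying covariate
y = simulate_chain(x)
# Empirical one-step transition frequencies, pooled over time.
for j in (0, 1):
    idx = np.where(y[:-1] == j)[0]
    print(f"P(1 | prev={j}) is about {y[idx + 1].mean():.2f}")
```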