Search results (title, summary, keyword): Markov

2,212 results

Parametric Sensitivity Analysis of Markov Process Based RAM Model (Markov Process 기반 RAM 모델에 대한 파라미터 민감도 분석)

  • Kim, Yeong Seok; Hur, Jang Wook
    • Journal of the Korea Society of Systems Engineering, v.14 no.1, pp.44-51, 2018
  • The purpose of RAM analysis in weapon systems is to reduce life cycle costs and to improve combat readiness by meeting the RAM target values. Using a Markov process based simulation model (MPS, Markov Process Simulation) developed for RAM analysis, we analyzed the sensitivity of the RAM parameters (MTBF, MTTR, and ALDT) with respect to the operational use of the 81mm mortar. The model reaches steady state after about 15,000 hours, roughly two years of operation, and sensitivity is highest for ALDT. Continuous improvement of ALDT is therefore needed to improve combat readiness.
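
As a rough illustration of the parameter sensitivity idea, the sketch below perturbs each parameter of the standard steady-state availability relation Ao = MTBF / (MTBF + MTTR + ALDT) one at a time. The closed-form relation is a simplification of the paper's Markov simulation model, and the numbers are hypothetical stand-ins, not values from the paper.

```python
# Minimal sketch: steady-state operational availability and one-at-a-time
# parameter sensitivity. The parameter values are hypothetical.

def availability(mtbf, mttr, aldt):
    """Steady-state operational availability Ao = MTBF/(MTBF+MTTR+ALDT)."""
    return mtbf / (mtbf + mttr + aldt)

base = {"mtbf": 500.0, "mttr": 4.0, "aldt": 48.0}  # hours (assumed)
a0 = availability(**base)

# Perturb each parameter by +10% and report the change in availability.
for name in base:
    bumped = dict(base, **{name: base[name] * 1.10})
    print(f"{name}: dAo = {availability(**bumped) - a0:+.5f}")
```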

Bounding Methods for Markov Processes Based on Stochastic Monotonicity and Convexity (확률적 단조성과 콘벡스성을 이용한 마코프 프로세스에서의 범위한정 기법)

  • Yoon, Bok-Sik
    • Journal of Korean Institute of Industrial Engineers, v.17 no.1, pp.117-126, 1991
  • When {X(t), t ≥ 0} is a Markov process representing time-varying system states, we develop efficient bounding methods for some time-dependent performance measures. We use the discretization technique for stochastically monotone Markov processes, and a combination of discretization and uniformization for Markov processes with the stochastic convexity (concavity) property. Sufficient conditions for stochastic monotonicity and stochastic convexity of a Markov process are also given. A simple example demonstrates the validity of the bounding methods.
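
A short sketch of the uniformization step named above: a CTMC generator Q is converted to a DTMC via P = I + Q/L, with L at least the maximum exit rate, and transient probabilities are recovered as a Poisson-weighted sum of powers of P. The 3-state generator is an assumed example, not a construction from the paper.

```python
import numpy as np
from math import exp, factorial

# Uniformization sketch: transient distribution of a CTMC at time t.
# Q is an assumed 3-state generator, not one from the paper.
Q = np.array([[-1.0, 1.0, 0.0],
              [0.5, -1.5, 1.0],
              [0.0, 0.5, -0.5]])
lam = max(-Q.diagonal())           # uniformization rate L
P = np.eye(3) + Q / lam            # discretized (uniformized) chain

t = 2.0
p0 = np.array([1.0, 0.0, 0.0])     # start in state 0
probs = np.zeros(3)
term = p0.copy()
for n in range(60):                # truncated Poisson series
    weight = exp(-lam * t) * (lam * t) ** n / factorial(n)
    probs += weight * term
    term = term @ P
print(probs)                       # distribution of X(t)
```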


System Replacement Policy for A Partially Observable Markov Decision Process Model

  • Kim, Chang-Eun
    • Journal of Korean Institute of Industrial Engineers, v.16 no.2, pp.1-9, 1990
  • This study examines the control of deterioration processes for which only incomplete state information is available. When the deterioration is governed by a Markov process, such processes are known as Partially Observable Markov Decision Processes (POMDPs), which drop the assumption that the state or level of deterioration of the system is known exactly. This research investigates a two-state partially observable Markov chain in which only deterioration can occur and in which the only possible actions are to replace the system or leave it alone. The goal of this research is to develop a new jump algorithm with the potential to solve problems involving continuous state space Markov chains.
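
A minimal sketch of the two-state, replace-or-leave-alone setting: maintain a belief b = P(deteriorated), push it through the transition model, update it by Bayes' rule on a noisy signal, and replace when it crosses a threshold. The transition and observation probabilities and the threshold are illustrative assumptions; the paper's jump algorithm itself is not reproduced here.

```python
# Two-state POMDP sketch: belief tracking with a threshold replacement rule.
# All probabilities below are assumed for illustration.

P_DETERIORATE = 0.1          # P(good -> bad) per period; bad is absorbing
OBS = {"good": {"ok": 0.9, "alarm": 0.1},   # P(signal | true state)
       "bad":  {"ok": 0.3, "alarm": 0.7}}
THRESHOLD = 0.5              # replace once belief in "bad" exceeds this

def belief_update(b, signal):
    """One-step Bayes update of P(bad) after transition and observation."""
    b_pred = b + (1.0 - b) * P_DETERIORATE          # predicted P(bad)
    num = b_pred * OBS["bad"][signal]
    den = num + (1.0 - b_pred) * OBS["good"][signal]
    return num / den

b = 0.0
for t, signal in enumerate(["ok", "ok", "alarm", "alarm"]):
    b = belief_update(b, signal)
    action = "replace" if b > THRESHOLD else "leave alone"
    print(f"t={t} signal={signal} P(bad)={b:.3f} -> {action}")
```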


Numerical Iteration for Stationary Probabilities of Markov Chains

  • Na, Seongryong
    • Communications for Statistical Applications and Methods, v.21 no.6, pp.513-520, 2014
  • We study numerical methods for obtaining the stationary probabilities of continuous-time Markov chains whose embedded chains are periodic. The power method is applied to the balance equations of the periodic embedded Markov chains. Because the embedded chains are discrete-time processes, the power method can achieve an exponential convergence rate, which is not assured when it is applied directly to the original continuous-time Markov chains. An illustrative example is presented to investigate the numerical iteration of this paper. A numerical study shows that a rapid and stable solution for the stationary probabilities can be achieved regardless of periodicity and initial conditions.
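
A sketch of the overall recipe for an assumed 3-state generator whose embedded jump chain is periodic with period 3: run the power method on the embedded chain, then convert to continuous-time stationary probabilities via p_i proportional to nu_i / q_i. The damped iteration x <- x(P + I)/2 is a standard aperiodicity fix with the same fixed point; it may differ from the paper's exact scheme.

```python
import numpy as np

# Assumed generator; its embedded chain cycles 0 -> 1 -> 2 -> 0 (period 3).
Q = np.array([[-1.0, 1.0, 0.0],
              [0.0, -2.0, 2.0],
              [3.0, 0.0, -3.0]])
q = -Q.diagonal()                   # exit rates
P = Q / q[:, None] + np.eye(3)      # embedded (jump) chain, periodic

x = np.full(3, 1.0 / 3.0)           # arbitrary starting distribution
M = 0.5 * (P + np.eye(3))           # damping restores aperiodicity
for _ in range(200):                # power iteration on balance equations
    x = x @ M
    x /= x.sum()

p = x / q                           # CTMC stationary: nu_i / q_i, normalized
p /= p.sum()
print("embedded:", x)               # -> (1/3, 1/3, 1/3)
print("continuous-time:", p)        # -> (6/11, 3/11, 2/11)
```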

Development of Statistical Downscaling Model Using Nonstationary Markov Chain (비정상성 Markov Chain Model을 이용한 통계학적 Downscaling 기법 개발)

  • Kwon, Hyun-Han; Kim, Byung-Sik
    • Journal of Korea Water Resources Association, v.42 no.3, pp.213-225, 2009
  • A stationary Markov chain model is a stochastic process with the Markov property: given the present state, future states are independent of the past states. The Markov chain model has been widely used as a main tool for water resources design. A key assumption of the stationary Markov model is that its statistical properties remain the same for all times, so the model cannot represent changes in mean or variance. A primary objective of this study is therefore to develop a model that can make use of exogenous variables. Regression-based link functions are employed to dynamically update the model parameters given the exogenous variables, and the model parameters are estimated by canonical correlation analysis. The proposed model is applied to the daily rainfall series at Seoul station, comprising 46 years of data from 1961 to 2006. The model reproduces daily and seasonal characteristics simultaneously, so it can serve as a short- or mid-term prediction tool if elaborate GCM forecasts are used as predictors. The nonstationary Markov chain model can also be applied to climate change studies if GCM-based climate change scenarios are provided as inputs.
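
A minimal sketch of the nonstationary idea for a two-state (dry/wet) occurrence chain: the transition probabilities are updated at each step through a logistic link of an exogenous predictor. The link coefficients and the predictor series are illustrative assumptions, not fitted values or the GCM predictors used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

# (intercept, slope) of the link functions -- hypothetical values.
b01 = (-2.0, 0.8)   # dry -> wet
b11 = (0.5, 0.6)    # wet -> wet

z = np.sin(np.linspace(0, 4 * np.pi, 365))   # stand-in exogenous signal
state = 0                                    # start dry
wet = []
for t in range(365):
    if state == 0:
        p_wet = logistic(b01[0] + b01[1] * z[t])   # P(dry -> wet | z)
    else:
        p_wet = logistic(b11[0] + b11[1] * z[t])   # P(wet -> wet | z)
    state = int(rng.random() < p_wet)
    wet.append(state)
print("simulated wet-day fraction:", np.mean(wet))
```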

Prediction method of node movement using Markov Chain in DTN (DTN에서 Markov Chain을 이용한 노드의 이동 예측 기법)

  • Jeon, Il-kyu; Lee, Kang-whan
    • Journal of the Korea Institute of Information and Communication Engineering, v.20 no.5, pp.1013-1019, 2016
  • This paper describes a novel Context-awareness Markov Chain Prediction (CMCP) algorithm based on movement prediction using a Markov chain in a Delay Tolerant Network (DTN). Existing prediction models require additional information such as a node's schedule and delivery predictability, and network reliability suffers when this information is unavailable. To solve this problem, we propose a CMCP model based on observed node movement that can predict mobility without requiring such additional information. The main contribution of this paper is the definition of approximate speed and direction for the prediction scheme. The forwarding path of a moving node is predicted by manipulating a transition probability matrix based on Markov chain models, taking buffer availability and a given time interval into account. Simulation results indicate that the scheme increases the delivery ratio and decreases the transmission delay when predicting the movement paths of nodes in a DTN.
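
A small sketch of the underlying Markov chain machinery: estimate a transition probability matrix from a node's observed sequence of discretized movement directions, then predict the most likely next direction. The four-direction discretization and the sample trace are assumptions for illustration; the CMCP context terms (speed, buffer availability, interval time) are not modeled here.

```python
import numpy as np

DIRECTIONS = ["N", "E", "S", "W"]
trace = ["N", "N", "E", "E", "E", "S", "E", "E", "N", "E"]  # observed moves

idx = {d: i for i, d in enumerate(DIRECTIONS)}
counts = np.zeros((4, 4))
for prev, cur in zip(trace, trace[1:]):
    counts[idx[prev], idx[cur]] += 1

# Row-normalize into transition probabilities, with add-one smoothing so
# unobserved transitions still yield a valid distribution.
P = (counts + 1.0) / (counts + 1.0).sum(axis=1, keepdims=True)

current = trace[-1]
prediction = DIRECTIONS[int(P[idx[current]].argmax())]
print(f"after moving {current}, most likely next move: {prediction}")
```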

Stochastic Convexity in Markov Additive Processes (마코프 누적 프로세스에서의 확률적 콘벡스성)

  • Yoon, Bok-Sik
    • Proceedings of the Korean Operations and Management Science Society Conference, pp.147-159, 1991
  • Stochastic convexity (concavity) of a stochastic process is a very useful concept for various stochastic optimization problems. In this study we first establish the stochastic convexity of a certain class of Markov additive processes through a probabilistic construction based on the sample path approach. A Markov additive process is obtained by integrating a functional of the underlying Markov process with respect to time, and its stochastic convexity can be utilized to provide efficient methods for the optimal design or operating schedule of a wide range of stochastic systems. We also clarify the conditions for stochastic monotonicity of the Markov process, which is required for stochastic convexity of the Markov additive process. This result shows that stochastic convexity can be used in the analysis of probabilistic models based on birth and death processes, which have a very wide application area. Finally, we demonstrate the validity and usefulness of the theoretical results by developing efficient methods for optimal replacement scheduling based on the stochastic convexity property.
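
A Markov additive process of the kind described, Y(t) = integral of f(X(s)) ds for an underlying CTMC X, can be simulated directly from its sample path, as sketched below. The two-state generator and reward function f are assumed values, chosen so the long-run mean rate can be checked by hand.

```python
import numpy as np

rng = np.random.default_rng(1)

Q = np.array([[-1.0, 1.0],
              [2.0, -2.0]])          # assumed 2-state generator
f = np.array([0.0, 5.0])             # reward rate in each state (assumed)

def simulate(T, x0=0):
    """Accumulate Y(T) = integral of f(X(s)) ds along one sample path."""
    t, x, y = 0.0, x0, 0.0
    while t < T:
        hold = rng.exponential(1.0 / -Q[x, x])   # sojourn time in state x
        y += f[x] * min(hold, T - t)             # clip the final sojourn
        t += hold
        x = 1 - x            # only one other state in a 2-state chain
    return y

samples = [simulate(T=100.0) for _ in range(2000)]
# Stationary distribution is (2/3, 1/3), so E[Y(T)]/T -> 5/3.
print("mean of Y(100):", np.mean(samples))
```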


Sensitivity of Conditions for Lumping Finite Markov Chains

  • Suh, Moon-Taek
    • Journal of the Military Operations Research Society of Korea, v.11 no.1, pp.111-129, 1985
  • Markov chains with large transition probability matrices occur in many applications, such as manpower models. Under certain conditions the state space of a stationary discrete parameter finite Markov chain may be partitioned into subsets, each of which may be treated as a single state of a smaller chain that retains the Markov property. Such a chain is said to be 'lumpable', and the resulting lumped chain is a special case of more general functions of Markov chains. There are several reasons why one might wish to lump. First, there may be analytical benefits, including the relative simplicity of the reduced model and the development of a new model that inherits known or assumed strong properties of the original model (the Markov property). Second, there may be statistical benefits, such as increased robustness of the smaller chain and improved estimates of transition probabilities. Finally, the identification of lumps may provide new insights about the process under investigation.
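
The ordinary lumpability condition can be checked mechanically: for every block of the partition, all states within any given block must have the same total transition probability into that block. The sketch below tests an assumed 4-state chain against a candidate two-block partition and, when the test passes, builds the lumped transition matrix.

```python
import numpy as np

# Assumed 4-state chain and candidate partition, for illustration only.
P = np.array([[0.5, 0.3, 0.1, 0.1],
              [0.5, 0.3, 0.2, 0.0],
              [0.2, 0.2, 0.4, 0.2],
              [0.1, 0.3, 0.3, 0.3]])
partition = [[0, 1], [2, 3]]

def is_lumpable(P, partition, tol=1e-12):
    for block in partition:
        into_block = P[:, block].sum(axis=1)   # P(jump into block), per state
        for src in partition:
            if np.ptp(into_block[src]) > tol:  # must agree within src block
                return False
    return True

if is_lumpable(P, partition):
    lumped = np.array([[P[b[0], c].sum() for c in partition]
                       for b in partition])
    print(lumped)    # 2x2 transition matrix of the lumped chain
```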


A Study on the Parameter Estimation for the Bit Synchronization Using the Gauss-Markov Estimator (Gauss-Markov 추정기를 이용한 비트 동기화를 위한 파라미터 추정에 관한 연구)

  • Ryu, Heung-Gyoon; Ann, Sou-Guil
    • Journal of the Korean Institute of Telematics and Electronics, v.26 no.3, pp.8-13, 1989
  • We show that the parameters of a bipolar random square-wave signal process, its amplitude and phase, with unknown probability distributions, can be estimated simultaneously using the Gauss-Markov estimator, so that the transmitted digital data can be recovered in an additive Gaussian noise environment. A preprocessing stage, a correlator composed of a multiplier and a running integrator, is needed to convert the received process into sampled sequences and to obtain the observed data vectors used for the Gauss-Markov estimation.
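
For a linear observation model z = H theta + noise with known noise covariance R, the Gauss-Markov (best linear unbiased) estimator is theta_hat = (H' R^-1 H)^-1 H' R^-1 z. The sketch below applies it to a bipolar square-wave regressor with amplitude and a constant offset as the parameters; this simplified signal model is an assumption, not the paper's correlator front end.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 50
# Bipolar square-wave regressor plus a constant column (assumed model).
H = np.column_stack([np.sign(np.sin(np.linspace(0.1, 6 * np.pi, n))),
                     np.ones(n)])
theta_true = np.array([2.0, 0.3])        # (amplitude, offset), assumed
R = 0.25 * np.eye(n)                     # known noise covariance
z = H @ theta_true + rng.multivariate_normal(np.zeros(n), R)

# Gauss-Markov / generalized least squares estimate.
Ri = np.linalg.inv(R)
theta_hat = np.linalg.solve(H.T @ Ri @ H, H.T @ Ri @ z)
print("estimate:", theta_hat, "true:", theta_true)
```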
