• Title/Summary/Keyword: Markov


Image Analysis Using a Markov Random Field and the TMS320C80 (MVP) (TMS320C80(MVP)과 markov random field를 이용한 영상해석)

  • 백경석;정진현
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 1997.10a
    • /
    • pp.1722-1725
    • /
    • 1997
  • This paper presents an image analysis method based on a Markov random field (MRF) model. In particular, image segmentation partitions a given image into regions. The image is first segmented into regions, and the domain knowledge thus obtained is used to refine the segmentation with an MRF model. The method is maximum a posteriori (MAP) estimation with the MRF model and its associated Gibbs distribution. The MAP estimation method is applied to natural images captured with the TMS320C80 (MVP) to realize the MRF-based segmentation.

  • PDF
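
The MAP estimation with a Gibbs prior that the abstract describes is commonly approximated by iterated conditional modes (ICM); the sketch below is a generic binary-segmentation version of that idea (a Gaussian likelihood with assumed class means `mu0`/`mu1` plus an Ising smoothness prior), not the paper's TMS320C80 (MVP) implementation:

```python
import numpy as np

def icm_segment(image, mu0, mu1, beta=1.0, sweeps=5):
    """Binary MAP segmentation via iterated conditional modes (ICM).

    Per-pixel energy: (y - mu_k)^2 / 2 + beta * (# disagreeing 4-neighbours),
    i.e. a Gaussian likelihood term plus an Ising (MRF) smoothness prior.
    """
    # Initialise each pixel with the nearer class mean.
    labels = (np.abs(image - mu1) < np.abs(image - mu0)).astype(int)
    h, w = image.shape
    for _ in range(sweeps):
        for i in range(h):
            for j in range(w):
                best, best_e = labels[i, j], np.inf
                for k, mu in enumerate((mu0, mu1)):
                    e = 0.5 * (image[i, j] - mu) ** 2
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w and labels[ni, nj] != k:
                            e += beta
                    if e < best_e:
                        best, best_e = k, e
                labels[i, j] = best  # greedy local energy minimisation
    return labels
```

ICM only finds a local minimum of the Gibbs energy, but it is cheap enough to run on embedded hardware, which is presumably why a DSP implementation is feasible.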

System Availability Analysis using Markov Process (Markov Process를 활용한 시스템 가용도 분석 연구)

  • Kim, Han Sol;Kim, Bo Hyeon;Hur, Jang Wook
    • Journal of the Korean Society of Systems Engineering
    • /
    • v.14 no.1
    • /
    • pp.36-43
    • /
    • 2018
  • The availability of a weapon system can be analyzed through state modeling and simulation using the Markov process. This paper shows how to analyze the availability of a weapon system and how the Markov process can be used to analyze not only the system's steady state but also the RAM at a transient point in time. In the availability analysis of tracked vehicles, the inherent availability showed a difference of 2.6% and the operational availability a difference of 1.2%; since the validity criterion was defined as a difference within 3%, the model was judged valid. The faulty items were identified through graphs of the number of visits per state obtained through the MPS, and these can be used to provide design alternatives.
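
For reference, the simplest Markov availability model is a two-state (up/down) process with failure rate `lam` and repair rate `mu`; a minimal sketch of its steady-state and transient availability (a textbook formula, not the paper's tracked-vehicle model) is:

```python
import math

def availability(lam, mu, t=None):
    """Availability of a two-state (up/down) Markov process.

    lam: failure rate, mu: repair rate. With t=None, return the
    steady-state availability mu / (lam + mu); otherwise return the
    transient availability A(t), starting from the 'up' state:
        A(t) = mu/(lam+mu) + lam/(lam+mu) * exp(-(lam+mu) * t)
    """
    a_inf = mu / (lam + mu)
    if t is None:
        return a_inf
    return a_inf + (lam / (lam + mu)) * math.exp(-(lam + mu) * t)
```

The transient term decays at rate `lam + mu`, which is why a Markov model can report RAM figures at a point in time as well as at steady state.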

System Replacement Policy for A Partially Observable Markov Decision Process Model

  • Kim, Chang-Eun
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.16 no.2
    • /
    • pp.1-9
    • /
    • 1990
  • The control of deterioration processes for which only incomplete state information is available is examined in this study. When the deterioration is governed by a Markov process, such processes are known as Partially Observable Markov Decision Processes (POMDPs), which drop the assumption that the state or level of deterioration of the system is known exactly. This research investigates a two-state partially observable Markov chain in which only deterioration can occur and in which the only possible actions are to replace the system or to leave it alone. The goal of this research is to develop a new jump algorithm with the potential to solve system problems involving continuous-state-space Markov chains.

  • PDF
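
A two-state POMDP of this kind maintains a belief (the probability that the system has deteriorated) and acts on it. A minimal sketch of the Bayesian belief update and a threshold replacement rule, with hypothetical deterioration and sensor probabilities (not the paper's jump algorithm), might look like:

```python
def belief_update(b_bad, p_deteriorate, obs,
                  p_obs_bad_given_bad, p_obs_bad_given_good):
    """One step of belief updating for a two-state POMDP.

    b_bad: current probability that the system is deteriorated.
    The system first deteriorates with prob p_deteriorate (if still good);
    then a noisy observation ('bad' or 'good') is seen.
    Returns the posterior probability of the deteriorated state.
    """
    # Predict step: deterioration is one-way (good -> bad only).
    prior_bad = b_bad + (1.0 - b_bad) * p_deteriorate
    # Correct step: Bayes' rule with the observation likelihoods.
    if obs == 'bad':
        like_bad, like_good = p_obs_bad_given_bad, p_obs_bad_given_good
    else:
        like_bad, like_good = 1 - p_obs_bad_given_bad, 1 - p_obs_bad_given_good
    num = like_bad * prior_bad
    return num / (num + like_good * (1.0 - prior_bad))

def replace_policy(b_bad, threshold=0.7):
    """Threshold policy: replace once the belief in deterioration is high."""
    return 'replace' if b_bad >= threshold else 'leave'
```

For two-state POMDPs with monotone dynamics, a threshold policy of this form is the typical structure of the optimal replacement rule.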

Application of a Markov State Model for the RCM of a Combustion Turbine Generating Unit (Markov State Model을 이용한 복합화력 발전설비의 최적의 유지보수계획 수립)

  • Shin, Jun-Seok;Lee, Seung-Hyuk;Kim, Jin-O
    • Proceedings of the KIEE Conference
    • /
    • 2006.11a
    • /
    • pp.357-359
    • /
    • 2006
  • Traditional time-based preventive maintenance uses a constant maintenance interval over the equipment's life. To take the economic aspect of time-based preventive maintenance into account, preventive maintenance is scheduled by RCM (Reliability-Centered Maintenance) evaluation, and a Markov state model is used to capture the stochastic states in RCM. This paper presents a Markov state model that can be used for the scheduling and optimization of maintenance. The deterioration of the system condition is modeled as a Markov process. In the case study, the RCM simulation is applied to real historical data from combustion turbine generating units in Korean power systems.

  • PDF
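
The long-run fraction of time the unit spends in each deterioration state follows from the stationary distribution of the Markov state model. A minimal sketch with a hypothetical four-state chain (three deterioration stages plus failure, with maintenance returning the unit to as-new; the rates are illustrative, not the paper's data) is:

```python
import numpy as np

def steady_state(P):
    """Stationary distribution pi of a discrete-time Markov chain: pi P = pi."""
    n = P.shape[0]
    # Stack the balance equations with the normalisation sum(pi) = 1.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Hypothetical deterioration model: D1 -> D2 -> D3 -> F, with
# maintenance (rows 2-3) and repair (row 4) returning the unit to D1.
P = np.array([
    [0.90, 0.10, 0.00, 0.00],   # D1: as-new, minor deterioration possible
    [0.05, 0.85, 0.10, 0.00],   # D2: maintenance back to D1, or worsen
    [0.10, 0.00, 0.80, 0.10],   # D3: major maintenance, or fail
    [1.00, 0.00, 0.00, 0.00],   # F : repair to as-new
])
pi = steady_state(P)
```

Weighting each state's probability by its maintenance cost then gives the long-run cost rate of a candidate maintenance schedule, which is what the RCM optimization trades off.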

SOME GENERALIZED SHANNON-MCMILLAN THEOREMS FOR NONHOMOGENEOUS MARKOV CHAINS ON SECOND-ORDER GAMBLING SYSTEMS INDEXED BY AN INFINITE TREE WITH UNIFORMLY BOUNDED DEGREE

  • Wang, Kangkang;Xu, Zurun
    • Journal of applied mathematics & informatics
    • /
    • v.30 no.1_2
    • /
    • pp.83-92
    • /
    • 2012
  • In this paper, a generalized Shannon-McMillan theorem for nonhomogeneous Markov chains indexed by an infinite tree of uniformly bounded degree is established by constructing a nonnegative martingale and using analytical methods. As corollaries, Shannon-McMillan theorems for nonhomogeneous Markov chains indexed by a homogeneous tree and for ordinary nonhomogeneous Markov chains are obtained, extending some previously known results.
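
For orientation, the classical Shannon-McMillan theorem (the asymptotic equipartition property) that these tree-indexed results generalize can be stated as:

```latex
% Shannon-McMillan (AEP) for a stationary ergodic source with entropy rate H:
-\frac{1}{n}\,\log p(X_1, X_2, \ldots, X_n) \;\longrightarrow\; H
\qquad \text{a.s. and in } L^{1}.
```

The paper's versions replace the linear index set $\{1,\ldots,n\}$ by the vertices of an infinite tree with uniformly bounded degree.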

A Probabilistic Analysis for Fatigue Cumulative Damage and Fatigue Life in CFRP Composites Containing a Circular Hole (원공을 가진 CFRP 복합재료의 피로누적손상 및 피로수명에 대한 확률적 해석)

  • 김정규;김도식
    • Transactions of the Korean Society of Mechanical Engineers
    • /
    • v.19 no.8
    • /
    • pp.1915-1926
    • /
    • 1995
  • The fatigue characteristics of 8-harness satin woven CFRP composites with a circular hole are experimentally investigated under constant-amplitude tension-tension loading. It is found that the fatigue damage accumulation behavior is highly random and history-independent, and that the cumulative fatigue damage is linearly related to the mean number of cycles to a specified damage state. From these results it follows that the fatigue characteristics of CFRP composites satisfy the basic assumptions of Markov chain theory, and that the parameters of the Markov chain model can be determined from the mean and variance of the fatigue lives alone. The distribution of cumulative fatigue damage predicted by the Markov chain model agrees well with the test results. For the fatigue life distribution, the Markov chain model achieves accuracy similar to that of the two-parameter Weibull distribution function.
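
A standard Markov chain fatigue model of this type is the stationary unit-jump ("B-model") chain, whose two parameters follow in closed form from the mean and variance of the fatigue lives, as the abstract states. A sketch under that assumption (not the paper's fitted values) is:

```python
import random

def fit_unit_jump_chain(mean_life, var_life):
    """Fit a stationary unit-jump Markov chain for fatigue damage.

    Each duty cycle the damage state advances by 1 with probability q or
    stays put with probability 1 - q; failure occurs at state b. The life
    is then a sum of b geometric variables with mean b/q and variance
    b(1-q)/q^2, so matching the sample mean m and variance v gives
    q = m / (m + v) and b = m * q.
    """
    q = mean_life / (mean_life + var_life)
    b = max(1, round(mean_life * q))   # number of damage states to failure
    return q, b

def simulate_life(q, b, rng=None):
    """Sample one fatigue life (cycles until the absorbing state b)."""
    rng = rng or random.Random()
    cycles, state = 0, 0
    while state < b:
        cycles += 1
        if rng.random() < q:
            state += 1
    return cycles
```

Because the chain is stationary and unit-jump, damage accumulation is history-independent, matching the experimental observation in the abstract.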

A Reliability Model of a Digital Switching System Using the Markov Process (Markov 과정을 이용한 디지탈 교환기의 신뢰도 모형)

  • Sin, Seong-Mun;Choe, Tae-Gu;Lee, Dae-Gi
    • ETRI Journal
    • /
    • v.5 no.2
    • /
    • pp.3-8
    • /
    • 1983
  • This paper derives a Markov model to calculate the reliability of the digital switching system being developed by KETRI. Using the failure states extracted from the system in the course of the modelling, we calculated the reliability of both the service grade and the function of the system. In particular, by including the repair rate in the model, we took full advantage of the Markov process and eased the calculation by reducing the number of states of the system.

  • PDF

Improved MCMC Simulation for Low-Dimensional Multi-Modal Distributions

  • Ji, Hyunwoong;Lee, Jaewook;Kim, Namhyoung
    • Management Science and Financial Engineering
    • /
    • v.19 no.2
    • /
    • pp.49-53
    • /
    • 2013
  • A Markov chain Monte Carlo (MCMC) sampling algorithm samples each new point near the latest sample because of the Markov property, which hampers sampling from multi-modal distributions: the corresponding chain often fails to explore the entire support of the target distribution. To overcome this problem, this paper applies a mode-switching scheme to conventional MCMC algorithms. The algorithm separates the reducible Markov chain into several mutually exclusive classes and uses the mode-switching scheme to increase the mixing rate. Simulation results illustrate the algorithm, with promising results.
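
One simple way to realize a mode-switching proposal, sketched below for a symmetric bimodal 1-D target (an illustrative construction, not the paper's algorithm), is to mix a local random-walk step with an occasional jump to the mirror point of the other mode; both proposals are symmetric, so the plain Metropolis acceptance ratio applies:

```python
import math
import random

def log_target(x):
    """Bimodal target: equal mixture of N(-3, 1) and N(3, 1), unnormalised."""
    return math.log(math.exp(-0.5 * (x - 3) ** 2) +
                    math.exp(-0.5 * (x + 3) ** 2))

def mh_mode_switch(n, step=0.5, p_switch=0.1, seed=0):
    """Metropolis sampler with an occasional mode-switching proposal.

    With probability p_switch, propose the mirror point -x (a jump between
    the two modes); otherwise take a local Gaussian random-walk step.
    """
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n):
        y = -x if rng.random() < p_switch else x + rng.gauss(0, step)
        # Symmetric proposals: accept with prob min(1, pi(y)/pi(x)).
        if math.log(rng.random() + 1e-300) < log_target(y) - log_target(x):
            x = y
        samples.append(x)
    return samples
```

Without the mirror move, a step size of 0.5 almost never carries the chain across the low-probability region near 0, which is exactly the mixing failure the abstract describes.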

On The Mathematical Structure of Markov Process and Markovian Sequential Decision Process (Markov 과정(過程)의 수리적(數理的) 구조(構造)와 그 축차결정과정(逐次決定過程))

  • Kim, Yu-Song
    • Journal of Korean Society for Quality Management
    • /
    • v.11 no.2
    • /
    • pp.2-9
    • /
    • 1983
  • This paper investigates the mathematical structure of the Markov process and the Markovian sequential decision process (the policy improvement iteration method), and analyzes the logic and behavioral characteristics of the mathematical model of the Markov process. First, regarding the mathematical structure of the Markov process, it classifies the forward and backward forms of the Chapman-Kolmogorov equation and of the Kolmogorov differential equation, and then surveys the logic of these equation systems and the questions of existence and uniqueness of their solutions. Second, for the Markovian sequential decision process, it distinguishes the discrete-time-parameter and continuous-time-parameter cases, and then explores the logical structure and behavioral characteristics of the value-determination operation and the policy-improvement routine.

  • PDF
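
The value-determination operation and policy-improvement routine mentioned in the abstract are the two halves of Howard's policy iteration. A minimal sketch for a finite discounted MDP (generic textbook form, with an illustrative two-state machine-replacement example below, not the paper's model) is:

```python
import numpy as np

def policy_iteration(P, R, gamma=0.9):
    """Howard's policy iteration for a finite discounted MDP.

    P[a][s, s']: transition probabilities under action a.
    R[a][s]:     expected one-step reward for action a in state s.
    Alternates value determination (solving the linear system for the
    current policy) with the policy-improvement routine until stable.
    """
    n_actions, n_states = len(P), P[0].shape[0]
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Value determination: v = R_pi + gamma * P_pi v.
        P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
        R_pi = np.array([R[policy[s]][s] for s in range(n_states)])
        v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)
        # Policy improvement: greedy one-step lookahead.
        q = np.array([R[a] + gamma * P[a] @ v for a in range(n_actions)])
        new_policy = q.argmax(axis=0)
        if np.array_equal(new_policy, policy):
            return policy, v
        policy = new_policy
```

For example, with states {good, worn} and actions {keep, replace}, the routine converges in a couple of iterations to "keep while good, replace when worn" whenever replacement is cheap relative to the lost reward of running worn.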