• Title/Summary/Keyword: Markov

Search Results: 2,411

Prediction of Future Land use Using Times Series Landsat Images Based on CA (Cellular Automata)-Markov Technique (시계열 Landsat 영상과 CA-Markov기법을 이용한 미래 토지이용 변화 예측)

  • Lee, Yong-Jun;Pack, Geun-Ae;Kim, Seong-Joon
    • Proceedings of the KSRS Conference
    • /
    • 2007.03a
    • /
    • pp.55-60
    • /
    • 2007
  • The purpose of this study is to evaluate temporal land cover change caused by the gradual urbanization of the Gyeongan-cheon watershed. Five land use maps were classified from Landsat TM satellite images (1987, 1991, 2001, 2004) by the maximum likelihood method, and their accuracy was examined with an error matrix and administrative district statistics. The study analyzes past land use patterns from the time-series Landsat images, predicts the 2004 land use with a CA-Markov technique combining CA (Cellular Automata) and a Markov process, and examines its appropriateness. Finally, land use maps for 2030 and 2060 are predicted by the CA-Markov model constructed from the classified images.

  • PDF
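The CA-Markov projection described in the abstract above rests on a standard Markov step: estimate a class-to-class transition matrix from two classified maps, then multiply the class-share vector forward one interval at a time. A minimal sketch of that step only (the cellular-automata spatial allocation is omitted, and the maps, class labels, and values below are hypothetical, not the paper's data):

```python
import numpy as np

def transition_matrix(map_t0, map_t1, n_classes):
    """Estimate a Markov transition matrix from two co-registered
    classified land use maps (2-D arrays of class labels)."""
    counts = np.zeros((n_classes, n_classes))
    for a, b in zip(map_t0.ravel(), map_t1.ravel()):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

def project_shares(shares, P, steps):
    """Project class area shares forward by repeated multiplication."""
    v = np.asarray(shares, dtype=float)
    for _ in range(steps):
        v = v @ P
    return v

# Hypothetical maps with classes 0=forest, 1=agriculture, 2=urban
m0 = np.array([[0, 0, 1], [0, 1, 1], [1, 2, 2]])
m1 = np.array([[0, 1, 1], [0, 1, 2], [2, 2, 2]])
P = transition_matrix(m0, m1, 3)
shares = np.bincount(m0.ravel(), minlength=3) / m0.size
future = project_shares(shares, P, 2)   # two more intervals ahead
```

Under continuing urbanization the urban row of `P` is near-absorbing, so the projected urban share grows monotonically; a full CA-Markov run would then allocate those shares spatially with neighborhood rules.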

A Study of Image Target Tracking Using ITS in an Occluding Environment (표적이 일시적으로 가려지는 환경에서 ITS 기법을 이용한 영상 표적 추적 알고리듬 연구)

  • Kim, Yong;Song, Taek-Lyul
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.19 no.4
    • /
    • pp.306-314
    • /
    • 2013
  • Automatic tracking in a cluttered environment requires the initiation and maintenance of tracks, and the track existence probability of a true track is maintained by the Markov Chain Two model of target existence propagation. Unlike the Markov Chain One model, the Markov Chain Two model comprises three hypotheses about the target existence event: the target exists and is detectable; the target exists but is non-detectable because of occlusion; and the target does not exist and is therefore non-detectable. In this paper we present a multi-scan single-target tracking algorithm based on target existence, called the Integrated Track Splitting (ITS) algorithm with the Markov Chain Two model, for imaging sensors.
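The three-hypothesis propagation described above reduces to a matrix-vector product per scan. A minimal sketch of the prediction step only (the transition probabilities are hypothetical, and the measurement-update half of ITS is omitted):

```python
import numpy as np

# Hypothetical transition matrix over the three target existence hypotheses:
# 0: exists and detectable, 1: exists but occluded (non-detectable),
# 2: does not exist (non-detectable).
PI = np.array([
    [0.90, 0.08, 0.02],
    [0.30, 0.65, 0.05],
    [0.00, 0.00, 1.00],   # a non-existing target stays non-existing
])

def propagate_existence(p, steps=1):
    """Markov Chain Two prediction of the target existence probabilities
    before the next measurement update."""
    p = np.asarray(p, dtype=float)
    for _ in range(steps):
        p = p @ PI
    return p

p0 = np.array([0.90, 0.05, 0.05])
p1 = propagate_existence(p0)
```

Because the non-existence state is absorbing in this sketch, its probability can only grow between measurement updates; detections in the update step are what pull it back down for a true track.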

Codebook design for subspace distribution clustering hidden Markov model (Subspace distribution clustering hidden Markov model을 위한 codebook design)

  • Cho, Young-Kyu;Yook, Dong-Suk
    • Proceedings of the KSPS conference
    • /
    • 2005.04a
    • /
    • pp.87-90
    • /
    • 2005
  • Today's state-of-the-art speech recognition systems typically use continuous distribution hidden Markov models with mixtures of Gaussian distributions. To obtain higher recognition accuracy, these models typically require a huge number of Gaussian distributions, so such systems need too much memory and are too slow for large applications. Many approaches have been proposed for the design of compact acoustic models; one of them is the subspace distribution clustering hidden Markov model (SDCHMM), which represents the original full-space distributions as combinations of a small number of subspace distribution codebooks. How to build the codebook is therefore an important issue in this approach. In this paper, we report experimental results on various quantization methods for building more accurate models.

  • PDF
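The codebook design discussed above amounts to splitting each Gaussian's parameter vector into subspaces (streams) and quantizing each stream independently. A minimal sketch using plain k-means as the quantizer (the dimensions, stream count, codebook size, and data are all hypothetical; the paper compares more refined quantization methods than this):

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's algorithm, used here as the subspace quantizer."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):          # guard against empty clusters
                centers[j] = X[labels == j].mean(0)
    return centers, labels

def build_subspace_codebooks(means, n_streams, k):
    """Split each Gaussian mean vector into n_streams subspaces and
    quantize each subspace independently, as in SDCHMM codebook design."""
    streams = np.split(means, n_streams, axis=1)
    return [kmeans(s, k) for s in streams]

# Hypothetical set of 100 Gaussian means of dimension 8, split into
# 4 streams of dimension 2, each with an 8-entry codebook
means = np.random.default_rng(1).normal(size=(100, 8))
codebooks = build_subspace_codebooks(means, n_streams=4, k=8)
```

Each original Gaussian is then stored as 4 small codebook indices instead of 8 floats, which is where the memory saving comes from.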

Application Markov State Model for the RCM of Combustion Turbine Generating Unit (Markov State Model을 이용한 복합화력 발전설비의 최적의 유지보수계획 수립)

  • Lee, Seung-Hyuk;Shin, Jun-Seok;Kim, Jin-O
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.56 no.2
    • /
    • pp.248-253
    • /
    • 2007
  • Traditional time-based preventive maintenance uses a constant maintenance interval over the equipment life. To account for the economic aspect, preventive maintenance is instead scheduled by RCM (Reliability-Centered Maintenance) evaluation, and a Markov state model is utilized to capture the stochastic state in RCM. In this paper, a Markov state model that can be used for the scheduling and optimization of maintenance is presented; the deterioration process of the system condition is modeled as a Markov chain. In the case study, the RCM simulation results are applied to real historical data of combustion turbine generating units in the Korean power system.
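The deterioration process described above can be sketched as a discrete-time Markov chain whose stationary distribution gives the long-run fraction of time spent in each condition state. All states and transition probabilities below are hypothetical, not the paper's fitted values:

```python
import numpy as np

def stationary(P):
    """Stationary distribution of an ergodic Markov chain via the
    eigenvector of P^T associated with eigenvalue 1."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    return pi / pi.sum()

# States: 0 new, 1 minor deterioration, 2 major deterioration, 3 failed.
# Preventive maintenance returns the unit to state 0; corrective repair
# follows failure.
P = np.array([
    [0.90, 0.10, 0.00, 0.00],
    [0.05, 0.80, 0.15, 0.00],   # 0.05 = preventive maintenance from state 1
    [0.20, 0.00, 0.60, 0.20],   # heavier maintenance rate from the worse state
    [1.00, 0.00, 0.00, 0.00],   # corrective repair after failure
])
pi = stationary(P)
availability = 1.0 - pi[3]      # long-run fraction of time not failed
```

Scheduling then becomes a trade-off: raising the preventive-maintenance probabilities shrinks `pi[3]` (forced outages) at the cost of more planned interventions, which is what the RCM evaluation weighs economically.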

Stochastic convexity in markov additive processes (마코프 누적 프로세스에서의 확률적 콘벡스성)

  • 윤복식
    • Proceedings of the Korean Operations and Management Science Society Conference
    • /
    • 1991.10a
    • /
    • pp.147-159
    • /
    • 1991
  • Stochastic convexity (concavity) of a stochastic process is a very useful concept for various stochastic optimization problems. In this study we first establish the stochastic convexity of a certain class of Markov additive processes through a probabilistic construction based on the sample-path approach. A Markov additive process is obtained by integrating a functional of the underlying Markov process with respect to time, and its stochastic convexity can be utilized to provide efficient methods for the optimal design or optimal operation schedule of a wide range of stochastic systems. We also clarify the conditions for stochastic monotonicity of the Markov process, which is required for stochastic convexity of the Markov additive process. This result shows that stochastic convexity can be used for the analysis of probabilistic models based on birth-and-death processes, which have a very wide application area. Finally, we demonstrate the validity and usefulness of the theoretical results by developing efficient methods for optimal replacement scheduling based on the stochastic convexity property.

  • PDF

Equivalent Transformations of Undiscounted Nonhomogeneous Markov Decision Processes

  • Park, Yun-Sun
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.17 no.2
    • /
    • pp.131-144
    • /
    • 1992
  • Even though nonhomogeneous Markov Decision Processes subsume homogeneous ones and are more practical in the real world, few results exist for them. In this paper we address the nonhomogeneous Markov Decision Process with the objective of maximizing the average reward. Extending the work of Ross [17] on the homogeneous case and adopting the result of Bean and Smith [3] for the discounted deterministic problem, we first transform the original problem into a discounted nonhomogeneous Markov Decision Process, and then into a discounted deterministic problem. This approach not only shows the interrelationships between the various problems but also attacks the solution of the undiscounted nonhomogeneous Markov Decision Process.

  • PDF

An Improved Reinforcement Learning Technique for Mission Completion (임무수행을 위한 개선된 강화학습 방법)

  • 권우영;이상훈;서일홍
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.52 no.9
    • /
    • pp.533-539
    • /
    • 2003
  • Reinforcement learning (RL) has been widely used as the learning mechanism of artificial life systems. However, RL usually suffers from slow convergence to the optimal state-action sequence or sequence of stimulus-response (SR) behaviors, and may not work correctly in non-Markov processes. In this paper, first, to cope with the slow-convergence problem, state-action pairs considered disturbances for the optimal sequence are eliminated from long-term memory (LTM), where such disturbances are found by a shortest-path-finding algorithm. This process is shown to give the system an enhanced learning speed. Second, to partly solve the non-Markov problem, a stimulus frequently met during the search process is classified as a sequential percept for a non-Markov hidden state, so that a correct behavior for the non-Markov hidden state can be learned as in a Markov environment. To show the validity of the proposed learning techniques, several simulation results are illustrated.
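The baseline that the paper improves on is ordinary tabular SR learning. A minimal Q-learning sketch on a toy chain task illustrates that baseline; the environment, rates, and seed are hypothetical, and the paper's LTM-pruning and sequential-percept mechanisms are not reproduced here:

```python
import random

# Tabular Q-learning on a 5-state chain: start at state 0, goal at state 4,
# actions move left (-1) or right (+1), reward 1 only on reaching the goal.
random.seed(0)
n_states, actions = 5, (-1, +1)
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}

for _ in range(500):                       # episodes
    s = 0
    while s != 4:
        if random.random() < 0.2:          # epsilon-greedy exploration
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == 4 else 0.0
        # standard Q-learning update, learning rate 0.5, discount 0.9
        Q[(s, a)] += 0.5 * (r + 0.9 * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
        s = s2

# the learned greedy policy should move right from every non-goal state
policy = {s: max(actions, key=lambda act: Q[(s, act)]) for s in range(4)}
```

The paper's first contribution would then operate on top of a learner like this, pruning state-action pairs that a shortest-path search marks as off the optimal sequence.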

A Study on Markov Chains Applied to informetrics (마코프모형의 계량정보학적 응용연구)

  • Moon, Kyung-Hwa
    • Journal of Information Management
    • /
    • v.30 no.2
    • /
    • pp.31-52
    • /
    • 1999
  • This paper studies two experimental cases that utilize the stochastic theory of Markov chains, which is used for forecasting the future, and analyzes recent research trends. Since the study of Markov chains has not been applied to informetrics to a significant degree in Korea, the paper also proposes that further study of Markov chains, and its activation, is necessary.

  • PDF

Enhanced Markov-Difference Based Power Consumption Prediction for Smart Grids

  • Le, Yiwen;He, Jinghan
    • Journal of Electrical Engineering and Technology
    • /
    • v.12 no.3
    • /
    • pp.1053-1063
    • /
    • 2017
  • Power prediction is critical to improving power efficiency in Smart Grids, and the Markov chain provides a useful tool for it. With careful investigation of practical power datasets, we find an interesting phenomenon: the stochastic properties of practical power datasets do not follow Markov features, and this mismatch affects the prediction accuracy when Markov prediction methods are used directly. In this paper, we propose a spatial-transform-based data processing to alleviate this inconsistency and, furthermore, an enhanced power prediction method, named Spatial Mapping Markov-Difference (SMMD), to guarantee the prediction accuracy. In particular, SMMD adopts a second prediction adjustment based on the differenced data to reduce the stochastic error. Experimental results validate that the proposed SMMD improves the prediction accuracy with respect to state-of-the-art solutions.
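The difference-based Markov idea underlying methods like the one above can be sketched generically: quantize the first differences of the load series into states, estimate a transition matrix over those states, and predict the next value as the last value plus the expected next difference. This is a generic sketch under those assumptions, not the paper's SMMD (the spatial mapping and second adjustment are omitted), and the data are synthetic:

```python
import numpy as np

def markov_difference_predict(series, n_states=5):
    """One-step-ahead prediction from a Markov chain over quantized
    first differences of the series."""
    diffs = np.diff(series)
    # state boundaries at the interior quantiles of the differences
    edges = np.quantile(diffs, np.linspace(0, 1, n_states + 1)[1:-1])
    states = np.digitize(diffs, edges)          # labels 0..n_states-1
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    rows = counts.sum(1, keepdims=True)
    P = np.divide(counts, rows,
                  out=np.full_like(counts, 1.0 / n_states), where=rows > 0)
    # expected difference in each state = mean of the diffs assigned to it
    state_mean = np.array([diffs[states == s].mean() if np.any(states == s)
                           else 0.0 for s in range(n_states)])
    next_diff = P[states[-1]] @ state_mean
    return series[-1] + next_diff

rng = np.random.default_rng(0)
load = 100 + np.cumsum(rng.normal(0.5, 1.0, size=200))  # synthetic load data
pred = markov_difference_predict(load)
```

Working on differences rather than raw levels is what makes the chain's state space small and roughly stationary, which is the property the paper's spatial transform is designed to strengthen.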

Performance Evaluation of the WiMAX Network Based on Combining the 2D Markov Chain and MMPP Traffic Model

  • Saha, Tonmoy;Shufean, Md. Abu;Alam, Mahbubul;Islam, Md. Imdadul
    • Journal of Information Processing Systems
    • /
    • v.7 no.4
    • /
    • pp.653-678
    • /
    • 2011
  • WiMAX is intended for fourth-generation wireless mobile communications, where a group of users is provided with a connection and a fixed-length queue. In the present literature, the traffic of such a network is analyzed based on the generator matrix of the Markov Arrival Process (MAP). In this paper, a simple analytical technique for a two-dimensional Markov chain is used to obtain the trajectory of the network congestion as a function of a traffic parameter. Finally, a two-state phase-dependent arrival process is considered to evaluate the state probabilities. The entire analysis is kept independent of the modulation and coding schemes.
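The two-dimensional chain mentioned above (queue length crossed with a two-state phase-dependent arrival process) can be sketched by assembling the generator of the joint chain and solving for its stationary distribution. The capacity and all rates below are hypothetical, chosen only to make the sketch concrete:

```python
import numpy as np

K = 10                       # queue capacity (hypothetical)
lam = [0.6, 1.4]             # arrival rate in phase 0 / phase 1
mu = 1.0                     # service rate
r01, r10 = 0.1, 0.2          # phase switching rates

def idx(q, ph):
    """Flatten (queue length, arrival phase) into one state index."""
    return 2 * q + ph

# build the generator of the joint (queue, phase) chain
n = 2 * (K + 1)
Q = np.zeros((n, n))
for q in range(K + 1):
    for ph in range(2):
        i = idx(q, ph)
        if q < K:
            Q[i, idx(q + 1, ph)] = lam[ph]        # arrival
        if q > 0:
            Q[i, idx(q - 1, ph)] = mu             # service completion
        Q[i, idx(q, 1 - ph)] = r01 if ph == 0 else r10  # phase change
        Q[i, i] = -Q[i].sum()                     # diagonal balances the row

# stationary distribution: solve pi Q = 0 with the normalization constraint
A = np.vstack([Q.T, np.ones(n)])
b = np.zeros(n + 1); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
blocking = pi[idx(K, 0)] + pi[idx(K, 1)]   # probability the queue is full
```

Sweeping a traffic parameter such as `lam[1]` and re-solving traces out the congestion trajectory the abstract refers to.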