• Title/Summary/Keyword: Temporal-Difference Learning


Barycentric Approximator for Reinforcement Learning Control

  • Whang Cho
    • International Journal of Precision Engineering and Manufacturing
    • /
    • v.3 no.1
    • /
    • pp.33-42
    • /
    • 2002
  • Recently, various experiments applying the reinforcement learning method to the self-learning intelligent control of continuous dynamic systems have been reported in the machine learning research community. The reports show mixed results, with some successes and some failures, and indicate that the success of reinforcement learning in the intelligent control of continuous systems depends on the ability to combine a proper function approximation method with temporal difference methods such as Q-learning and value iteration. One of the difficulties in using a function approximation method together with a temporal difference method is the absence of a guarantee that the algorithm converges. This paper provides a proof of convergence for a particular function approximation method based on the "barycentric interpolator", which is known to be computationally more efficient than multilinear interpolation.
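
One way to read "barycentric interpolator plus temporal difference" is a piecewise-linear value function whose vertex values receive the TD error in proportion to the barycentric weights of the visited state. The one-dimensional grid, learning rate, and discount factor below are illustrative assumptions, not the authors' setup:

```python
import numpy as np

# Hypothetical 1-D state space discretized into grid vertices; the value
# function is stored only at the vertices and read out by barycentric
# (here: piecewise-linear) interpolation between the two enclosing vertices.
grid = np.linspace(0.0, 1.0, 11)   # vertex locations (assumption)
values = np.zeros_like(grid)       # V at each vertex

def barycentric_weights(s):
    """Return the two enclosing vertex indices and their interpolation weights."""
    i = min(max(np.searchsorted(grid, s) - 1, 0), len(grid) - 2)
    t = (s - grid[i]) / (grid[i + 1] - grid[i])
    return (i, i + 1), (1.0 - t, t)

def value(s):
    (i, j), (wi, wj) = barycentric_weights(s)
    return wi * values[i] + wj * values[j]

def td0_update(s, r, s_next, alpha=0.1, gamma=0.95):
    """TD(0): push the TD error back onto the vertices, weighted by the
    barycentric coordinates of the visited state."""
    delta = r + gamma * value(s_next) - value(s)
    (i, j), (wi, wj) = barycentric_weights(s)
    values[i] += alpha * wi * delta
    values[j] += alpha * wj * delta
```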

Multi Colony Intensification·Diversification Interaction Ant Reinforcement Learning Using Temporal Difference Learning (Temporal Difference 학습을 이용한 다중 집단 강화·다양화 상호작용 개미 강화학습)

  • Lee Seung-Gwan
    • The Journal of the Korea Contents Association
    • /
    • v.5 no.5
    • /
    • pp.1-9
    • /
    • 2005
  • In this paper, we suggest a multi colony interaction ant reinforcement learning model. This method is a hybrid of multi colony interaction by an elite strategy and reinforcement learning that applies Temporal Difference (TD) learning to Ant-Q learning. The proposed model consists of several independent AS colonies, and the search proceeds through interaction between the colonies according to the elite strategy (intensification and diversification strategies). The intensification strategy enables the selection of good paths by using the heuristic information of other agent colonies; through positive interaction between the colonies, agents are guided toward edges that are visited with high frequency. The diversification strategy lets agents escape from such frequently visited edges through negative interaction based on the search information of other agent colonies. Through these strategies, we found that the proposed reinforcement learning method converges to the optimal solution faster than the original ACS and Ant-Q.
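
The TD-style edge-value update that Ant-Q borrows from Q-learning can be sketched as follows. The edge-value table, parameter values, and reward handling are illustrative assumptions; the multi-colony elite interaction (intensification/diversification) described in the abstract is not modeled here:

```python
import numpy as np

# Ant-Q keeps a value AQ(r, s) for each directed edge (city r -> city s).
n_cities = 5                                # illustrative problem size
AQ = np.full((n_cities, n_cities), 0.1)     # initial edge values (assumption)

def ant_q_update(r, s, delayed_reward, allowed_next, alpha=0.1, gamma=0.3):
    """Q-learning-style (temporal-difference) update of the edge value AQ(r, s):
    blend the old value with the delayed reward plus the discounted best value
    reachable from the next city."""
    best_next = max((AQ[s, z] for z in allowed_next), default=0.0)
    AQ[r, s] = (1 - alpha) * AQ[r, s] + alpha * (delayed_reward + gamma * best_next)
```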


Genetic Algorithm based Neural Network and Temporal Difference Learning: Janggi Board Game (유전자기반 신경회로망과 Temporal Difference학습: 장기보드게임)

  • 박인규
    • Proceedings of the Korea Multimedia Society Conference
    • /
    • 2002.05c
    • /
    • pp.308-314
    • /
    • 2002
  • This paper proposes a method for learning strategies for a two-player board game using a genetic-algorithm-based back-propagation neural network and a Temporal Difference learning algorithm. The learning process consists of initial learning by back-propagation, followed by fine-tuning with a genetic algorithm to overcome the drawback of local minima. The system is composed of a part responsible for search and a part that generates piece moves. The move-generation part is updated according to the board state, and the search kernel, based on αβ search, learns a good evaluation function for the game by combining TD learning with a genetic-algorithm-based back-propagation neural network whose weights are optimized by the genetic algorithm. In general, once sufficient training guarantees the accuracy of the evaluation function, the winning rate was found to be proportional to the amount of learning.
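
The genetic-algorithm fine-tuning stage described above (after back-propagation pre-training) can be sketched roughly as below. The linear evaluation function and the dummy fitness score are placeholders so the sketch runs standalone; in the paper the fitness would come from self-play game results and the evaluator would be a neural network:

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate_position(weights, features):
    """Hypothetical linear evaluation; only a stand-in for the paper's network."""
    return float(np.tanh(weights @ features))

def fitness(weights, games=20):
    """Placeholder fitness: a dummy score over random feature vectors."""
    feats = rng.normal(size=(games, weights.size))
    return float(np.mean([evaluate_position(weights, f) for f in feats]))

def genetic_fine_tune(initial_weights, pop_size=20, generations=30,
                      sigma=0.05, elite=4):
    """Elitist genetic algorithm that fine-tunes pre-trained evaluation weights
    to escape local minima left by back-propagation."""
    population = [initial_weights + rng.normal(scale=sigma, size=initial_weights.shape)
                  for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:elite]                               # keep the elite
        children = []
        while len(children) < pop_size - elite:
            a, b = rng.choice(elite, size=2, replace=False)
            mask = rng.random(initial_weights.size) < 0.5
            child = np.where(mask, parents[a], parents[b])     # uniform crossover
            children.append(child + rng.normal(scale=sigma, size=child.shape))  # mutate
        population = parents + children
    return max(population, key=fitness)
```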


Minimize Order Picking Time through Relocation of Products in Warehouse Based on Reinforcement Learning (물품 출고 시간 최소화를 위한 강화학습 기반 적재창고 내 물품 재배치)

  • Kim, Yeojin;Kim, Geuntae;Lee, Jonghwan
    • Journal of the Semiconductor & Display Technology
    • /
    • v.21 no.2
    • /
    • pp.90-94
    • /
    • 2022
  • In order to minimize picking time, products should be located close to the exit at the time they are released from the warehouse. Currently, the warehouse determines the loading location based on the requirement rank of the products, that is, the frequency of arrival and departure. Items with lower requirement ranks are loaded far from the exit, and items with higher requirement ranks are loaded closer to the exit. However, a product loaded far from the exit because of its low requirement rank may nevertheless have to be delivered earlier than products located near the exit; in this case, the transit time increases when the product is released. In order to solve this problem, we use the idle time of the stocker in the warehouse to rearrange the products according to the order of delivery time. A temporal difference learning method with Q-learning control, one type of reinforcement learning, was used when relocating the items. The results of rearranging the products with the reinforcement learning method were compared and analyzed against the results of the existing method.
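
A minimal tabular sketch of the Q-learning control loop mentioned above. The state encoding (which slot holds which product), the relocation actions, and the reward (e.g., negative picking time) are assumptions, not the paper's exact formulation:

```python
import random
from collections import defaultdict

Q = defaultdict(float)            # Q[(state, action)] for relocation decisions
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def choose_action(state, actions):
    """Epsilon-greedy choice of a relocation move during stocker idle time."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state, next_actions):
    """One temporal-difference (Q-learning) update after executing a relocation."""
    best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```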

Max-Mean N-step Temporal-Difference Learning Using Multi-Step Return (멀티-스텝 누적 보상을 활용한 Max-Mean N-Step 시간차 학습)

  • Hwang, Gyu-Young;Kim, Ju-Bong;Heo, Joo-Seong;Han, Youn-Hee
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.10 no.5
    • /
    • pp.155-162
    • /
    • 2021
  • n-step TD learning is a combination of the Monte Carlo method and one-step TD learning. If an appropriate n is selected, n-step TD learning is known to perform better than both the Monte Carlo method and 1-step TD learning, but it is difficult to select the best value of n. In order to resolve this difficulty, in this paper, using the facts that overestimation of Q can improve the performance of early learning and that all n-step returns have similar values when Q ≈ Q*, we propose a new learning target composed of the maximum and the mean of all k-step returns for 1 ≤ k ≤ n. Finally, in OpenAI Gym's Atari game environment, we compare the proposed algorithm with n-step TD learning and show that it is superior to the n-step TD learning algorithm.
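
The proposed target built from all k-step returns can be sketched as follows; the equal-weight average of the maximum and the mean used here is an assumption about how the two statistics are combined:

```python
import numpy as np

def k_step_returns(rewards, next_q_values, gamma=0.99):
    """All k-step returns G_k = r_t + ... + gamma^(k-1) r_(t+k-1)
    + gamma^k * max_a Q(s_(t+k), a) for 1 <= k <= n, given the n rewards
    after time t and the bootstrapped values max_a Q(s_(t+k), a)."""
    returns, acc = [], 0.0
    for k, r in enumerate(rewards, start=1):
        acc += (gamma ** (k - 1)) * r
        returns.append(acc + (gamma ** k) * next_q_values[k - 1])
    return np.array(returns)

def max_mean_target(rewards, next_q_values, gamma=0.99):
    """Learning target built from both the maximum and the mean of all
    k-step returns (equal-weight combination is an assumption)."""
    g = k_step_returns(rewards, next_q_values, gamma)
    return 0.5 * (g.max() + g.mean())
```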

Capacitated Fab Scheduling Approximation using Average Reward TD(λ) Learning based on System Feature Functions (시스템 특성함수 기반 평균보상 TD(λ) 학습을 통한 유한용량 Fab 스케줄링 근사화)

  • Choi, Jin-Young
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.34 no.4
    • /
    • pp.189-196
    • /
    • 2011
  • In this paper, we propose a logical control-based actor-critic algorithm as an efficient approach to approximating the capacitated fab scheduling problem. We apply the average-reward temporal-difference learning method to estimate the relative value functions of system states, while avoiding deadlock situations by means of Banker's algorithm. We consider the Intel mini-fab re-entrant line for the evaluation of the suggested algorithm and perform a numerical experiment by generating sample system configurations randomly. We show that the suggested method performs prominently well compared to other well-known heuristics.
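
The average-reward TD(λ) critic over system feature functions can be sketched as below. The feature map, step sizes, and trace parameter are illustrative assumptions; the actor (scheduling policy) and the Banker's-algorithm deadlock avoidance from the paper are not modeled here:

```python
import numpy as np

n_features = 8
w = np.zeros(n_features)       # weights of the relative value function
z = np.zeros(n_features)       # eligibility trace over feature functions
rho = 0.0                      # running estimate of the average reward

def avg_reward_td_lambda_step(phi_s, reward, phi_next,
                              alpha=0.05, beta=0.01, lam=0.7):
    """One critic update: the TD error uses the average-reward formulation
    (no discounting), and the error is assigned to past features via the trace."""
    global rho, w, z
    delta = reward - rho + phi_next @ w - phi_s @ w   # average-reward TD error
    rho += beta * delta                               # track the average reward
    z = lam * z + phi_s                               # decay and add current features
    w += alpha * delta * z                            # trace-weighted weight update
```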

The Analysis of Academic Achievement based on Spatio-Temporal Data Related to e-Learning Patterns of University e-Learning Learners (대학 이러닝 학습자들의 학습 시·공간 패턴에 따른 학업성취도 차이 분석)

  • Lee, Hae-Deum;Nam, Min-Woo
    • Journal of Convergence for Information Technology
    • /
    • v.8 no.4
    • /
    • pp.247-253
    • /
    • 2018
  • This study was designed to analyze differences in attendance and academic achievement based on spatio-temporal data related to the e-Learning patterns of university e-Learning learners. The study collected e-Learning data from 68 e-Learning classes and 13,611 learners over 3 years. The collected data were analyzed by t-test and two-way ANOVA. The major findings were as follows. Firstly, e-Learning learners studying in school scored higher in both attendance and academic achievement than those studying outside school, and the difference in academic achievement was statistically significant. Secondly, attendance and academic achievement by time of day were highest for learners who studied mainly in the morning, followed by those who studied in the afternoon and those who studied at night, and the differences were statistically significant. Lastly, learners who studied on weekdays showed higher attendance and academic achievement than those who studied on weekends, and both differences were statistically significant.

Strategy of Reinforcement Learning in Artificial Life (인공생명의 연구에 있어서 강화학습의 전략)

  • 심귀보;박창현
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2001.05a
    • /
    • pp.257-260
    • /
    • 2001
  • In general, machine learning can be classified, according to the presence or absence of a teacher signal, into supervised learning, unsupervised learning, and reinforcement learning with an indirect teacher. The term reinforcement learning originally came from studies of animal learning in experimental psychology, but it has recently attracted much attention in engineering, especially in the field of artificial life, as a learning algorithm for neural networks. Reinforcement learning seeks state-action rules or action-generation strategies that maximize the reward for the actions of a controller or agent. This paper introduces the reinforcement learning methods and research trends that have recently been studied intensively, and emphasizes in particular the importance of reinforcement learning in artificial life research.


The Development of Janggi Board Game Using Backpropagation Neural Network and Q Learning Algorithm (역전파 신경회로망과 Q학습을 이용한 장기보드게임 개발)

  • 황상문;박인규;백덕수;진달복
    • Journal of the Institute of Electronics Engineers of Korea TE
    • /
    • v.39 no.1
    • /
    • pp.83-90
    • /
    • 2002
  • This paper proposes a strategy learning method based on the fusion of a back-propagation neural network and the Q-learning algorithm for the two-person, deterministic Janggi board game. The learning process is accomplished simply through the two sides playing against each other. The system consists of two parts: a move generator and a search kernel. The move generator produces the moves on the board, while the search kernel combines back-propagation and Q-learning with an αβ search algorithm in order to learn the evaluation function. While temporal difference learning learns from the discrepancy between adjacent rewards, Q-learning acquires optimal policies, even without prior knowledge of the effects of its moves on the environment, by learning the evaluation function for the augmented rewards. Based on the evaluation function obtained over many games during the learning procedure, it was shown that the winning percentage is, in general, linearly proportional to the amount of learning.
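
The combination of αβ search over a learned evaluation function with a TD-style weight update can be sketched roughly as follows. The feature extractor, move generator, and board-application function are placeholders (assumptions); the paper's back-propagation network is replaced by a linear evaluation for brevity:

```python
import numpy as np

w = np.zeros(16)                                  # evaluation weights (assumed size)

def features(board):
    return np.asarray(board, dtype=float)         # placeholder feature map

def evaluate(board):
    return float(features(board) @ w)             # learned evaluation of a position

def alpha_beta(board, depth, alpha, beta, maximizing, moves_fn, apply_fn):
    """Depth-limited minimax with alpha-beta pruning over the learned evaluation."""
    moves = moves_fn(board)
    if depth == 0 or not moves:
        return evaluate(board)
    if maximizing:
        for m in moves:
            alpha = max(alpha, alpha_beta(apply_fn(board, m), depth - 1,
                                          alpha, beta, False, moves_fn, apply_fn))
            if alpha >= beta:
                break                              # beta cutoff
        return alpha
    for m in moves:
        beta = min(beta, alpha_beta(apply_fn(board, m), depth - 1,
                                    alpha, beta, True, moves_fn, apply_fn))
        if beta <= alpha:
            break                                  # alpha cutoff
    return beta

def td_update(board, reward, next_board, lr=0.01, gamma=0.99):
    """Move the evaluation of the current position toward reward + gamma * V(next)."""
    global w
    delta = reward + gamma * evaluate(next_board) - evaluate(board)
    w += lr * delta * features(board)
```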

Function Approximation for Reinforcement Learning using Fuzzy Clustering (퍼지 클러스터링을 이용한 강화학습의 함수근사)

  • Lee, Young-Ah;Jung, Kyoung-Sook;Chung, Tae-Choong
    • The KIPS Transactions:PartB
    • /
    • v.10B no.6
    • /
    • pp.587-592
    • /
    • 2003
  • Many real-world control problems have continuous states and actions. When the state space is continuous, reinforcement learning problems involve a very large state space and suffer from the memory and time required to learn all individual state-action values. These problems need function approximators that can reason about actions for new states from previously experienced states. We introduce Fuzzy Q-Map, a function approximator for 1-step Q-learning based on fuzzy clustering. Fuzzy Q-Map groups similar states, and it chooses an action and looks up Q values according to the membership degree. The centroid and Q value of the winner cluster are updated using the membership degree and the TD (temporal difference) error. We applied Fuzzy Q-Map to the mountain car problem and obtained an accelerated learning speed.
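
A rough sketch of a fuzzy-clustering function approximator for 1-step Q-learning in the spirit of the Fuzzy Q-Map described above. The Gaussian membership function, number of clusters, and exact update rules are illustrative assumptions:

```python
import numpy as np

n_clusters, n_actions, state_dim = 10, 3, 2
rng = np.random.default_rng(0)
centroids = rng.uniform(-1.0, 1.0, size=(n_clusters, state_dim))
Q = np.zeros((n_clusters, n_actions))             # Q value per cluster and action

def memberships(state, width=0.5):
    """Membership degree of the state in each cluster (normalized Gaussian)."""
    d2 = np.sum((centroids - state) ** 2, axis=1)
    m = np.exp(-d2 / (2 * width ** 2))
    return m / m.sum()

def q_values(state):
    """Q values of a state: membership-weighted mix of the cluster Q values."""
    return memberships(state) @ Q

def update(state, action, reward, next_state, alpha=0.1, gamma=0.99, eta=0.05):
    """Update the winner cluster's Q value and centroid in proportion to its
    membership degree and the TD error."""
    m = memberships(state)
    winner = int(np.argmax(m))
    td_error = reward + gamma * q_values(next_state).max() - Q[winner, action]
    Q[winner, action] += alpha * m[winner] * td_error
    centroids[winner] += eta * m[winner] * (state - centroids[winner])
```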