• Title/Abstract/Keyword: Reinforcement learning

785 search results (processing time: 0.027 s)

메타강화학습을 이용한 수중로봇 매니퓰레이터 제어 (Control for Manipulator of an Underwater Robot Using Meta Reinforcement Learning)

  • 문지윤;문장혁;배성훈
    • 한국전자통신학회논문지 / Vol. 16, No. 1 / pp. 95-100 / 2021
  • This paper proposes a model-based meta reinforcement learning method for controlling the manipulator of an underwater construction robot. Model-based meta reinforcement learning quickly updates its model using recent experience from the real application. The model is then passed to a model predictive controller, which computes the manipulator's control inputs to reach the target position. A simulation environment for model-based meta reinforcement learning was built using MuJoCo and Gazebo, and the proposed method was validated under the model uncertainty of the real control environment of an underwater construction robot.

Reinforcement Learning Speedup Method Using Q-value Initialization

  • 최정환
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2001년도 하계종합학술대회 논문집(3) / pp. 13-16 / 2001
  • In reinforcement learning, Q-learning converges quite slowly to a good policy, because searching for the goal state takes a very long time in a large stochastic domain. I therefore propose a speedup method using Q-value initialization for model-free reinforcement learning. The speedup method learns a naive model of a domain and builds boundaries around the goal state. Using these boundaries, it assigns initial Q-values to the state-action pairs and runs Q-learning from those initial values. The initial Q-values guide the agent to the goal state in the early stages of learning, so that Q-learning updates Q-values efficiently. It therefore saves the exploration time spent searching for the goal state and performs better than plain Q-learning. I present the Speedup Q-learning algorithm to implement this method. The algorithm is evaluated in a grid-world domain and compared to Q-learning.

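The Q-value initialization idea in the abstract above can be sketched as follows; the grid world, boundary radius, and distance-based initial values are illustrative assumptions, not details from the paper:

```python
import random

random.seed(0)

# Sketch: seed Q-values inside a "boundary" around the goal before
# running standard tabular Q-learning on a small grid world.
N = 5                                   # grid is N x N, goal at (N-1, N-1)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]
GOAL = (N - 1, N - 1)

def initial_q(state, action):
    # State-action pairs leading into the boundary get optimistic values,
    # which pull the agent toward the goal early in learning.
    nx, ny = state[0] + action[0], state[1] + action[1]
    dist = abs(GOAL[0] - nx) + abs(GOAL[1] - ny)    # Manhattan distance
    return 1.0 / (1 + dist) if dist <= 2 else 0.0

Q = {((x, y), a): initial_q((x, y), a)
     for x in range(N) for y in range(N) for a in ACTIONS}

def step(state, action):
    # Deterministic grid transitions, clipped at the walls.
    x = min(N - 1, max(0, state[0] + action[0]))
    y = min(N - 1, max(0, state[1] + action[1]))
    s2 = (x, y)
    return s2, (1.0 if s2 == GOAL else 0.0)

alpha, gamma, eps = 0.5, 0.9, 0.1
for _ in range(500):                    # episodes of epsilon-greedy Q-learning
    s = (0, 0)
    while s != GOAL:
        a = (random.choice(ACTIONS) if random.random() < eps
             else max(ACTIONS, key=lambda act: Q[(s, act)]))
        s2, r = step(s, a)
        best = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best - Q[(s, a)])
        s = s2
```

After training, the greedy policy derived from `Q` walks from the start to the goal; the initial values only shape early exploration and are overwritten by the updates.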

강화학습법을 이용한 유역통합 저수지군 운영 (Basin-Wide Multi-Reservoir Operation Using Reinforcement Learning)

  • 이진희;심명필
    • 한국수자원학회:학술대회논문집 / 한국수자원학회 2006년도 학술발표회 논문집 / pp. 354-359 / 2006
  • The analysis of large-scale water resources systems is often complicated by the presence of multiple reservoirs and diversions, the uncertainty of unregulated inflows and demands, and conflicting objectives. Reinforcement learning is presented herein as a new approach to solving the challenging problem of stochastic optimization of multi-reservoir systems. The Q-Learning method, one of the reinforcement learning algorithms, is used for generating integrated monthly operation rules for the Keum River basin in Korea. The Q-Learning model is evaluated by comparison with implicit stochastic dynamic programming and sampling stochastic dynamic programming approaches. The evaluation of the stochastic basin-wide operational models considered several options relating to the choice of hydrologic state and discount factors, as well as various stochastic dynamic programming models. The Q-Learning model outperforms the other models in handling the uncertainty of inflows.

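For reference, the tabular Q-learning update underlying such models takes the standard form; the interpretation of the state and action below is a generic assumption, not taken from the paper:

```latex
% Standard tabular Q-learning update:
Q(s_t, a_t) \leftarrow Q(s_t, a_t)
  + \alpha \left[ r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right]
% where, for a reservoir system, s_t might encode storage, month, and
% hydrologic state, and a_t a monthly release decision.
```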

신경회로망을 이용한 도립전자의 학습제어 (Learning Control of Inverted Pendulum Using Neural Networks)

  • 이재강;김일환
    • 산업기술연구 / Vol. 24A / pp. 99-107 / 2004
  • This paper considers reinforcement learning control with the self-organizing map. Reinforcement learning uses the observable states of the objective system and the signals arising from the interaction between the system and its environment as input data. For fast learning in neural network training, it is necessary to reduce the amount of training data. In this paper, we use the self-organizing map to partition the observable states. Partitioning the states reduces the amount of data used to train the neural networks, and a neural dynamic programming design method is used for the controller. To evaluate the designed reinforcement learning controller, an inverted pendulum on a cart is simulated. The designed controller consists of a self-organizing map connected in series with two multi-layer feed-forward neural networks.

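The state-partitioning step can be sketched with a minimal one-dimensional self-organizing map; the state dimension, map size, and training schedule below are illustrative assumptions, not details from the paper:

```python
import numpy as np

# Minimal 1-D self-organizing map that partitions continuous states
# (here, simulated 2-D observations) into a small set of discrete cells,
# so the downstream controller trains on far fewer distinct inputs.
rng = np.random.default_rng(0)
K = 8                                    # number of map nodes (cells)
W = rng.uniform(-1, 1, size=(K, 2))      # node weight vectors in state space

def train(samples, epochs=20, lr=0.3, radius=2.0):
    for _ in range(epochs):
        for x in samples:
            win = np.argmin(np.linalg.norm(W - x, axis=1))   # best-matching node
            for k in range(K):                               # neighborhood update
                h = np.exp(-((k - win) ** 2) / (2 * radius ** 2))
                W[k] += lr * h * (x - W[k])
        lr *= 0.9                        # shrink learning rate and
        radius *= 0.9                    # neighborhood over time

states = rng.uniform(-1, 1, size=(500, 2))   # simulated observations
train(states)

def cell(x):
    """Discrete cell index used as the controller's input."""
    return int(np.argmin(np.linalg.norm(W - x, axis=1)))
```

Each observed state is then replaced by its winning node index `cell(x)`, which is what reduces the training data for the neural controller.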

카트-폴 균형 문제를 위한 실시간 강화 학습 (On-line Reinforcement Learning for Cart-pole Balancing Problem)

  • 김병천;이창훈
    • 한국인터넷방송통신학회논문지 / Vol. 10, No. 4 / pp. 157-162 / 2010
  • The cart-pole balancing problem is a standard benchmark for control strategies based on genetic algorithms, artificial neural networks, and reinforcement learning. This paper proposes an approach using on-line reinforcement learning to solve the cart-pole balancing problem. Its purpose is to analyze how the OREL learning system learns on this problem. Experiments showed that the proposed OREL learning method approaches the optimal value function faster than Q-learning.

Barycentric Approximator for Reinforcement Learning Control

  • Whang Cho
    • International Journal of Precision Engineering and Manufacturing / Vol. 3, No. 1 / pp. 33-42 / 2002
  • Recently, various experiments applying reinforcement learning methods to the self-learning intelligent control of continuous dynamic systems have been reported in the machine learning research community. The reports show mixed results of successes and failures, and indicate that the success of reinforcement learning in the intelligent control of continuous systems depends on the ability to combine a proper function approximation method with temporal difference methods such as Q-learning and value iteration. One of the difficulties in combining function approximation with temporal difference methods is the absence of a convergence guarantee for the algorithm. This paper provides a proof of convergence for a particular function approximation method based on the "barycentric interpolator", which is known to be computationally more efficient than multilinear interpolation.
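A barycentric interpolator over a triangulated 2-D cell can be sketched as follows; the value at a query point is a convex combination of only d+1 = 3 vertex values, versus 2^d = 4 for bilinear interpolation, which is the efficiency advantage mentioned above (the function names and example values are illustrative):

```python
import numpy as np

def barycentric_weights(p, tri):
    """Weights w with w @ tri == p and w.sum() == 1, for triangle `tri` (3x2)."""
    a, b, c = tri
    T = np.column_stack([b - a, c - a])   # 2x2 matrix of edge vectors
    wb, wc = np.linalg.solve(T, p - a)    # local coordinates of p in the triangle
    return np.array([1 - wb - wc, wb, wc])

def interpolate(p, tri, values):
    # Value at p is the convex combination of the 3 vertex values.
    return barycentric_weights(p, tri) @ values

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
vals = np.array([0.0, 1.0, 2.0])          # e.g. Q-values stored at the vertices
# interpolate(np.array([0.25, 0.25]), tri, vals) = 0.5*0 + 0.25*1 + 0.25*2 = 0.75
```

Because the weights are non-negative and sum to one inside the triangle, the interpolated value never leaves the range of the vertex values, a property used in convergence arguments for such approximators.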

Performance Enhancement of CSMA/CA MAC Protocol Based on Reinforcement Learning

  • Kim, Tae-Wook;Hwang, Gyung-Ho
    • Journal of information and communication convergence engineering / Vol. 19, No. 1 / pp. 1-7 / 2021
  • Reinforcement learning is an area of machine learning that studies how an intelligent agent takes actions in a given environment to maximize the cumulative reward. In this paper, we propose a new MAC protocol based on the Q-learning technique of reinforcement learning to improve the performance of the IEEE 802.11 wireless LAN CSMA/CA MAC protocol, and specify the operation of each access point (AP) and station. The AP adjusts the value of the contention window (CW), the range from which a station draws its backoff number, according to the wireless traffic load. The station improves performance by using Q-learning to select, within the CW value transmitted by the AP, an optimal backoff number with the lowest packet collision rate and the highest transmission success rate. Performance evaluation through computer simulations showed that the proposed scheme achieves higher throughput than the existing CSMA/CA scheme.
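The station-side idea can be sketched as a bandit-style Q-learning loop over backoff numbers; the collision model and reward values below are assumptions for illustration, not the paper's 802.11 simulation:

```python
import random

random.seed(0)

# Toy sketch: a station keeps one Q-value per backoff number within the
# CW announced by the AP, rewarding successful transmissions (+1) and
# penalizing collisions (-1).
CW = 16
Q = [0.0] * CW                           # one value per backoff number
alpha, eps = 0.1, 0.1

def collision_prob(backoff):
    # Assumption: under load, small backoffs collide more often.
    return 0.8 * (1 - backoff / CW)

for _ in range(5000):
    # Epsilon-greedy choice of backoff number within the CW.
    b = (random.randrange(CW) if random.random() < eps
         else max(range(CW), key=Q.__getitem__))
    r = -1.0 if random.random() < collision_prob(b) else 1.0
    Q[b] += alpha * (r - Q[b])           # stateless (bandit-style) update
```

In this simplified model the learned Q-values rank collision-prone backoff numbers below reliable ones, which is the selection behavior the abstract describes.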

Reinforcement learning multi-agent using unsupervised learning in a distributed cloud environment

  • Gu, Seo-Yeon;Moon, Seok-Jae;Park, Byung-Joon
    • International Journal of Internet, Broadcasting and Communication / Vol. 14, No. 2 / pp. 192-198 / 2022
  • Companies build and use their own data analysis systems in the distributed cloud, according to their business characteristics. However, as businesses and data types become more complex and diverse, the demand for more efficient analytics has increased. In response, this paper proposes an unsupervised-learning-based data analysis agent that applies reinforcement learning for effective data analysis. The proposed agent consists of a reinforcement learning processing manager and an unsupervised learning manager module. These two modules set up agents with k-means clustering on multiple nodes and then perform distributed training over multiple data sets. This enables data analysis in a relatively short time compared with conventional systems that analyze large-scale data in a single batch.
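The per-node clustering step can be sketched as follows; combining shard centroids by re-clustering them is one simple assumption about how the distributed results might be merged, not the paper's design:

```python
import numpy as np

# Sketch: each "node" runs k-means on its own data shard, then the
# shards' centroids are pooled and re-clustered into global centers.
rng = np.random.default_rng(1)

def kmeans(X, k, iters=20):
    C = X[rng.choice(len(X), k, replace=False)]        # random initial centers
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - C) ** 2).sum(-1), axis=1)
        C = np.array([X[labels == j].mean(0) if (labels == j).any() else C[j]
                      for j in range(k)])              # keep center if cluster empty
    return C

shards = [rng.normal(loc=m, scale=0.3, size=(200, 2))
          for m in ([0, 0], [3, 3])]                   # two nodes' local data
local = [kmeans(s, k=2) for s in shards]               # per-node clustering
global_C = kmeans(np.vstack(local), k=2)               # merge shard centroids
```

Each node touches only its own shard, so the expensive pass over raw data is parallel; only the small centroid sets are exchanged for the merge step.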

강화 학습법을 이용한 효과적인 적응형 대화 전략 (An Effective Adaptive Dialogue Strategy Using Reinforcement Learning)

  • 김원일;고영중;서정연
    • 한국정보과학회논문지:소프트웨어및응용 / Vol. 35, No. 1 / pp. 33-40 / 2008
  • When humans converse with others, they learn about the other party through trial and error. This paper proposes a method for giving a dialogue system adaptive capability by applying reinforcement learning modeled on this process. An adaptive dialogue strategy means that the dialogue system learns the user's dialogue habits and improves user satisfaction and efficiency. To apply reinforcement learning to the dialogue system efficiently, dialogues are divided into a main dialogue and sub-dialogues. The main dialogue measures overall satisfaction, while the sub-dialogues measure system efficiency through completion status, completion time, and number of errors. For user convenience during learning, users are classified into two groups according to their proficiency with the system, and the reinforcement learning training policy of the corresponding group is applied. Experiments evaluate the performance of the proposed method under per-individual and per-group reinforcement learning.

ON THE STRUCTURE AND LEARNING OF NEURAL-NETWORK-BASED FUZZY LOGIC CONTROL SYSTEMS

  • C.T. Lin;Lee, C.S. George
    • 한국지능시스템학회:학술대회논문집 / 한국퍼지및지능시스템학회 1993년도 Fifth International Fuzzy Systems Association World Congress 93 / pp. 993-996 / 1993
  • This paper addresses the structure, and its associated learning algorithms, of a feedforward multi-layered connectionist network with distributed learning abilities that realizes the basic elements and functions of a traditional fuzzy logic controller. The proposed neural-network-based fuzzy logic control system (NN-FLCS) can be contrasted with the traditional fuzzy logic control system in network structure and learning ability. An on-line supervised structure/parameter learning algorithm can dynamically find proper fuzzy logic rules, membership functions, and the size of the output fuzzy partitions simultaneously. Next, a Reinforcement Neural-Network-Based Fuzzy Logic Control System (RNN-FLCS) is proposed, which consists of two closely integrated Neural-Network-Based Fuzzy Logic Controllers (NN-FLCs) for solving various reinforcement learning problems in fuzzy logic systems. One NN-FLC functions as a fuzzy predictor and the other as a fuzzy controller. Associated with the proposed RNN-FLCS is a reinforcement structure/parameter learning algorithm that dynamically determines the proper network size, connections, and parameters of the RNN-FLCS through an external reinforcement signal. Furthermore, learning can proceed even in periods without any external reinforcement feedback.
