• Title/Summary/Keyword: Reinforcement learning

Control for Manipulator of an Underwater Robot Using Meta Reinforcement Learning (메타강화학습을 이용한 수중로봇 매니퓰레이터 제어)

  • Moon, Ji-Youn;Moon, Jang-Hyuk;Bae, Sung-Hoon
    • The Journal of the Korea institute of electronic communication sciences / v.16 no.1 / pp.95-100 / 2021
  • This paper introduces model-based meta reinforcement learning as a control method for the manipulator of an underwater construction robot. Model-based meta reinforcement learning quickly updates the dynamics model using recent experience from the real system and passes the model to a model predictive controller, which computes the control inputs that drive the manipulator to the target position. The simulation environment for model-based meta reinforcement learning is established using MuJoCo and Gazebo. The real manipulator-control environment for the underwater construction robot is set up to deal with model uncertainties.
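
The abstract gives no code, but the loop it describes (a learned dynamics model handed to a model predictive controller) can be illustrated with a minimal random-shooting MPC sketch in Python. Everything here is an assumption for illustration: `model` stands in for the meta-learned dynamics model, and the cost is a plain distance to the target position.

```python
import numpy as np

def random_shooting_mpc(model, state, target, horizon=10, n_samples=200,
                        action_dim=4, action_scale=1.0, rng=None):
    """Return the first action of the sampled sequence whose rollout,
    under the learned model, ends closest to the target position."""
    rng = rng or np.random.default_rng()
    # Sample candidate action sequences uniformly at random.
    actions = rng.uniform(-action_scale, action_scale,
                          size=(n_samples, horizon, action_dim))
    costs = np.zeros(n_samples)
    for i in range(n_samples):
        s = state
        for t in range(horizon):
            s = model(s, actions[i, t])             # one-step model prediction
            costs[i] += np.linalg.norm(s - target)  # distance-to-target cost
    # Execute only the first action, then replan with the freshly updated model.
    return actions[np.argmin(costs), 0]

# Toy stand-in dynamics for demonstration only.
toy_model = lambda s, a: s + 0.1 * a
u = random_shooting_mpc(toy_model, state=np.zeros(4), target=np.ones(4))
```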

Reinforcement learning Speedup method using Q-value Initialization (Q-value Initialization을 이용한 Reinforcement Learning Speedup Method)

  • 최정환
    • Proceedings of the IEEK Conference / 2001.06c / pp.13-16 / 2001
  • In reinforcement learning, Q-learning converges quite slowly to a good policy. This is because searching for the goal state takes a very long time in a large stochastic domain. I therefore propose a speedup method using Q-value initialization for model-free reinforcement learning. The speedup method learns a naive model of a domain and constructs boundaries around the goal state. Using these boundaries, it assigns initial Q-values to the state-action pairs and runs Q-learning from those initial values. The initial Q-values guide the agent toward the goal state in the early stages of learning, so that Q-learning updates Q-values efficiently. It therefore saves exploration time spent searching for the goal state and performs better than plain Q-learning. I present the Speedup Q-learning algorithm to implement this method. The algorithm is evaluated in a grid-world domain and compared to Q-learning.
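
As an illustration of the initialization idea, here is a hedged Python sketch that seeds higher initial Q-values inside a boundary around the goal of a grid world; the Manhattan-distance boundary and the specific values are assumptions, not the paper's exact scheme.

```python
import numpy as np

def init_q(grid_size, goal, n_actions=4, radius=3, base=0.0, boost=1.0):
    """Seed higher initial Q-values for states within `radius` of the goal."""
    q = np.full((grid_size, grid_size, n_actions), base)
    for x in range(grid_size):
        for y in range(grid_size):
            d = abs(x - goal[0]) + abs(y - goal[1])  # Manhattan distance
            if d <= radius:
                # Closer states get larger values, pulling early
                # exploration toward the goal region.
                q[x, y, :] = base + boost * (radius - d + 1) / (radius + 1)
    return q

# Ordinary Q-learning then starts from these values instead of zeros.
q = init_q(grid_size=10, goal=(9, 9))
```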

Basin-Wide Multi-Reservoir Operation Using Reinforcement Learning (강화학습법을 이용한 유역통합 저수지군 운영)

  • Lee, Jin-Hee;Shim, Myung-Pil
    • Proceedings of the Korea Water Resources Association Conference / 2006.05a / pp.354-359 / 2006
  • The analysis of large-scale water resources systems is often complicated by the presence of multiple reservoirs and diversions, the uncertainty of unregulated inflows and demands, and conflicting objectives. Reinforcement learning is presented herein as a new approach to the challenging problem of stochastic optimization of multi-reservoir systems. The Q-Learning method, one of the reinforcement learning algorithms, is used to generate integrated monthly operation rules for the Keum River basin in Korea. The Q-Learning model is evaluated by comparison with implicit stochastic dynamic programming and sampling stochastic dynamic programming approaches. Evaluation of the stochastic basin-wide operational models considered several options relating to the choice of hydrologic state and discount factors, as well as various stochastic dynamic programming models. The Q-Learning model outperforms the other models in handling the uncertainty of inflows.
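
The abstract applies standard tabular Q-learning to monthly operation; a generic sketch of the update it relies on, with placeholder states (month, storage bucket) and actions (release bucket), might look as follows. The discretization and constants are assumptions, not the paper's.

```python
import numpy as np

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.95):
    """One Q-learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = reward + gamma * np.max(q[next_state])
    q[state][action] += alpha * (td_target - q[state][action])

# Example shapes: 12 months x 10 storage buckets x 5 release actions.
q = np.zeros((12, 10, 5))
q_update(q, state=(5, 3), action=2, reward=1.0, next_state=(6, 4))
```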

Learning Control of Inverted Pendulum Using Neural Networks (신경회로망을 이용한 도립진자의 학습제어)

  • Lee, Jea-Kang;Kim, Il-Hwan
    • Journal of Industrial Technology / v.24 no.A / pp.99-107 / 2004
  • This paper considers reinforcement learning control with the self-organizing map. Reinforcement learning uses the observable states of the objective system and signals from the interaction between the system and its environment as input data. For fast learning in neural network training, it is necessary to reduce the amount of learning data. In this paper, we use the self-organizing map to partition the observable states. Partitioning the states reduces the amount of data used for training the neural networks. A neural dynamic programming design method is used for the controller. To evaluate the designed reinforcement learning controller, an inverted pendulum on a cart is simulated. The designed controller is composed of a serial connection of the self-organizing map and two multi-layer feed-forward neural networks.
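
A minimal self-organizing map update, sketched below under assumed sizes and rates, shows how observed states can be reduced to a small set of prototype cells before the controller networks are trained.

```python
import numpy as np

class SOM:
    """Tiny 1-D self-organizing map over continuous state vectors."""
    def __init__(self, n_units, dim, rng=None):
        rng = rng or np.random.default_rng(0)
        self.w = rng.normal(size=(n_units, dim))  # prototype vectors

    def bmu(self, x):
        """Index of the best-matching unit (the partition cell) for state x."""
        return int(np.argmin(np.linalg.norm(self.w - x, axis=1)))

    def train_step(self, x, lr=0.1, sigma=1.0):
        b = self.bmu(x)
        # Neighborhood pull: units near the winner (in index space) move
        # toward the sample; this is what carves the state-space partition.
        h = np.exp(-((np.arange(len(self.w)) - b) ** 2) / (2 * sigma ** 2))
        self.w += lr * h[:, None] * (x - self.w)

som = SOM(n_units=16, dim=4)                      # 4-D cart-pole state
som.train_step(np.array([0.1, 0.0, 0.05, -0.2]))
cell = som.bmu(np.array([0.1, 0.0, 0.05, -0.2]))  # cell fed to the networks
```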

On-line Reinforcement Learning for Cart-pole Balancing Problem (카트-폴 균형 문제를 위한 실시간 강화 학습)

  • Kim, Byung-Chun;Lee, Chang-Hoon
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.10 no.4 / pp.157-162 / 2010
  • The cart-pole balancing problem is a quasi-standard benchmark problem for control methods including genetic algorithms, artificial neural networks, and reinforcement learning. In this paper, we propose a novel approach using on-line reinforcement learning (OREL) to solve the cart-pole balancing problem. The objective is to analyze the learning behavior of the OREL learning system on this problem. Experiments show that OREL approximates the optimal value function faster than Q-learning.
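
The abstract does not spell out OREL's update rule, so the sketch below uses a neutral stand-in: an on-line TD(0) value update with a linear approximator over the four cart-pole state variables. All names and constants are assumptions.

```python
import numpy as np

def td0_update(w, phi_s, reward, phi_s_next, alpha=0.05, gamma=0.99):
    """On-line TD(0): nudge weights toward r + gamma * V(s') after each step."""
    v_s, v_next = w @ phi_s, w @ phi_s_next
    w += alpha * (reward + gamma * v_next - v_s) * phi_s
    return w

w = np.zeros(4)  # one weight per cart-pole state variable
w = td0_update(w, np.array([0.1, 0.0, 0.05, -0.2]), 1.0,
               np.array([0.1, 0.01, 0.04, -0.18]))
```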

Barycentric Approximator for Reinforcement Learning Control

  • Whang Cho
    • International Journal of Precision Engineering and Manufacturing / v.3 no.1 / pp.33-42 / 2002
  • Recently, various experiments applying reinforcement learning methods to the self-learning intelligent control of continuous dynamic systems have been reported in the machine learning research community. The reports have produced mixed results of some successes and some failures, and show that the success of reinforcement learning in application to the intelligent control of continuous control systems depends on the ability to combine a proper function approximation method with temporal difference methods such as Q-learning and value iteration. One of the difficulties in using a function approximation method together with a temporal difference method is the absence of a guarantee that the algorithm converges. This paper provides a proof of convergence for a particular function approximation method based on the "barycentric interpolator", which is known to be computationally more efficient than multilinear interpolation.
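
The building block behind the interpolator can be sketched for a single 2-D simplex: interpolate the value function at a point inside a triangle using barycentric coordinates. The triangle and values below are illustrative; the paper's interpolator operates over a simplex mesh rather than one triangle.

```python
import numpy as np

def barycentric_value(p, verts, vals):
    """Interpolate vals at point p inside the triangle verts (3x2 array)."""
    a, b, c = verts
    t = np.array([[b[0] - a[0], c[0] - a[0]],
                  [b[1] - a[1], c[1] - a[1]]])
    l1, l2 = np.linalg.solve(t, p - a)  # barycentric coords w.r.t. b and c
    l0 = 1.0 - l1 - l2
    # The weights are non-negative and sum to 1 inside the simplex; this
    # convex-combination property is what underlies convergence arguments.
    return l0 * vals[0] + l1 * vals[1] + l2 * vals[2]

verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
v = barycentric_value(np.array([0.25, 0.25]), verts, np.array([0.0, 1.0, 2.0]))
```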

Performance Enhancement of CSMA/CA MAC Protocol Based on Reinforcement Learning

  • Kim, Tae-Wook;Hwang, Gyung-Ho
    • Journal of information and communication convergence engineering / v.19 no.1 / pp.1-7 / 2021
  • Reinforcement learning is an area of machine learning that studies how an intelligent agent takes actions in a given environment to maximize the cumulative reward. In this paper, we propose a new MAC protocol based on the Q-learning technique of reinforcement learning to improve the performance of the IEEE 802.11 wireless LAN CSMA/CA MAC protocol, and we specify the operation of each access point (AP) and station. The AP adjusts the value of the contention window (CW), which is the range for determining a station's backoff number, according to the wireless traffic load. The station improves performance by selecting, through Q-learning, the optimal backoff number with the lowest packet collision rate and the highest transmission success rate within the CW value transmitted by the AP. Performance evaluation through computer simulations showed that the proposed scheme achieves higher throughput than the existing CSMA/CA scheme.
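
A hedged sketch of the station-side idea: treat each backoff number within the AP-announced CW as an action and learn a value for it. This is a stateless, bandit-style simplification of the paper's Q-learning scheme, and the reward convention (+1 success, -1 collision) is an assumption.

```python
import numpy as np

class BackoffLearner:
    """Station-side Q-values over backoff numbers, chosen epsilon-greedily."""
    def __init__(self, cw_max, alpha=0.1, eps=0.1, rng=None):
        self.q = np.zeros(cw_max)
        self.alpha, self.eps = alpha, eps
        self.rng = rng or np.random.default_rng()

    def choose(self, cw):
        """Pick a backoff number within the CW announced by the AP."""
        if self.rng.random() < self.eps:
            return int(self.rng.integers(cw))      # explore
        return int(np.argmax(self.q[:cw]))         # exploit best-known backoff

    def update(self, backoff, success):
        reward = 1.0 if success else -1.0          # assumed reward convention
        self.q[backoff] += self.alpha * (reward - self.q[backoff])

learner = BackoffLearner(cw_max=64)
b = learner.choose(cw=16)
learner.update(b, success=True)
```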

Reinforcement learning multi-agent using unsupervised learning in a distributed cloud environment

  • Gu, Seo-Yeon;Moon, Seok-Jae;Park, Byung-Joon
    • International Journal of Internet, Broadcasting and Communication / v.14 no.2 / pp.192-198 / 2022
  • Companies are building and utilizing their own data analysis systems, tailored to their business characteristics, in the distributed cloud. However, as businesses and data types become more complex and diverse, the demand for more efficient analytics has increased. In response, this paper proposes an unsupervised-learning-based data analysis agent to which reinforcement learning is applied for effective data analysis. The proposed agent consists of a reinforcement learning processing manager and an unsupervised learning manager module. These two modules configure an agent with k-means clustering on multiple nodes and then perform distributed training on multiple data sets. This enables data analysis in a relatively short time compared to conventional systems that analyze large-scale data in a single batch.
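
One distributed k-means step of the kind the abstract describes can be sketched as follows: each node reduces its data shard to per-cluster sums and counts, and a coordinator merges them into new centers. The two-shard split and k = 3 are assumptions for demonstration.

```python
import numpy as np

def local_stats(shard, centers):
    """Per-node reduction: assign points to centers, return sums and counts."""
    labels = np.argmin(((shard[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    k, d = centers.shape
    sums, counts = np.zeros((k, d)), np.zeros(k)
    for x, l in zip(shard, labels):
        sums[l] += x
        counts[l] += 1
    return sums, counts

def merge_step(shards, centers):
    """Coordinator: merge node statistics into updated cluster centers."""
    stats = [local_stats(s, centers) for s in shards]   # would run per node
    sums = sum(s for s, _ in stats)
    counts = sum(c for _, c in stats)
    new = sums / np.maximum(counts, 1)[:, None]
    return np.where(counts[:, None] > 0, new, centers)  # keep empty clusters

rng = np.random.default_rng(0)
shards = [rng.normal(size=(100, 2)), rng.normal(loc=3, size=(100, 2))]
centers = rng.normal(size=(3, 2))
for _ in range(10):
    centers = merge_step(shards, centers)
```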

An Effective Adaptive Dialogue Strategy Using Reinforcement Learning (강화 학습법을 이용한 효과적인 적응형 대화 전략)

  • Kim, Won-Il;Ko, Young-Joong;Seo, Jung-Yun
    • Journal of KIISE:Software and Applications / v.35 no.1 / pp.33-40 / 2008
  • In this paper, we propose a method to enhance adaptability in a dialogue system using reinforcement learning, which reduces response errors through trial-and-error search similar to a human dialogue process. An adaptive dialogue strategy means that the dialogue system improves user satisfaction and dialogue efficiency by learning users' dialogue styles. To apply reinforcement learning to the dialogue system, we use a main-dialogue span and sub-dialogue spans as the units to which the method is applied, and we evaluate system usability using the following features: success or failure, completion time, and error rate in sub-dialogues, and satisfaction in the main dialogue. In addition, we classify users into beginner and expert groups to increase user convenience during the training steps, and then apply reinforcement learning policies according to the user group. In experiments, we evaluated the performance of the proposed method under both the individual reinforcement learning policy and the group reinforcement learning policy.
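
As a hedged illustration, the usability features the abstract lists could be folded into a scalar reward for the dialogue policy roughly as below; the weights and sign conventions are assumptions, not the paper's.

```python
def dialogue_reward(success, completion_time, error_rate, satisfaction,
                    w_success=1.0, w_time=0.01, w_error=0.5, w_sat=0.5):
    """Combine the listed usability features into one scalar reward."""
    reward = w_success * (1.0 if success else -1.0)
    reward -= w_time * completion_time    # faster sub-dialogues score higher
    reward -= w_error * error_rate        # penalize response errors
    reward += w_sat * satisfaction        # main-dialogue satisfaction bonus
    return reward

r = dialogue_reward(success=True, completion_time=12.0,
                    error_rate=0.1, satisfaction=0.8)
```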

ON THE STRUCTURE AND LEARNING OF NEURAL-NETWORK-BASED FUZZY LOGIC CONTROL SYSTEMS

  • C.T. Lin;Lee, C.S. George
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1993.06a / pp.993-996 / 1993
  • This paper addresses the structure, and the associated learning algorithms, of a feedforward multi-layered connectionist network with distributed learning abilities for realizing the basic elements and functions of a traditional fuzzy logic controller. The proposed neural-network-based fuzzy logic control system (NN-FLCS) can be contrasted with the traditional fuzzy logic control system in its network structure and learning ability. An on-line supervised structure/parameter learning algorithm is proposed that can find proper fuzzy logic rules, membership functions, and the size of the output fuzzy partitions simultaneously. Next, a Reinforcement Neural-Network-Based Fuzzy Logic Control System (RNN-FLCS) is proposed, consisting of two closely integrated neural-network-based fuzzy logic controllers (NN-FLCs) for solving various reinforcement learning problems in fuzzy logic systems. One NN-FLC functions as a fuzzy predictor and the other as a fuzzy controller. Associated with the proposed RNN-FLCS is a reinforcement structure/parameter learning algorithm that dynamically determines the proper network size, connections, and parameters of the RNN-FLCS through an external reinforcement signal. Furthermore, learning can proceed even in periods without any external reinforcement feedback.
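
The fuzzy-inference pipeline such a network realizes in connectionist form can be sketched compactly: Gaussian memberships, product-rule firing strengths, and weighted-average defuzzification. The two-input, two-rule setup below is purely illustrative, not the paper's learned structure.

```python
import numpy as np

def gaussian_mf(x, center, width):
    """Gaussian membership function, the node type the network learns."""
    return np.exp(-((x - center) ** 2) / (2 * width ** 2))

def fuzzy_controller(x1, x2):
    # Rule firing strengths: fuzzy AND realized as a product of memberships.
    r1 = gaussian_mf(x1, -1.0, 0.5) * gaussian_mf(x2, -1.0, 0.5)
    r2 = gaussian_mf(x1,  1.0, 0.5) * gaussian_mf(x2,  1.0, 0.5)
    consequents = np.array([-1.0, 1.0])   # rule output centers
    strengths = np.array([r1, r2])
    # Weighted-average defuzzification of the rule consequents.
    return float(strengths @ consequents / (strengths.sum() + 1e-9))

u = fuzzy_controller(0.8, 0.9)
```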
