• Title/Abstract/Keyword: Path Learning


Machine Learning Based Neighbor Path Selection Model in a Communication Network

  • Lee, Yong-Jin
    • International journal of advanced smart convergence
    • /
    • Vol. 10, No. 1
    • /
    • pp.56-61
    • /
    • 2021
  • Neighbor path selection pre-selects alternate routes in case geographically correlated failures occur simultaneously in a communication network. Conventional heuristic-based algorithms no longer improve solutions because they cannot sufficiently utilize historical failure information. We present a novel solution model for neighbor path selection using a machine learning technique. Our proposed machine learning neighbor path selection (ML-NPS) model is composed of five modules: random graph generation, data set creation, machine learning modeling, neighbor path prediction, and path information acquisition. It is implemented in Python with Keras on TensorFlow and executed on a Raspberry Pi 4B single-board computer. Performance evaluations via numerical simulation show that the neighbor path communication success probability of our model is better than that of the conventional heuristic by 26% on average.
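
As a rough illustration of the kind of pipeline this abstract describes, the sketch below trains a small Keras binary classifier that scores candidate neighbor paths. The feature set, network shape, and toy labels are assumptions made for illustration, not the paper's actual modules.

```python
# Hypothetical sketch of the ML-NPS idea: score candidate neighbor paths
# from historical-failure features with a small Keras classifier.
import numpy as np
from tensorflow import keras

NUM_FEATURES = 8  # assumed features, e.g., hop count, shared-risk overlap

model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(NUM_FEATURES,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # P(path survives a correlated failure)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Toy data standing in for the paper's "data set creation" module.
X = np.random.rand(1000, NUM_FEATURES).astype("float32")
y = (X[:, 0] > 0.5).astype("float32")  # placeholder labeling rule

model.fit(X, y, epochs=5, batch_size=32, verbose=0)
scores = model.predict(X[:5], verbose=0)  # rank candidate neighbor paths by score
print(scores.ravel())
```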

Path Planning for a Robot Manipulator based on Probabilistic Roadmap and Reinforcement Learning

  • Park, Jung-Jun;Kim, Ji-Hun;Song, Jae-Bok
    • International Journal of Control, Automation, and Systems
    • /
    • Vol. 5, No. 6
    • /
    • pp.674-680
    • /
    • 2007
  • The probabilistic roadmap (PRM) method, a popular path planning scheme for manipulators, finds a collision-free path by connecting the start and goal poses through a roadmap constructed by drawing random nodes in the free configuration space. PRM exhibits robust performance in static environments, but its performance is poor in dynamic environments. On the other hand, reinforcement learning, a behavior-based control technique, can deal with uncertainties in the environment. A reinforcement learning agent can establish a policy that maximizes the sum of rewards by selecting the optimal action in any state through iterative interactions with the environment. In this paper, we propose efficient real-time path planning that combines PRM and reinforcement learning to deal with uncertain dynamic environments and similar environments. A series of experiments demonstrate that the proposed hybrid path planner can generate a collision-free path even in dynamic environments in which objects block the pre-planned global path. It is also shown that the hybrid path planner can adapt to similar, previously learned environments without significant additional learning.
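
For readers unfamiliar with PRM, here is a minimal 2-D sketch of the roadmap idea: sample collision-free nodes, connect near neighbors with collision-checked edges, and search the roadmap. The point robot, single circular obstacle, and all parameters are illustrative assumptions; the paper works in a manipulator's configuration space and adds a reinforcement learning layer not shown here.

```python
# Minimal PRM sketch in 2-D with one circular obstacle (illustrative only).
import heapq
import numpy as np

rng = np.random.default_rng(0)
obstacles = [((0.5, 0.5), 0.2)]  # (center, radius) circles

def collision_free(p):
    return all(np.linalg.norm(np.asarray(p, float) - c) > r for c, r in obstacles)

def segment_free(a, b, steps=10):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return all(collision_free(a + t * (b - a)) for t in np.linspace(0.0, 1.0, steps))

start, goal = (0.1, 0.1), (0.9, 0.9)
nodes = [start, goal] + [tuple(p) for p in rng.random((150, 2)) if collision_free(p)]

# Connect each node to its k nearest neighbors via collision-checked edges.
k, graph = 8, {i: [] for i in range(len(nodes))}
for i, p in enumerate(nodes):
    order = sorted(range(len(nodes)), key=lambda j: np.linalg.norm(np.subtract(p, nodes[j])))
    for j in order[1:k + 1]:
        if segment_free(p, nodes[j]):
            graph[i].append((j, float(np.linalg.norm(np.subtract(p, nodes[j])))))

# Dijkstra over the roadmap from node 0 (start) to node 1 (goal).
dist, prev, pq = {0: 0.0}, {}, [(0.0, 0)]
while pq:
    d, u = heapq.heappop(pq)
    if u == 1:
        break
    if d > dist.get(u, float("inf")):
        continue  # stale queue entry
    for v, w in graph[u]:
        if d + w < dist.get(v, float("inf")):
            dist[v], prev[v] = d + w, u
            heapq.heappush(pq, (d + w, v))

path = [1]
while path[-1] != 0 and path[-1] in prev:  # walk back; stops early if unreachable
    path.append(prev[path[-1]])
print([nodes[i] for i in reversed(path)])
```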

Path Planning of Unmanned Aerial Vehicles Based on Reinforcement Learning Using Deep Q-Network in a Simulated Environment

  • 이근형;김신덕
    • 반도체디스플레이기술학회지
    • /
    • Vol. 16, No. 3
    • /
    • pp.127-130
    • /
    • 2017
  • In this research, we present a path planning method for autonomous flight of unmanned aerial vehicles (UAVs) through reinforcement learning in a simulated environment. We design a simulator for reinforcement learning of UAVs and implement an interface for compatibility between the Deep Q-Network (DQN) and the simulator. We then perform reinforcement learning through the simulator and the DQN, using the Q-learning algorithm, a kind of reinforcement learning algorithm. Through experiments, we verify the performance of the DQN simulator. Finally, we evaluate the learning results and suggest a path planning strategy based on reinforcement learning.

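As a hedged stand-in for the paper's DQN-plus-simulator setup, the sketch below runs tabular Q-learning on a toy grid world; the grid size, reward scheme, and hyperparameters are assumptions, and the paper's neural-network function approximation is not reproduced.

```python
# Tabular Q-learning on a small grid as a toy "simulator" (illustrative only).
import random

W, H, GOAL = 5, 5, (4, 4)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # the four axis-aligned moves
Q = {((x, y), a): 0.0 for x in range(W) for y in range(H) for a in range(4)}
alpha, gamma, eps = 0.5, 0.95, 0.1

def step(s, a):
    dx, dy = ACTIONS[a]
    nxt = (min(max(s[0] + dx, 0), W - 1), min(max(s[1] + dy, 0), H - 1))
    return nxt, (10.0 if nxt == GOAL else -1.0)  # -1 per move favors short paths

for episode in range(500):
    s = (0, 0)
    while s != GOAL:
        a = random.randrange(4) if random.random() < eps else \
            max(range(4), key=lambda i: Q[(s, i)])
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, b)] for b in range(4))
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])  # Q-learning update
        s = nxt
```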

Path Selection Algorithm for Multi-path Systems Based on Deep Q-learning

  • 정병창;박혜숙
    • 한국정보통신학회논문지
    • /
    • Vol. 25, No. 1
    • /
    • pp.50-55
    • /
    • 2021
  • A multi-path system transmits data over several networks simultaneously, such as a wired network, an LTE network, and a satellite network, and has been proposed to improve the transmission speed, reliability, and security of communication networks. In this paper, we propose a reinforcement-learning-based path selection scheme for such a system that uses the delay of each network as the reward signal. Unlike conventional reinforcement learning models, the algorithm is designed around deep Q-learning so that it can respond immediately to changes in the network environment. Because reward information in a network environment can only be obtained after a certain delay, we also propose a method to compensate for this. To evaluate performance, we developed a testbed learning server including a distributed database and TensorFlow modules. Simulation results show that, in terms of RTT reduction, the proposed algorithm performs about 20% better than a scheme that simply selects the lowest-delay path.
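
An illustrative-only sketch of the idea: a small deep Q-network scores three candidate networks from recent delay features, and rewards (negative measured delay) are joined to past decisions once they arrive, which is one simple way to handle the delayed-reward problem the abstract mentions. All names, shapes, and the bandit-style target are assumptions, not the paper's design.

```python
# Hedged sketch: epsilon-greedy path choice plus delayed-reward replay.
import numpy as np
from tensorflow import keras

N_PATHS, N_FEAT = 3, 6  # e.g., wired / LTE / satellite; assumed feature count
qnet = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(N_FEAT,)),
    keras.layers.Dense(N_PATHS),  # one Q-value per candidate network
])
qnet.compile(optimizer="adam", loss="mse")

replay = []  # (state, action, delayed_reward) tuples

def choose(state, eps=0.1):
    """Epsilon-greedy path choice from current delay features."""
    if np.random.rand() < eps:
        return int(np.random.randint(N_PATHS))
    return int(np.argmax(qnet.predict(state[None], verbose=0)[0]))

def on_delayed_reward(state, action, reward):
    # The reward (e.g., -RTT) arrives later; attach it to the decision it scores.
    replay.append((state, action, reward))

def train(batch=32):
    if len(replay) < batch:
        return
    picks = np.random.choice(len(replay), batch, replace=False)
    S = np.stack([replay[i][0] for i in picks])
    target = qnet.predict(S, verbose=0)
    for row, i in enumerate(picks):
        _, a, r = replay[i]
        target[row, a] = r  # bandit-style target (discount gamma = 0)
    qnet.fit(S, target, verbose=0)
```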

Path Loss Prediction Using an Ensemble Learning Approach

  • Beom Kwon;Eonsu Noh
    • 한국컴퓨터정보학회논문지
    • /
    • Vol. 29, No. 2
    • /
    • pp.1-12
    • /
    • 2024
  • Predicting path loss is one of the important factors in radio network design, such as selecting base station installation sites in a cellular network. Previously, path loss values were measured through numerous field tests to determine the optimal base station location, with the drawback that the measurements take a great deal of time. To solve this problem, this study proposes a machine learning (ML)-based path loss prediction method. In particular, an ensemble learning approach is applied to improve prediction performance: bootstrap datasets are used to obtain models with different hyperparameter configurations, and these models are ensembled into the final model. Using a path loss dataset publicly available on the Internet, the proposed ensemble-based method was evaluated and compared against various ML-based methods. The experimental results show that the proposed method outperforms the existing methods and predicts path loss values most accurately.
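
The following is a minimal bagging sketch of the ensemble approach: models fitted on bootstrap resamples with varied hyperparameters are averaged. The synthetic log-distance data is a placeholder assumption standing in for the public dataset the paper uses.

```python
# Bootstrap-ensemble sketch for path loss regression (synthetic data).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(42)
# Synthetic stand-in: path loss grows with log-distance plus noise.
dist = rng.uniform(10, 1000, size=(500, 1))
loss = 40 + 30 * np.log10(dist[:, 0]) + rng.normal(0, 3, 500)

models = []
for depth in (3, 5, 7, 9):                       # varied hyperparameters
    idx = rng.integers(0, len(dist), len(dist))  # bootstrap resample
    m = DecisionTreeRegressor(max_depth=depth, random_state=0)
    m.fit(dist[idx], loss[idx])
    models.append(m)

def predict(X):
    # Average the per-model predictions to form the ensemble estimate.
    return np.mean([m.predict(X) for m in models], axis=0)

print(predict(np.array([[100.0], [500.0]])))     # ensemble path-loss estimates (dB)
```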

A Study of Unmanned Aerial Vehicle Path Planning using Reinforcement Learning

  • Kim, Cheong Ghil
    • 반도체디스플레이기술학회지
    • /
    • Vol. 17, No. 1
    • /
    • pp.88-92
    • /
    • 2018
  • Currently, the drone industry has become one of the fastest growing markets, and the technology for unmanned aerial vehicles (UAVs) is expected to continue developing at a rapid rate. In particular, small unmanned aerial vehicle systems have been designed and utilized in various fields, each with its own specific purpose. In these fields, the path planning problem of finding the shortest path between two given points is important. In this paper we introduce a path planning strategy for autonomous flight of unmanned aerial vehicles through reinforcement learning with a self-positioning technique. We apply the Q-learning algorithm, a kind of reinforcement learning algorithm. At the same time, multiple sensors (an acceleration sensor, a gyro sensor, and a magnetic sensor) are used to estimate the position. For functional evaluation, the proposed method was simulated in a virtual UAV environment and the results were visualized. The flight history was based on a PX4-based drone system equipped with a smartphone.
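
The abstract pairs Q-learning with multi-sensor position estimation; as one hedged illustration of the sensor-fusion side, the complementary filter below blends gyro integration with accelerometer-derived angles (a magnetometer-based heading correction would be analogous). All rates and constants are invented for the example and are not from the paper.

```python
# Complementary-filter sketch: fuse a drifting gyro with a noisy accelerometer.
import math

def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend integrated gyro rate (rad/s) with an accelerometer angle (rad)."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

angle, dt = 0.0, 0.01
for step in range(100):                                    # 1 s of fake samples
    gyro_rate = 0.5                                        # constant rotation, rad/s
    accel_angle = 0.5 * (step * dt) + 0.02 * math.sin(step)  # noisy reference
    angle = complementary_filter(angle, gyro_rate, accel_angle, dt)
print(angle)  # ~0.5 rad after ~1 s of simulated motion
```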

Development of a Multi-criteria Pedestrian Pathfinding Algorithm by Perceptron Learning

  • Yu, Kyeonah;Lee, Chojung;Cho, Inyoung
    • 한국컴퓨터정보학회논문지
    • /
    • Vol. 22, No. 12
    • /
    • pp.49-54
    • /
    • 2017
  • Pathfinding for pedestrians provided by various navigation programs is based on a shortest path search algorithm. There is little difference in their guidance results, which makes path quality the more important factor. Multiple criteria should be included in the search cost to capture path quality, which is called multi-criteria pathfinding. In this paper we propose a user-adaptive pathfinding algorithm in which the cost function for multi-criteria pathfinding is defined as a weighted sum of multiple criteria and the weights are learned automatically by Perceptron learning. Weight learning is implemented in two ways: short-term weight learning, which reflects weight changes in real time as the user moves, and long-term weight learning, which updates the weights with the average value over the entire path after the movement is completed. We use a weight update method with momentum for long-term weight learning, so that learning speed is improved and the learned weights are stabilized. The proposed method is implemented as an app and applied to various movement situations. The results show that customized pathfinding based on user preference can be obtained.
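
A minimal sketch of the long-term update rule the abstract describes: the path cost is a weighted sum of criteria, and the weights receive perceptron-style, momentum-smoothed updates toward the user's observed choices. The criteria names, learning rate, and momentum factor are assumptions for illustration.

```python
# Perceptron-with-momentum weight learning for a weighted-sum path cost.
import numpy as np

weights = np.array([0.4, 0.3, 0.3])   # e.g., length, slope, crowdedness (assumed)
velocity = np.zeros_like(weights)
eta, mu = 0.05, 0.9                   # learning rate, momentum factor

def path_cost(criteria, w):
    return float(np.dot(w, criteria))  # weighted-sum multi-criteria cost

def update(chosen, rejected):
    # If the user's chosen path looks worse than the rejected one under the
    # current weights, shift weight away from the criteria where the chosen
    # path scores higher (perceptron-style error-driven step, with momentum).
    global weights, velocity
    if path_cost(chosen, weights) > path_cost(rejected, weights):
        grad = chosen - rejected
        velocity = mu * velocity - eta * grad
        weights = np.clip(weights + velocity, 0.0, None)
        weights /= weights.sum()       # keep weights a convex combination

update(np.array([0.5, 0.6, 0.4]), np.array([0.8, 0.2, 0.1]))
print(weights)
```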

RL-based Path Planning for SLAM Uncertainty Minimization in Urban Mapping

  • 조영훈;김아영
    • 로봇학회논문지
    • /
    • Vol. 16, No. 2
    • /
    • pp.122-129
    • /
    • 2021
  • For the Simultaneous Localization and Mapping (SLAM) problem, different paths yield different SLAM results, since SLAM follows the trail of its input data. Active SLAM, which determines where to sense next, can suggest a better path for a better SLAM result during the data acquisition step. In this paper, we use reinforcement learning to decide where to perceive. By setting coverage of the entire target area as the goal and uncertainty as a negative reward, the reinforcement learning network finds an optimal path that minimizes trajectory uncertainty and maximizes map coverage. However, most active SLAM research is performed in indoor or aerial environments where robots can move in every direction. In urban environments, vehicles can only move along the road structure and under traffic rules. A graph structure can efficiently express the road environment, with crossroads and streets as nodes and edges, respectively. In this paper, we propose a novel method to find an optimal SLAM path using a graph structure and a reinforcement learning technique.
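
As a toy rendering of the graph formulation, the sketch below runs Q-learning over a four-crossroad adjacency where the reward trades new coverage against a per-street uncertainty penalty. The graph, uncertainty values, and hyperparameters are invented; the real method's state would also encode coverage, which this node-only simplification omits.

```python
# Q-learning over a small road graph: coverage bonus minus uncertainty penalty.
import random

roads = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}  # crossroad adjacency
uncertainty = {(0, 1): 0.5, (1, 0): 0.5, (0, 2): 0.1, (2, 0): 0.1,
               (1, 3): 0.2, (3, 1): 0.2, (2, 3): 0.4, (3, 2): 0.4}

Q = {(u, v): 0.0 for u in roads for v in roads[u]}
alpha, gamma, eps = 0.3, 0.9, 0.2

for episode in range(300):
    node, visited = 0, {0}
    for _ in range(8):                 # bounded walk over the road graph
        nbrs = roads[node]
        if random.random() < eps:
            nxt = random.choice(nbrs)
        else:
            nxt = max(nbrs, key=lambda v: Q[(node, v)])
        r = (1.0 if nxt not in visited else 0.0) - uncertainty[(node, nxt)]
        best = max(Q[(nxt, w)] for w in roads[nxt])
        Q[(node, nxt)] += alpha * (r + gamma * best - Q[(node, nxt)])
        visited.add(nxt)
        node = nxt
```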

Goal-Directed Reinforcement Learning System

  • 이창훈
    • 한국인터넷방송통신학회논문지
    • /
    • Vol. 10, No. 5
    • /
    • pp.265-270
    • /
    • 2010
  • Reinforcement learning learns through trial-and-error interaction with a dynamic environment. Therefore, in dynamic environments, reinforcement learning methods such as TD-learning and TD(λ)-learning can learn faster than conventional statistical learning methods. However, because most proposed reinforcement learning algorithms give a reinforcement value only when the learning agent reaches the goal state, they converge to the optimal solution very slowly. In this paper, we propose GDRLS (Goal-Directed Reinforcement Learning System), a reinforcement learning method that can quickly find the shortest path in a maze environment. GDRLS selects the candidate states that can lie on the shortest path in the maze environment and then learns only those candidate states to search for the shortest path. Experiments show that GDRLS finds the shortest path faster than TD-learning and TD(λ)-learning in maze environments.
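
Since the abstract benchmarks against TD-learning and TD(λ)-learning, here is a compact TD(λ) value update with accumulating eligibility traces on a five-state chain, rewarded only at the goal to mirror the sparse-reward setting; all constants are illustrative, and GDRLS's candidate-state selection is not reproduced.

```python
# TD(lambda) with accumulating eligibility traces on a 5-state chain.
import random

N, GOAL = 5, 4
V = [0.0] * N
alpha, gamma, lam = 0.1, 0.9, 0.8

for episode in range(200):
    s, e = 0, [0.0] * N                  # reset eligibility traces each episode
    while s != GOAL:
        s2 = min(s + 1, GOAL) if random.random() < 0.8 else max(s - 1, 0)
        r = 1.0 if s2 == GOAL else 0.0   # reward only at the goal state
        delta = r + gamma * V[s2] - V[s]
        e[s] += 1.0                      # accumulating trace for current state
        for i in range(N):
            V[i] += alpha * delta * e[i]
            e[i] *= gamma * lam          # decay all traces
        s = s2
print(V)
```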

Reinforcement Learning Using State Space Compression

  • 김병천;윤병주
    • 한국정보처리학회논문지
    • /
    • Vol. 6, No. 3
    • /
    • pp.633-640
    • /
    • 1999
  • Reinforcement learning learns through trial-and-error interaction with a dynamic environment. Therefore, in dynamic environments, reinforcement learning methods such as Q-learning and TD (Temporal Difference)-learning learn faster than conventional stochastic learning methods. However, because many of the proposed reinforcement learning algorithms give the reinforcement value only when the learning agent reaches its goal state, most of them converge to the optimal solution too slowly. In this paper, we present the COMREL (COMpressed REinforcement Learning) algorithm for quickly finding the shortest path in a maze environment: it selects the candidate states that can guide the shortest path in a compressed maze environment and learns only those candidate states. Comparing COMREL with the existing Q-learning and Prioritized Sweeping algorithms, we found that the learning time was greatly reduced.

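As a toy rendering of the compression idea, the sketch below aggregates maze cells into coarser macro-states via a mapping phi and learns Q-values only over those; the fixed 2x2 aggregation is a stand-in assumption, since the abstract does not detail COMREL's candidate-state selection.

```python
# State-aggregation Q-learning: 64 maze cells compressed to 16 macro-states.
import random

W = H = 8
GOAL = (7, 7)
ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]
Q = {}

def phi(s):
    return (s[0] // 2, s[1] // 2)      # compression map: 2x2 cell blocks

def q(s, a):
    return Q.get((phi(s), a), 0.0)     # Q-values live on compressed states only

alpha, gamma, eps = 0.4, 0.95, 0.1
for episode in range(400):
    s = (0, 0)
    for _ in range(200):               # bounded episode length
        if s == GOAL:
            break
        a = random.randrange(4) if random.random() < eps else \
            max(range(4), key=lambda i: q(s, i))
        dx, dy = ACTIONS[a]
        s2 = (min(max(s[0] + dx, 0), W - 1), min(max(s[1] + dy, 0), H - 1))
        r = 10.0 if s2 == GOAL else -1.0
        best = max(q(s2, i) for i in range(4))
        Q[(phi(s), a)] = q(s, a) + alpha * (r + gamma * best - q(s, a))
        s = s2
```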