• Title/Summary/Keyword: Action Selection Mechanism


A Novel Action Selection Mechanism for Intelligent Service Robots

  • Suh, Il-Hong;Kwon, Woo-Young;Lee, Sang-Hoon
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.2027-2032
    • /
    • 2003
  • For action selection as well as learning, simple associations between stimulus and response have been employed in most of the literature. For successful task accomplishment, however, an animat must be able to learn and express behavioral sequences. In this paper, we propose a novel action-selection mechanism to deal with sequential behaviors. For this, we define a behavioral motivation as a primitive node for action selection, and then hierarchically construct a network of behavioral motivations. A vertical path of the network represents a behavioral sequence. Such a tree for our proposed ASM can be newly generated and/or updated whenever a new behavioral sequence is learned. To show the validity of our proposed ASM, three 2-D grid-world simulations are illustrated.

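The motivation-tree idea described in the abstract above can be sketched as follows. The node names, motivation values, and the greedy top-down descent rule are illustrative assumptions, not the authors' exact formulation:

```python
# Sketch of a hierarchical behavioral-motivation tree for action selection.
# A vertical (root-to-leaf) path represents one behavioral sequence.

class MotivationNode:
    def __init__(self, name, motivation=0.0, children=None):
        self.name = name
        self.motivation = motivation
        self.children = children or []

def select_sequence(root):
    """Descend from the root, always following the child with the
    highest motivation; the visited path is the selected sequence."""
    path = [root.name]
    node = root
    while node.children:
        node = max(node.children, key=lambda c: c.motivation)
        path.append(node.name)
    return path

# toy tree: the "approach -> grasp" sequence outcompetes "explore"
root = MotivationNode("root", children=[
    MotivationNode("explore", 0.3),
    MotivationNode("approach", 0.7, children=[
        MotivationNode("grasp", 0.9),
    ]),
])
print(select_sequence(root))  # ['root', 'approach', 'grasp']
```

A newly learned sequence would be added by inserting a fresh chain of nodes under the root, which is what the abstract means by the tree being "generated and/or updated".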

An Action Selection Mechanism and Learning Algorithm for Intelligent Robot (지능로봇을 위한 행동선택 및 학습구조)

  • Yoon, Young-Min;Lee, Sang-Hoon;Suh, Il-Hong
    • Proceedings of the KIEE Conference
    • /
    • 2004.11c
    • /
    • pp.496-498
    • /
    • 2004
  • An action-selection mechanism is proposed to deal with sequential behaviors, where associations between stimuli and behaviors are learned by a shortest-path-finding-based reinforcement learning technique. To be specific, we define a behavioral motivation as a primitive node for action selection, and then sequentially construct a network of behavioral motivations. A vertical path of the network represents a behavioral sequence. Such a tree for our proposed ASM can be newly generated and/or updated whenever a new behavioral sequence is learned. To show the validity of our proposed ASM, some experimental results on a "pushing-box-into-a-goal" task of a mobile robot are illustrated.


Teaching-based Perception-Action Learning under an Ethology-based Action Selection Mechanism (동물 행동학 기반 행동 선택 메커니즘하에서의 교시 기반 행동 학습 방법)

  • Moon, Ji-Sub;Lee, Sang-Hyoung;Suh, Il-Hong
    • Proceedings of the IEEK Conference
    • /
    • 2008.06a
    • /
    • pp.1147-1148
    • /
    • 2008
  • In this paper, we propose an action-learning method based on teaching. By adopting this method, we can handle exception cases that cannot be handled by an ethology-based action selection mechanism. Our proposed method is verified on an AIBO robot as well as on the EASE platform.


A Motivation-Based Action-Selection-Mechanism Involving Reinforcement Learning

  • Lee, Sang-Hoon;Suh, Il-Hong;Kwon, Woo-Young
    • International Journal of Control, Automation, and Systems
    • /
    • v.6 no.6
    • /
    • pp.904-914
    • /
    • 2008
  • An action-selection mechanism (ASM) has been proposed that works as a fully connected finite state machine to deal with sequential behaviors and to allow a state in the task program to migrate to any state in the task, in which a primitive node associated with a state and its transition conditions can be easily inserted or deleted. Such a primitive node can also be learned by a shortest-path-finding-based reinforcement learning technique. Specifically, we define a behavioral motivation as a state-dependent value serving as a primitive node for action selection, and then sequentially construct a network of behavioral motivations in such a way that the value of a parent node is allowed to flow into a child node by a releasing mechanism. A vertical path in the network represents a behavioral sequence. Such a tree for our proposed ASM can be newly generated and/or updated whenever a new behavior sequence is learned. To show the validity of our proposed ASM, experimental results of a mobile robot performing the pushing-a-box-into-a-goal (PBIG) task are illustrated.
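The "value of a parent node flows into a child node by a releasing mechanism" idea from the abstract above can be sketched minimally. The gating rule and the numbers are illustrative assumptions; the paper's actual value propagation is richer:

```python
# Sketch of a releasing mechanism: a child motivation node inherits its
# parent's value only when its releaser (a perceptual condition) fires,
# so downstream behaviors in a sequence stay suppressed until their turn.

def effective_value(parent_value, own_value, releaser_fired):
    """Child's effective motivation: its own value, plus the parent's
    value passed down only when the releasing condition holds."""
    return own_value + (parent_value if releaser_fired else 0.0)

# two-level chain: the "push box" child is released once the box is seen
parent = 0.8
child_released = effective_value(parent, 0.2, releaser_fired=True)
child_blocked = effective_value(parent, 0.2, releaser_fired=False)
print(child_released, child_blocked)  # 1.0 0.2
```

With this gating, selecting the node of maximum effective value naturally walks down one vertical path of the network, producing the behavioral sequence.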

Action Selection Mechanism for Artificial Life System (인공생명체를 위한 행동선택 구조)

  • Kim, Min-Jo;Kwon, Woo-Young;Lee, Sang-Hoon;Suh, Il-Hong
    • Proceedings of the KIEE Conference
    • /
    • 2002.11c
    • /
    • pp.178-182
    • /
    • 2002
  • For action selection as well as learning, simple associations between stimulus and response have been employed in most of the literature. For successful task accomplishment, however, an artificial life system must be able to learn and express behavioral sequences. In this paper, we propose a novel action-selection mechanism to deal with behavioral sequences. For this, we define a behavioral motivation as a primitive node for action selection, and then hierarchically construct a tree of behavioral motivations. A vertical path of the tree represents a behavioral sequence. Such a tree for our proposed ASM can be newly generated and/or updated whenever a new behavioral sequence is learned. To show the validity of our proposed ASM, three 2-D grid-world simulations are illustrated.


Motivation-Based Action Selection Mechanism with Bayesian Affordance Models for Intelligence Robot (지능로봇의 동기 기반 행동선택을 위한 베이지안 행동유발성 모델)

  • Son, Gwang-Hee;Lee, Sang-Hyoung;Suh, Il-Hong
    • Proceedings of the IEEK Conference
    • /
    • 2009.05a
    • /
    • pp.264-266
    • /
    • 2009
  • A skill is defined as the special ability to do something well, especially as acquired by learning and practice. To learn a skill, a Bayesian network model representing the skill is first learned. We regard the Bayesian network for a skill as an affordance. We propose a soft behavioral motivation (BM) switch as a method for ordering affordances to accomplish a task. A skill is then constructed as a combination of an affordance and a soft BM switch. To demonstrate the validity of our proposed method, experiments were performed with GENIBO (a pet robot) performing a task using the skills Search-a-target-object, Approach-a-target-object, and Push-up-in-front-of-a-target-object.

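The combination of a Bayesian affordance model with a soft BM switch, as described in the abstract above, can be sketched with a toy example. The affordance here is reduced to a single conditional probability per skill, and all skill names and numbers are invented for illustration; the paper uses full Bayesian networks:

```python
# Sketch: behavioral motivation = affordance likelihood x soft BM switch.
# The soft switch injects the goal-oriented ordering that the affordance
# (a purely perceptual model) lacks.

# P(skill produces its effect | current perception), one per affordance
affordance = {"search": 0.9, "approach": 0.6, "push_up": 0.2}

def motivation(skill, soft_switch):
    """Behavioral motivation: perceptual likelihood gated by task stage."""
    return affordance[skill] * soft_switch[skill]

# the switch biases selection toward the current stage of the task
soft_switch = {"search": 0.1, "approach": 1.0, "push_up": 0.5}
best = max(affordance, key=lambda s: motivation(s, soft_switch))
print(best)  # 'approach' (0.6*1.0 beats 0.9*0.1 and 0.2*0.5)
```

Note how "search", despite the highest raw affordance, loses once its switch is turned down, which is exactly the ordering role the abstract assigns to the soft BM switch.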

Intelligent Robot Design: Intelligent Agent Based Approach (지능로봇: 지능 에이전트를 기초로 한 접근방법)

  • Kang, Jin-Shig
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.14 no.4
    • /
    • pp.457-467
    • /
    • 2004
  • In this paper, a robot is considered as an agent, and a robot structure is presented that consists of multiple sub-agents with the diverse capacities, such as perception, intelligence, and action, required for a robot. Each sub-agent in turn consists of micro-agents ($\mu$agents), each charged with an elementary action. The robot control structure has two sub-agents: a behavior-based reactive controller and an action-selection sub-agent; the action-selection sub-agent selects an action based on high-level actions and high performance, and has a learning mechanism based on reinforcement learning. With the presented robot structure, it is easy to give intelligence to each element of action, and a new approach to multi-robot control becomes possible. The presented robot is simulated for two goals, chaotic exploration and obstacle avoidance, and is fabricated using an 8-bit microcontroller and tested experimentally.

Behavioral motivation-based Action Selection Mechanism with Bayesian Affordance Models (베이지안 행동유발성 모델을 이용한 행동동기 기반 행동 선택 메커니즘)

  • Lee, Sang-Hyoung;Suh, Il-Hong
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.46 no.4
    • /
    • pp.7-16
    • /
    • 2009
  • A robot must be able to generate various skills to achieve given tasks intelligently and reasonably. The robot must first learn affordances to generate the skills. An affordance is defined as a quality of an object or environment that induces an action, and affordances can be usefully exploited to generate skills. Most tasks require sequential and goal-oriented behaviors, but it is usually difficult to accomplish such tasks with affordances alone. To accomplish them, a skill is constructed from an affordance and a soft behavioral motivation switch that reflects goal-oriented elements. A skill calculates a behavioral motivation as a combination of presently perceived information and goal-oriented elements; here, a behavioral motivation is the internal condition that activates a goal-oriented behavior. In addition, a robot must be able to execute sequential behaviors. We construct skill networks from the generated skills, which makes action selection for accomplishing a task feasible; a robot can select sequential, goal-oriented behaviors using the skill network. For this, we first propose a method for modeling and learning the Bayesian networks used to generate affordances. To select sequential and goal-oriented behaviors, we construct skills using affordances and soft behavioral motivation switches, and we propose a method to generate skill networks from these skills to execute given tasks. Finally, we propose an action selection mechanism that selects sequential, goal-oriented behaviors using the skill network. To demonstrate the validity of our proposed methods, the affordances "Searching-for-a-target-object", "Approaching-a-target-object", "Sniffing-a-target-object", and "Kicking-a-target-object" were learned with GENIBO (a pet robot) based on human teaching, and experiments were performed with GENIBO using the skills and the skill networks.

A study on environmental adaptation and expansion of intelligent agent (지능형 에이전트의 환경 적응성 및 확장성)

  • Baek, Hae-Jung;Park, Young-Tack
    • The KIPS Transactions:PartB
    • /
    • v.10B no.7
    • /
    • pp.795-802
    • /
    • 2003
  • To live autonomously, intelligent agents such as robots or virtual characters need the ability to recognize their environment and to learn and choose adaptive actions. We therefore propose an action selection/learning mechanism for intelligent agents. The proposed mechanism employs a hybrid system that integrates a behavior-based method using reinforcement learning and a cognitive-based method using symbolic learning. The characteristics of our mechanism are as follows. First, because it learns actions adapted to the environment using reinforcement learning, our agents are flexible with respect to environmental changes. Second, because it learns the environmental factors relevant to the agent's goals using inductive machine learning and association rules, the agent learns and selects appropriate actions faster in a given surrounding and more efficiently in extended surroundings. Third, in implementing the intelligent agents, we consider only the recognized states found by a state detector rather than all states. Because this method considers only necessary states, we can reduce memory space; and because it represents and processes new states dynamically, it can cope with environmental changes spontaneously.
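The third point in the abstract above, storing only states the detector has actually recognized, can be sketched with a dict-backed Q-table that grows lazily instead of pre-allocating every possible state. The environment, state labels, and learning constants are toy assumptions:

```python
# Sketch: a Q-table keyed only by encountered states. Unseen states cost
# no memory; a new state's entry is created the first time the state
# detector reports it, which is the dynamic-representation idea above.

from collections import defaultdict

ACTIONS = ["left", "right"]
Q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})  # lazily created rows

def update(state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """Standard one-step Q-learning update over the sparse table."""
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

update("s0", "right", 1.0, "s1")
print(len(Q))  # 2 -- only the states actually seen, not the full space
```

In a grid world with thousands of cells but a short trajectory, the table holds only the visited cells, which is where the memory saving comes from.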

A Mechanism to Derive Optimal Contractor-type & Action Combinations of a Single-source Procurement Contract

  • 정승호
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.24 no.2
    • /
    • pp.41-51
    • /
    • 1999
  • In sole-source procurement contracting for government goods and services, the buyer (government) needs to elicit optimal actions from the contractor so that the buyer obtains maximum utility while the contractor, or single-source supplier, is guaranteed at least a minimum level of profit. Under the assumption of risk neutrality for both the buyer and the contractor, and given that the buyer cannot observe the contractor's action, the buyer must design a (mathematical) model to achieve this objective. This paper considers a mathematical formulation in which two problems, moral hazard and adverse selection, are present simultaneously; from this formulation, a GAMS (General Algebraic Modeling System) program is used for a prospective buyer to obtain the optimal actions.

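The buyer's problem in the abstract above can be illustrated with a toy enumeration: choose the (contractor-type, action) pair that maximizes buyer utility subject to the contractor's minimum-profit participation constraint. The payoff table and threshold are invented for illustration; the paper solves a richer moral-hazard/adverse-selection model in GAMS:

```python
# Toy sketch of the buyer's contract-design problem: maximize buyer
# utility over (contractor-type, action) pairs, keeping only pairs that
# leave the contractor at least a minimum profit (participation).

MIN_PROFIT = 1.0
# (contractor_type, action) -> (buyer_utility, contractor_profit)
payoffs = {
    ("efficient", "high_effort"): (10.0, 1.5),
    ("efficient", "low_effort"): (6.0, 3.0),
    ("costly", "high_effort"): (7.0, 0.5),  # fails participation
    ("costly", "low_effort"): (4.0, 2.0),
}

# drop pairs violating the contractor's minimum-profit guarantee
feasible = {k: v for k, v in payoffs.items() if v[1] >= MIN_PROFIT}
# buyer picks the feasible pair with the highest utility
best = max(feasible, key=lambda k: feasible[k][0])
print(best)  # ('efficient', 'high_effort')
```

The real formulation additionally needs incentive-compatibility constraints, since with moral hazard the buyer cannot observe the action and must make the preferred action the contractor's own best response.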