• Title/Summary/Keyword: Multiple agent

Search Results: 477

Multiple Behaviors Learning and Prediction in Unknown Environment

  • Song, Wei;Cho, Kyung-Eun;Um, Ky-Hyun
    • Journal of Korea Multimedia Society / v.13 no.12 / pp.1820-1831 / 2010
  • When interacting with an unknown environment, an autonomous agent needs to decide which action, or order of actions, can lead to a good state, and to determine the transition probability based on the current state and the action taken. Traditional multiple sequential learning models require a predefined state-transition probability. This paper proposes a multiple sequential learning and prediction system with a definition of autonomous states to enhance the automatic performance of existing AI algorithms. In the sequence learning process, the sensed states are classified into several groups by a set of proposed motivation filters to reduce the learning computation. In the prediction process, the learning agent makes decisions based on an estimate of each state's cost in order to obtain a high payoff from the given environment. The proposed learning and prediction algorithms heighten the automatic planning of the autonomous agent for interacting with a dynamic unknown environment. The model was tested in a virtual library.
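
A minimal sketch of the general pattern this abstract describes (grouping sensed states with motivation filters, then choosing actions from estimated payoffs of the resulting groups). The filter predicates, state fields, and payoff estimate below are illustrative assumptions, not the paper's actual definitions.

```python
from collections import defaultdict

# Hypothetical motivation filters: each maps a raw sensed state (a dict of
# readings) to a boolean; the set of active filters forms the state group.
MOTIVATION_FILTERS = {
    "hunger": lambda s: s["energy"] < 0.3,
    "curiosity": lambda s: s["novelty"] > 0.7,
}

def group_state(sensed):
    """Collapse a raw sensed state into the tuple of active motivations."""
    return tuple(sorted(name for name, f in MOTIVATION_FILTERS.items() if f(sensed)))

class SequenceLearner:
    def __init__(self):
        # counts[(group, action)][next_group] = number of observed transitions
        self.counts = defaultdict(lambda: defaultdict(int))
        self.payoff = defaultdict(float)  # running payoff estimate per group

    def observe(self, sensed, action, next_sensed, reward):
        g, g2 = group_state(sensed), group_state(next_sensed)
        self.counts[(g, action)][g2] += 1
        self.payoff[g2] += 0.1 * (reward - self.payoff[g2])  # exponential average

    def predict(self, sensed, actions):
        """Pick the action whose expected next-group payoff is highest."""
        g = group_state(sensed)
        def expected(a):
            nxt = self.counts[(g, a)]
            total = sum(nxt.values()) or 1
            return sum(n / total * self.payoff[g2] for g2, n in nxt.items())
        return max(actions, key=expected)
```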

Distributed Information Extraction in Wireless Sensor Networks using Multiple Software Agents with Dynamic Itineraries

  • Gupta, Govind P.;Misra, Manoj;Garg, Kumkum
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.1 / pp.123-144 / 2014
  • Wireless sensor networks are generally deployed for specific applications to accomplish certain objectives over a period of time. To fulfill these objectives, it is crucial that the sensor network continues to function for a long time, even if some of its nodes become faulty. Energy efficiency and fault tolerance are undoubtedly the most crucial requirements for the design of an information extraction protocol for any sensor network application. However, most existing software-agent-based information extraction protocols are incapable of satisfying these requirements because of static agent itineraries and large agent sizes. This paper proposes an Information Extraction protocol based on Multiple software Agents with Dynamic Itineraries (IEMADI), in which multiple software agents are dispatched in parallel to perform tasks based on the query assigned to them. IEMADI decides the itinerary for an agent dynamically at each hop using local information. Through mathematical analysis and simulation, we compare the performance of IEMADI with a well-known static-itinerary-based protocol with respect to energy consumption and response time. The results show that IEMADI performs better than the static-itinerary-based protocol.
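
The central idea, per the abstract, is choosing an agent's next hop on the fly from local information rather than following a precomputed route. The sketch below shows one plausible scoring rule for that step; the residual-energy/distance weighting and the node field names are assumptions for illustration, not the paper's actual metric.

```python
import math

def next_hop(current, neighbors, visited, w_energy=0.6, w_dist=0.4):
    """Pick the agent's next node using only locally available information.

    `neighbors` is a list of dicts with hypothetical fields:
      {"id": ..., "pos": (x, y), "energy": residual energy}
    """
    candidates = [n for n in neighbors if n["id"] not in visited]
    if not candidates:
        return None  # itinerary ends; the agent returns to the sink
    max_e = max(n["energy"] for n in candidates) or 1.0

    def score(n):
        dist = math.dist(current["pos"], n["pos"])
        return w_energy * (n["energy"] / max_e) - w_dist * dist

    return max(candidates, key=score)
```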

Agent Orange Exposure and Prevalence of Self-reported Diseases in Korean Vietnam Veterans

  • Yi, Sang-Wook;Ohrr, Heechoul;Hong, Jae-Seok;Yi, Jee-Jeon
    • Journal of Preventive Medicine and Public Health / v.46 no.5 / pp.213-225 / 2013
  • Objectives: The aim of this study was to evaluate the association between Agent Orange exposure and self-reported diseases in Korean Vietnam veterans. Methods: A postal survey of 114 562 Vietnam veterans was conducted. Perceived exposure to Agent Orange was assessed by a 6-item questionnaire. Two proximity-based Agent Orange exposure indices were constructed using division/brigade-level and battalion/company-level unit information. Odds ratios (ORs) adjusted for age and other confounders were calculated using a logistic regression model. Results: The prevalence of all self-reported diseases showed monotonically increasing trends as the level of perceived self-reported exposure increased. The ORs for colon cancer (OR, 1.13), leukemia (OR, 1.56), hypertension (OR, 1.03), peripheral vasculopathy (OR, 1.07), enterocolitis (OR, 1.07), peripheral neuropathy (OR, 1.07), multiple nerve palsy (OR, 1.14), multiple sclerosis (OR, 1.24), skin diseases (OR, 1.05), psychotic diseases (OR, 1.07) and lipidemia (OR, 1.05) were significantly elevated for the high exposure group in the division/brigade-level proximity-based exposure analysis, compared to the low exposure group. The ORs for cerebral infarction (OR, 1.08), chronic bronchitis (OR, 1.05), multiple nerve palsy (OR, 1.07), multiple sclerosis (OR, 1.16), skin diseases (OR, 1.05), and lipidemia (OR, 1.05) were significantly elevated for the high exposure group in the battalion/company-level analysis. Conclusions: Korean Vietnam veterans with high exposure to Agent Orange, as assessed by proximity-based exposure measures, experienced a higher prevalence of several self-reported chronic diseases than those with low exposure. The strong positive associations between perceived self-reported exposure and all self-reported diseases should be interpreted with caution, because the likelihood of reporting diseases was directly related to the perceived intensity of Agent Orange exposure.

A Negotiation Framework for the Cloud Management System using Similarity and Gale-Shapley Stable Matching approach

  • Rajavel, Rajkumar;Thangarathinam, Mala
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.6 / pp.2050-2077 / 2015
  • One of the major issues in emerging cloud management systems is the need for an efficient service level agreement negotiation framework with an optimal negotiation strategy. Most researchers focus mainly on the atomic service negotiation model, with the assistance of the Agent Controller in the broker to reduce the total negotiation time and the communication overhead to some extent. This research focuses mainly on composite service negotiation, to further minimize both the total negotiation time and the communication overhead through pre-request optimization of the broker strategy. The main objective of this work is to introduce an Automated Dynamic Service Level Agreement Negotiation Framework (ADSLANF), which consists of an intelligent third-party broker for composite service negotiation between the consumer and the service provider. The broker consists of an Intelligent Third-party Broker Agent, an Agent Controller, and Additional Agent Controllers for managing and controlling its negotiation strategy. The Intelligent Third-party Broker Agent manages the composite service by assigning its atomic services to multiple Agent Controllers. Using the Additional Agent Controllers, the Agent Controllers manage concurrent negotiation with multiple service providers, partially reducing the total negotiation time. Further, the negotiation strategy is optimized in two stages, viz., the Classified Similarity Matching (CSM) approach and the Truncated Negotiation Group Gale-Shapley Stable Matching (TNGGSSM) approach, to minimize the communication overhead.
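
The TNGGSSM stage builds on the classic Gale-Shapley stable matching algorithm. The sketch below shows only that standard algorithm applied to hypothetical consumer-provider preference lists; the paper's truncated-group construction and similarity classification are not reproduced here.

```python
def gale_shapley(consumer_prefs, provider_prefs):
    """Classic Gale-Shapley stable matching between consumers and providers.

    consumer_prefs: {consumer: [providers in preference order]}  (complete lists)
    provider_prefs: {provider: [consumers in preference order]}  (complete lists)
    Returns {provider: consumer}.
    """
    rank = {p: {c: i for i, c in enumerate(prefs)} for p, prefs in provider_prefs.items()}
    free = list(consumer_prefs)          # consumers not yet matched
    next_choice = {c: 0 for c in consumer_prefs}
    engaged = {}                         # provider -> consumer
    while free:
        c = free.pop()
        p = consumer_prefs[c][next_choice[c]]   # propose to the next preferred provider
        next_choice[c] += 1
        if p not in engaged:
            engaged[p] = c
        elif rank[p][c] < rank[p][engaged[p]]:  # provider prefers the new consumer
            free.append(engaged[p])
            engaged[p] = c
        else:
            free.append(c)
    return engaged

# Example with hypothetical preference lists:
# gale_shapley({"c1": ["p1", "p2"], "c2": ["p1", "p2"]},
#              {"p1": ["c2", "c1"], "p2": ["c1", "c2"]})
```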

Autonomous and Asynchronous Triggered Agent Exploratory Path-planning Via a Terrain Clutter-index using Reinforcement Learning

  • Kim, Min-Suk;Kim, Hwankuk
    • Journal of information and communication convergence engineering / v.20 no.3 / pp.181-188 / 2022
  • An intelligent distributed multi-agent system (IDMS) using reinforcement learning (RL) poses a challenging and intricate problem in which one or more agents aim to achieve their specific goals (sub-goals and a final goal) by moving between states in a complex, cluttered environment. The environment provided by the IDMS yields a cumulative optimal reward for each action based on the policy of the learning process. Most actions involve interacting with the given IDMS environment, which therefore provides the following elements: a starting agent state, multiple obstacles, agent goals, and a clutter index. The environment's reward is also reflected by the RL-based agents, which can move randomly or intelligently to reach their respective goals, improving learning performance. We extend several cases of intelligent multi-agent systems from our previous works: (a) a proposed environment clutter-based index for agent sub-goal selection and an analysis of its effect, and (b) a newly proposed RL reward scheme based on the environmental clutter index, used to identify and analyze the prerequisites and conditions for improving the overall system.
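
To make the coupling between the reward and a clutter index concrete, here is a minimal tabular Q-learning sketch with an assumed clutter-penalised reward. The reward shape, the clutter-index values, and the constants are illustrative; the paper's actual reward scheme is not reproduced.

```python
from collections import defaultdict

def clutter_reward(next_state, goal, clutter, step_cost=-0.05, goal_bonus=1.0):
    """Hypothetical reward: a small step cost, an extra penalty proportional to
    the clutter index of the entered cell, and a bonus for reaching the goal."""
    if next_state == goal:
        return goal_bonus
    return step_cost - clutter.get(next_state, 0.0)  # clutter index assumed in [0, 1]

def q_update(Q, s, a, r, s2, actions, alpha=0.1, gamma=0.95):
    """One tabular Q-learning update; Q is a defaultdict(float) keyed by (state, action)."""
    best_next = max(Q[(s2, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

# Usage sketch:
# Q = defaultdict(float)
# r = clutter_reward(s2, goal, clutter)   # after taking action a in state s, landing in s2
# q_update(Q, s, a, r, s2, ACTIONS)
```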

Agent-Oriented Fuzzy Traffic Control Simulation

  • Kim, Jong-Wan;Lee, Seunga;Kim, Youngsoon
    • Journal of the Korean Institute of Intelligent Systems / v.10 no.6 / pp.584-590 / 2000
  • Urban traffic situations are extremely complex and highly interactive. The multi-agent systems approach can provide a desirable new solution. Currently, a traffic simulator is needed to understand and explore the difficulties of agent-oriented traffic control. This paper presents an agent-oriented fuzzy logic controller for simulating multiple crossroads. A fuzzy logic control simulation with arrival, queue, and traffic volume as variables can alleviate traffic congestion. We developed an agent-oriented simulator, implemented in Visual C++, suitable for traffic junctions with n×n intersections. The proposed method adaptively controls the cycle of traffic signals even as the traffic volume varies. The effectiveness of this method was shown through simulation of multiple intersections.
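
A small sketch of a fuzzy signal controller of the general kind described above: triangular membership functions over arrivals and queue length feed a tiny rule base that outputs a green-time extension. The membership ranges, rule weights, and output values are invented for illustration and are not the paper's rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def green_extension(arrivals, queue):
    """Map arrivals at the green approach and queue at the red approach to a
    green-time extension in seconds (weighted-average defuzzification)."""
    arr_many    = tri(arrivals, 3, 8, 12)
    arr_few     = tri(arrivals, -1, 0, 4)
    queue_long  = tri(queue, 5, 10, 15)
    queue_short = tri(queue, -1, 0, 6)
    # Rules: many arrivals and a short cross queue -> extend a lot; otherwise less.
    rules = [
        (min(arr_many, queue_short), 8.0),
        (min(arr_many, queue_long),  3.0),
        (min(arr_few,  queue_short), 1.0),
        (min(arr_few,  queue_long),  0.0),
    ]
    total = sum(w for w, _ in rules)
    return sum(w * ext for w, ext in rules) / total if total else 0.0
```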


Action Selection of Multi-Agent by dynamic coordination graph and MAX-PLUS algorithm for Multi-Task Completion (멀티 태스크 수행을 위한 멀티에이전트의 동적 협력그래프 생성과 MAX-PLUS 방법을 통한 행동결정)

  • Kim, Jeong-Kuk;Im, Gi-Hyeon;Lee, Sang-Hun;Seo, Il-Hong
    • Proceedings of the IEEK Conference / 2006.06a / pp.925-926 / 2006
  • In a multi-agent system for a single task, action selection can be made for a real-time environment by using the global coordination space, the global coordination graph, and the MAX-PLUS algorithm. However, there are some difficulties in a multi-agent system for multi-tasking. In this paper, a real-time decision-making method is suggested that uses the coordination space, the coordination graph, and a dynamic coordinated state of a multi-agent system involving many agents and multiple tasks. Specifically, we propose a locally dynamic coordinated state to use the MAX-PLUS algorithm effectively for completing multiple tasks. Our technique is shown to be valid in a box-pushing simulation of a multi-agent system.
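
For reference, here is a compact sketch of the standard max-plus message-passing scheme on a coordination graph that the abstract builds on. The payoff-table structures are assumed for illustration, and the paper's locally dynamic coordination-state construction is not included; messages are also left unnormalized, as is acceptable for a short sketch.

```python
def max_plus(agents, actions, f_local, f_pair, edges, iters=10):
    """Approximate joint action selection on a coordination graph via max-plus.

    f_local[i][a]: local payoff of agent i taking action a
    f_pair[(i, j)][(ai, aj)]: pairwise payoff on undirected edge (i, j)
    Returns a joint action {agent: action}.
    """
    nbrs = {i: [] for i in agents}
    pair = {}
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
        pair[(i, j)] = f_pair[(i, j)]
        pair[(j, i)] = {(aj, ai): v for (ai, aj), v in f_pair[(i, j)].items()}
    # mu[(i, j)][aj]: message from i to j about j's action aj
    mu = {(i, j): {a: 0.0 for a in actions} for i in agents for j in nbrs[i]}
    for _ in range(iters):
        new_mu = {}
        for i in agents:
            for j in nbrs[i]:
                new_mu[(i, j)] = {
                    aj: max(f_local[i][ai] + pair[(i, j)][(ai, aj)]
                            + sum(mu[(k, i)][ai] for k in nbrs[i] if k != j)
                            for ai in actions)
                    for aj in actions
                }
        mu = new_mu
    # Each agent picks the action maximising its local payoff plus incoming messages.
    return {i: max(actions, key=lambda ai: f_local[i][ai]
                   + sum(mu[(k, i)][ai] for k in nbrs[i]))
            for i in agents}
```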


Motivation based Behavior Sequence Learning for an Autonomous Agent in Virtual Reality

  • Song, Wei;Cho, Kyung-Eun;Um, Ky-Hyun
    • Journal of Korea Multimedia Society / v.12 no.12 / pp.1819-1826 / 2009
  • To enhance the automatic performance of existing prediction and planning algorithms that require a predefined state-transition probability, this paper proposes a multiple sequence generation system. When interacting with unknown environments, a virtual agent needs to decide which action, or order of actions, can result in a good state, and to determine the transition probability based on the current state and the action taken. We describe a sequential behavior generation method motivated by changes in the agent's state, in order to help the virtual agent learn how to adapt to unknown environments. In the sequence learning process, the sensed states are grouped by a set of proposed motivation filters in order to reduce the learning computation over the large state space. In order to accomplish a goal with a high payoff, the learning agent makes decisions based on observed state transitions. The proposed multiple behavior sequence generation system increases the complexity and heightens the automatic planning of the virtual agent for interacting with the dynamic unknown environment. The model was tested in a virtual library to elucidate the process of the system.


Agent Communication with Multiple Ontologies (다중온톨로지의 에이전트 통신)

  • 임동주;오창윤;배상현
    • Journal of the Korea Institute of Information and Communication Engineering / v.5 no.1 / pp.173-182 / 2001
  • In this paper, we discuss how ontology plays a role in building a distributed and heterogeneous knowledge-base system. First, we discuss the relationship between ontology and agents in the Knowledgeable Community, a framework for knowledge sharing and reuse based on a multi-agent architecture. Ontology is a minimum requirement for each agent to join the Knowledgeable Community. Second, we explain mediation by ontology to show how ontology is used in the Knowledgeable Community. A special mediation agent analyzes undirected messages and infers candidate recipient agents by consulting the ontology and the relationships between ontology and agents. Third, we model ontology as a combination of aspects, each of which can represent a way of conceptualization. Aspects are combined either as a combination aspect, which means an integration of aspects, or as a category aspect, which means a choice among aspects. Since ontology by aspect allows heterogeneous, multiple descriptions of phenomena in the world, it is appropriate for heterogeneous knowledge-base systems. We also show the translation of messages as a way of interpreting multiple aspects. A translation agent can translate a message with some aspect into one with another aspect by analyzing the dependency of aspects. Mediation and translation of messages are important for building agents easily and naturally, because each agent needs less knowledge about other agents.
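
The mediation step described above can be pictured as routing by ontology coverage. The sketch below is a deliberately simplified illustration with an assumed term-set registry and overlap scoring; the paper's actual ontology representation and inference are richer than this.

```python
class Mediator:
    """Route an undirected message to the agents whose advertised ontology terms
    best cover the message's terms (illustrative mediation sketch)."""

    def __init__(self):
        self.registry = {}  # agent name -> set of ontology terms it understands

    def register(self, agent, terms):
        self.registry[agent] = set(terms)

    def candidates(self, message_terms, min_overlap=1):
        """Return candidate recipient agents, best coverage first."""
        scored = [(len(self.registry[a] & set(message_terms)), a)
                  for a in self.registry]
        return [a for score, a in sorted(scored, reverse=True) if score >= min_overlap]

# Usage:
# m = Mediator()
# m.register("catalog-agent", {"book", "loan"})
# m.candidates({"book", "reserve"})   # -> ["catalog-agent"]
```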


Reinforcement learning multi-agent using unsupervised learning in a distributed cloud environment

  • Gu, Seo-Yeon;Moon, Seok-Jae;Park, Byung-Joon
    • International Journal of Internet, Broadcasting and Communication / v.14 no.2 / pp.192-198 / 2022
  • Companies build and use their own data analysis systems in the distributed cloud, according to their business characteristics. However, as businesses and data types become more complex and diverse, the demand for more efficient analytics has increased. In response to these demands, this paper proposes an unsupervised learning-based data analysis agent to which reinforcement learning is applied for effective data analysis. The proposed agent consists of a reinforcement learning processing manager module and an unsupervised learning manager module. These two modules configure an agent with k-means clustering on multiple nodes and then perform distributed training on multiple data sets. This enables data analysis in a relatively short time compared with conventional systems that analyze large-scale data in one batch.
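
The k-means-across-multiple-nodes part of the abstract follows a common pattern: each node computes sufficient statistics for its local partition and a coordinator aggregates them each round. The sketch below illustrates that pattern with assumed interfaces; it is not the paper's module structure, and the RL processing manager is omitted.

```python
import numpy as np

def local_stats(data, centroids):
    """Per-node step: assign points to the nearest centroid, return sums and counts."""
    d = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    k = len(centroids)
    sums = np.zeros_like(centroids)
    counts = np.zeros(k)
    for j in range(k):
        mask = labels == j
        sums[j] = data[mask].sum(axis=0)
        counts[j] = mask.sum()
    return sums, counts

def distributed_kmeans(partitions, k, iters=20, seed=0):
    """Coordinator step: aggregate per-node statistics and update centroids each round."""
    rng = np.random.default_rng(seed)
    all_points = np.vstack(partitions)
    centroids = all_points[rng.choice(len(all_points), k, replace=False)].astype(float)
    for _ in range(iters):
        stats = [local_stats(p, centroids) for p in partitions]  # one call per node
        sums = sum(s for s, _ in stats)
        counts = sum(c for _, c in stats)
        centroids = np.where(counts[:, None] > 0,
                             sums / np.maximum(counts, 1)[:, None],
                             centroids)
    return centroids

# Usage: centroids = distributed_kmeans([node1_data, node2_data], k=3)
```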