• Title/Summary/Keyword: Multi-user Multi-robot

A Multi-Modal Complex Motion Authoring Tool for Creating Robot Contents

  • Seok, Kwang-Ho;Kim, Yoon-Sang
    • Journal of Korea Multimedia Society, v.13 no.6, pp.924-932, 2010
  • This paper proposes a multi-modal complex motion authoring tool for creating robot contents. The proposed tool is user-friendly and allows general users without much knowledge about robots, including children, women and the elderly, to easily edit and modify robot contents. Furthermore, the tool uses multi-modal data including graphic motion, voice and music to simulate user-created robot contents in a 3D virtual environment. This allows the user not only to view the authoring process in real time but also to transmit the final authored contents to control the robot. The validity of the proposed tool was examined through simulations using the authored multi-modal complex motion robot contents as well as experiments with actual robot motions.
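
The abstract describes content that bundles graphic motion with voice and music and is transmitted to the robot after simulation. As an illustration only, the sketch below assumes a hypothetical content format (keyframed joint angles plus timed audio cues); the paper's actual data structures are not specified here.

```python
# Minimal sketch of a multi-modal "robot content" container, assuming a
# hypothetical format: joint-motion keyframes plus voice/music cues on a
# shared timeline. The paper's actual data format is not given.
from dataclasses import dataclass, field
from typing import List
import json


@dataclass
class MotionKeyframe:
    time_s: float                  # timestamp on the content timeline
    joint_angles_deg: List[float]  # target joint angles at this time


@dataclass
class AudioCue:
    time_s: float   # when to start playback
    kind: str       # "voice" or "music"
    clip: str       # identifier of the audio clip


@dataclass
class RobotContent:
    """User-authored content: graphic motion plus voice/music cues."""
    keyframes: List[MotionKeyframe] = field(default_factory=list)
    audio_cues: List[AudioCue] = field(default_factory=list)

    def to_message(self) -> bytes:
        """Serialize for transmission to the robot after 3D simulation."""
        payload = {
            "keyframes": [vars(k) for k in self.keyframes],
            "audio": [vars(a) for a in self.audio_cues],
        }
        return json.dumps(payload).encode("utf-8")


content = RobotContent(
    keyframes=[MotionKeyframe(0.0, [0, 0, 0]), MotionKeyframe(1.0, [30, -15, 10])],
    audio_cues=[AudioCue(0.0, "music", "intro_theme")],
)
print(content.to_message())
```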

Design of network for data interaction between Robot Agents in Multi Agent Robot System (MARS) (Multi Agent Robot System(MARS)의 Robot Agent 간 정보교환을 위한 네트워크 프로그램 구현)

  • Ko, Kwang-Eun;Lee, Jeong-Soo;Jang, In-Hun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems, v.17 no.5, pp.712-717, 2007
  • In a home network system that includes a home server, a home service robot and a variety of devices, applying a Multi Agent System is generally known to be an efficient way of handling the diverse distributed processes that occur in the home environment. In such a system, the intelligent service robot is the key provider of the human interface and of physical services. By applying the established multi-agent approach, we can therefore define a Multi Agent Robot System (MARS). In an 'open' home environment, data interaction and cooperation among all agents are needed for the Multi Agent System to offer more efficient services to the user. To this end, we define the physical-service robots capable of autonomous driving as agents, and we design and propose a simulator that displays, through a user interface, the communication information exchanged between robot agents and between robot agents and other agents such as the home server.
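
The entry concerns data interaction between robot agents and other agents such as the home server. The sketch below illustrates the idea with a simple in-process publish/subscribe bus; the class names and topics are assumptions, not the paper's network program.

```python
# Minimal sketch of agent-to-agent data interaction, assuming a simple
# in-process publish/subscribe bus. HomeServer/RobotAgent names and the
# "task"/"status" topics are illustrative only.
from collections import defaultdict
from typing import Callable, Dict, List


class MessageBus:
    """Routes messages between agents by topic."""
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[str, dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[str, dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, sender: str, data: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(sender, data)


class RobotAgent:
    def __init__(self, name: str, bus: MessageBus) -> None:
        self.name, self.bus = name, bus
        bus.subscribe("task", self.on_task)

    def on_task(self, sender: str, data: dict) -> None:
        print(f"{self.name} received task {data} from {sender}")
        # Report status back so other agents can cooperate.
        self.bus.publish("status", self.name, {"task": data["id"], "state": "accepted"})


bus = MessageBus()
robots = [RobotAgent(f"robot{i}", bus) for i in range(2)]
bus.subscribe("status", lambda sender, data: print(f"home_server sees {sender}: {data}"))
bus.publish("task", "home_server", {"id": 1, "action": "deliver"})
```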

Teleoperation of Field Mobile Manipulator with Wearable Haptic-based Multi-Modal User Interface and Its Application to Explosive Ordnance Disposal

  • Ryu Dongseok;Hwang Chang-Soon;Kang Sungchul;Kim Munsang;Song Jae-Bok
    • Journal of Mechanical Science and Technology, v.19 no.10, pp.1864-1874, 2005
  • This paper describes a wearable multi-modal user interface design and its implementation for a teleoperated field robot system. Recently, teleoperated field robots have been employed for hazardous-environment applications (e.g. rescue, explosive ordnance disposal, security). To complete these missions in outdoor environments, the robot system must have appropriate functions, accuracy and reliability. However, the more functions it has, the more difficult it becomes to operate them, so an effective user interface is needed; furthermore, the interface must be wearable for portability and prompt action. This research starts from the question of how to teleoperate a complicated slave robot easily. The main challenge is to make a simple and intuitive user interface in a wearable shape and size. The interface provides multiple modalities, namely visual, auditory and haptic feedback, enabling an operator to control every function of the field robot more intuitively. Finally, an EOD (explosive ordnance disposal) demonstration is conducted to verify the validity of the proposed wearable multi-modal user interface.
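
The interface combines visual, auditory and haptic feedback. As a rough illustration of such multi-modal dispatch, the sketch below maps two assumed robot-state values (gripper force, obstacle distance) to three feedback channels; the thresholds and channel functions are invented for the example and are not the authors' implementation.

```python
# Illustrative sketch: route slave-robot feedback to visual, auditory and
# haptic channels. All values and thresholds are assumptions.
def visual(msg: str) -> None:
    print(f"[display] {msg}")

def audio(msg: str) -> None:
    print(f"[beep]    {msg}")

def haptic(level: float) -> None:
    print(f"[vibrate] intensity={level:.2f}")


def dispatch_feedback(gripper_force_n: float, obstacle_dist_m: float) -> None:
    """Map robot state to the three feedback modalities."""
    visual(f"force={gripper_force_n:.1f} N, distance={obstacle_dist_m:.2f} m")
    if obstacle_dist_m < 0.3:                 # assumed proximity threshold
        audio("obstacle close")
    # Scale contact force into a 0..1 vibration intensity (assumed 20 N max).
    haptic(min(gripper_force_n / 20.0, 1.0))


dispatch_feedback(gripper_force_n=8.5, obstacle_dist_m=0.25)
```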

Auto Sequencing User Interface for Mobile Robot Using Multi Sensor System (다중 센서 시스템을 이용한 이동로봇의 자동-절환 사용자 인터페이스)

  • Song, Tae-Houn;Park, Ji-Hwan;Park, Jong-Hyun;Jung, Soon-Mook;Hong, Soon-Hyuk;Kim, Gi-Oh;Jeon, Jae-Wook
    • Proceedings of the HCI Society of Korea Conference, 2008.02a, pp.319-325, 2008
  • In this paper, we develop a multi-sensor system to obtain sufficient information about a mobile robot's environment. The mobile robot user interface, based on this multi-sensor system, can choose a suitable sensor from among the low-cost sensors and then acquire information from the remote robot's workspace using an auto-sequencing user display function. The multi-sensor system consists of an ultrasonic sensor, a position sensing detector and a low-cost CMOS camera module.
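
The abstract describes automatically switching the operator display among low-cost sensors. A minimal sketch of one possible auto-sequencing rule is given below, assuming a simple range-based policy; the sensor names and thresholds are illustrative and not taken from the paper.

```python
# Minimal sketch of an "auto-sequencing" display policy, assuming the UI
# shows whichever sensor best matches the current range to the nearest
# obstacle. Thresholds are assumptions for illustration.
from typing import Dict


def select_sensor(readings: Dict[str, float]) -> str:
    """Choose which sensor view to show next in the operator UI.

    readings: latest range estimate (metres) per sensor.
    """
    nearest = min(readings.values())
    if nearest < 0.5:
        return "ultrasonic"      # short range: show the sonar panel
    if nearest < 2.0:
        return "psd"             # mid range: position sensing detector
    return "cmos_camera"         # far field: camera view of the workspace


readings = {"ultrasonic": 0.4, "psd": 1.2, "cmos_camera": 3.0}
print("display:", select_sensor(readings))   # -> display: ultrasonic
```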

A Study on Education Software for Controling of Multi-Joint Robot (다관절 로봇 제어를 위한 교육용 소프트웨어 연구)

  • Kim, Jae-Soo;Son, Hyun-Seung;Kim, Woo-Yeol;Kim, Young-Chul
    • Journal of The Korean Association of Information Education, v.12 no.4, pp.469-476, 2008
  • To enhance the educational effect of a multi-joint robot, its motions must be easy to develop through control software. The traditional way of developing motions for a multi-joint robot is taught through very complicated implementations, whereas our motion creation tool makes creative work on robot movements easy. This paper describes a motion creation tool for easily and quickly programming the motion control of a multi-joint robot in an educational setting, so that robot programming can be taught easily and precisely. The suggested tool not only avoids the traditional, complicated control programs written in programming languages but also controls the robot more easily than GUI (Graphic User Interface) programming, with the focus on the user's convenience. Additionally, the robot motions can be applied to microprocessor experimental equipment for practical educational use.
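
A motion creation tool of this kind typically reduces robot programming to editing joint-angle keyframes. The sketch below shows linear interpolation between keyframes to produce joint setpoints; the frame rate, joint count and values are assumptions for illustration, not the tool's internals.

```python
# Minimal sketch of the kind of motion such a tool might generate: linear
# interpolation between joint-angle keyframes, producing setpoints that
# could be streamed to a microprocessor board.
from typing import List, Tuple

Keyframe = Tuple[float, List[float]]  # (time in seconds, joint angles in degrees)


def interpolate(keyframes: List[Keyframe], t: float) -> List[float]:
    """Return joint angles at time t by linear interpolation between keyframes."""
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    for (t0, q0), (t1, q1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return [x0 + a * (x1 - x0) for x0, x1 in zip(q0, q1)]
    return keyframes[-1][1]


motion: List[Keyframe] = [(0.0, [0, 0, 0]), (1.0, [45, -30, 20]), (2.0, [0, 0, 0])]
for step in range(5):                      # 5 setpoints over 2 seconds
    t = step * 0.5
    print(t, interpolate(motion, t))
```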

User-Oriented Controller Design for Multi-Axis Manipulators (다관절 머니퓰레이터의 사용자 중심 제어기 설계)

  • Son, HeonSuk;Kang, DaeHoon;Lee, JangMyung
    • IEMEK Journal of Embedded Systems and Applications, v.3 no.2, pp.49-56, 2008
  • This paper proposes a PC-based open-architecture controller for a multi-axis robotic manipulator. The designed controller can be applied to various multi-axis robotic manipulators since the motion controller is implemented on a PC with its peripheral devices, making it independent of the user's circumstances and the operating environment. The accuracy of the controller, which is based on the computed torque method, has been measured with the dynamic model of the manipulator. The dynamics of the manipulator are compensated by a feedforward path in the inner loop, and the resulting linear outer loop is controlled by a PD algorithm. Using a specialized language, programming and driving the multi-axis robot become more efficient. Unlike a conventional controller that is dedicated to a specific robot, this controller can easily be changed for various types of robots, and its simpler architecture and interface circuits, compared with general commercial controllers, make its maintenance and performance easier to improve for a specific robot. Using a Samsung multi-axis robot, AT1, the performance and convenience of the PC-based controller have been verified by comparison with a commercial controller.
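
The abstract describes inner-loop dynamics compensation (computed torque) with a linear PD outer loop. The sketch below applies that structure to a single-link arm with assumed parameters and gains; it is illustrative only and not the authors' controller.

```python
# Minimal sketch of computed-torque control with a PD outer loop for a
# single-link arm. Model parameters and gains are assumptions.
import math

# Assumed 1-DOF link model: I*qdd + b*qd + m*g*l*sin(q) = tau
I, b, m, g, l = 0.05, 0.1, 1.0, 9.81, 0.3
Kp, Kd = 100.0, 20.0                      # PD gains for the linear outer loop


def computed_torque(q, qd, q_des, qd_des, qdd_des):
    """Inner loop: cancel dynamics; outer loop: PD on tracking error."""
    e, ed = q_des - q, qd_des - qd
    v = qdd_des + Kd * ed + Kp * e        # desired acceleration (outer loop)
    return I * v + b * qd + m * g * l * math.sin(q)   # feedforward compensation


# Simple simulation toward a 90-degree setpoint.
q, qd, dt = 0.0, 0.0, 0.001
for step in range(2000):
    tau = computed_torque(q, qd, q_des=math.pi / 2, qd_des=0.0, qdd_des=0.0)
    qdd = (tau - b * qd - m * g * l * math.sin(q)) / I
    qd += qdd * dt
    q += qd * dt
print(f"angle after 2 s: {math.degrees(q):.1f} deg")
```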

Emotion Recognition and Expression System of Robot Based on 2D Facial Image (2D 얼굴 영상을 이용한 로봇의 감정인식 및 표현시스템)

  • Lee, Dong-Hoon;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems, v.13 no.4, pp.371-376, 2007
  • This paper presents an emotion recognition and expression system for an intelligent robot such as a home robot or a service robot. The robot recognizes emotion from a facial image, using the motion and position of multiple facial features. A tracking algorithm is applied so that the mobile robot can recognize a moving user, and a facial-region detection algorithm eliminates the skin color of the hands and the background outside the facial region in the captured user image. After normalization operations, in which the image is enlarged or reduced according to the distance of the detected facial region and rotated according to the angle of the face, the mobile robot obtains a facial image of fixed size. A multi-feature selection algorithm is implemented to enable the robot to recognize the user's emotion. A multi-layer perceptron, a form of Artificial Neural Network (ANN), is used as the pattern recognizer and the Back Propagation (BP) algorithm as the learning algorithm. The emotion that the robot recognizes is expressed on a graphic LCD: two coordinates are changed according to the emotion output of the ANN, and the parameters of the facial elements (eyes, eyebrows, mouth) change with these two coordinates. The implemented system thus expresses complex human emotions as an avatar on the LCD.
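
The recognizer described is a multi-layer perceptron trained with back-propagation. The sketch below implements that training loop on toy data; the feature dimension, hidden size, emotion count and data are placeholders, not the paper's.

```python
# Minimal sketch of an MLP trained with back-propagation, standing in for
# the paper's facial-feature emotion recognizer. Dimensions and data are
# illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_hidden, n_emotions = 8, 12, 4   # assumed dimensions

W1 = rng.normal(0, 0.5, (n_features, n_hidden))
W2 = rng.normal(0, 0.5, (n_hidden, n_emotions))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy training data: 40 feature vectors with one-hot emotion labels.
X = rng.normal(size=(40, n_features))
Y = np.eye(n_emotions)[rng.integers(0, n_emotions, 40)]

lr = 0.5
for epoch in range(500):
    # Forward pass.
    H = sigmoid(X @ W1)
    O = sigmoid(H @ W2)
    # Back-propagation of the squared error through both layers.
    dO = (O - Y) * O * (1 - O)
    dH = (dO @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ dO / len(X)
    W1 -= lr * X.T @ dH / len(X)

print("recognized emotion index:", int(np.argmax(sigmoid(sigmoid(X[:1] @ W1) @ W2))))
```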

Behavior Realization of Multi-Robots Responding to User's Input Characters (사용자 입력 문자에 반응하는 군집 로봇 행동 구현)

  • Jo, Young-Rae;Lee, Kil-Ho;Jo, Sung-Ho;Shin, In-Sik
    • Journal of Institute of Control, Robotics and Systems, v.18 no.5, pp.419-425, 2012
  • This paper presents an approach to implementing the behaviors of multiple robots responding to a user's input characters. The robots are displaced appropriately to form any input character, so any user can easily and intuitively control the multi-robots, and the robots' responses to the user's input are intuitive. We utilize the centroidal Voronoi tessellation and the continuous-time Lloyd algorithm, which have been widely used for optimal sensing-coverage problems. Collision avoidance is considered so the method can be applied to real robots, and LED sensors are used to identify the positions of the multi-robots. Our approach is evaluated through experiments with five mobile robots: when a user draws alphabet characters, the robots are deployed correspondingly, and the feasibility of the method is validated by checking position errors.
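
The deployment method is based on centroidal Voronoi tessellation and Lloyd iterations. The sketch below runs a discrete Lloyd update over sample points of a target stroke; the shape, robot count and step size are illustrative, and the paper's continuous-time formulation and collision handling are omitted.

```python
# Minimal sketch of a Lloyd-style iteration toward a centroidal Voronoi
# configuration: each robot repeatedly moves toward the centroid of the
# sample points (pixels of the target character) closest to it.
import numpy as np

rng = np.random.default_rng(1)

# Sample points representing the character the user has drawn (a vertical bar "I").
shape_pts = np.column_stack([np.full(200, 0.5), np.linspace(0.0, 1.0, 200)])

robots = rng.random((5, 2))              # initial positions of 5 robots

for iteration in range(50):
    # Voronoi assignment: each shape point goes to its nearest robot.
    d = np.linalg.norm(shape_pts[:, None, :] - robots[None, :, :], axis=2)
    owner = d.argmin(axis=1)
    # Lloyd update: move every robot toward the centroid of its cell.
    for i in range(len(robots)):
        cell = shape_pts[owner == i]
        if len(cell):
            robots[i] += 0.5 * (cell.mean(axis=0) - robots[i])

print(np.round(robots, 2))               # robots spread along the stroke
```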

A Study on the Intention to Use a Robot-based Learning System with Multi-Modal Interaction (멀티모달 상호작용 중심의 로봇기반교육 콘텐츠를 활용한 r-러닝 시스템 사용의도 분석)

  • Oh, Junseok;Cho, Hye-Kyung
    • Journal of Institute of Control, Robotics and Systems, v.20 no.6, pp.619-624, 2014
  • This paper introduces a robot-based learning system designed to teach multiplication to children. In addition to a small humanoid and a smart device delivering educational content, we employ a type of mixed-initiative operation which provides enhanced multi-modal cognition to the r-learning system through human intervention. To investigate the major factors that influence people's intention to use the r-learning system and to see how the multi-modality affects these factors, we performed a user study based on TAM (the Technology Acceptance Model). The results indicate that system quality and natural interaction are key factors in the adoption of the r-learning system, and they also reveal interesting implications related to human behavior.