• Title/Summary/Keyword: Sensor Fusion

FST : Fusion Rate Based Spanning Tree for Wireless Sensor Networks (A Spanning Tree for Data Fusion in Wireless Sensor Networks: FST)

  • Suh, Chang-Jin; Shin, Ji-Soo
    • The KIPS Transactions:PartC / v.16C no.1 / pp.83-90 / 2009
  • A Wireless Sensor Network (WSN) is a wireless network that gathers information from remote areas over autonomously configured routing paths. We propose fusion-based routing for 'convergecast', in which all sensors periodically forward collected data to a base station. Previous research dealt only with the full-fusion and no-fusion cases. Our Fusion rate based Spanning Tree (FST) provides an effective routing topology in terms of total cost over the whole range of the fusion rate f (0 ≤ f ≤ 1). FST is optimal for convergecast in the no-fusion (f = 0) and full-fusion (f = 1) cases, and it outperforms the Shortest Path spanning Tree (SPT) and the Minimum Spanning Tree (MST) for any f in between (0 < f < 1). Simulation of a 100-node WSN shows that the total length of FST is shorter than that of MST and SPT by about 31% and 8%, respectively, over the whole range of f. These results confirm that FST is a very useful WSN topology.
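
A minimal sketch of one plausible fusion-rate cost model (the model, function names, and toy topology are assumptions, not the paper's exact formulation): each node forwards one unit of its own data plus a (1 - f) fraction of the traffic aggregated from its children, so f = 1 recovers an MST-style edge-length cost and f = 0 an SPT-style relay cost.

```python
# Hypothetical illustration of a fusion-rate cost model for convergecast.
def tree_cost(children, lengths, root, f):
    """children: dict node -> list of child nodes
    lengths: dict (parent, child) -> edge length
    f: fusion rate in [0, 1]
    Returns total cost = sum over edges of (edge length * traffic carried)."""
    def outgoing(node):
        # Traffic this node sends to its parent: its own unit of data plus
        # a (1 - f) fraction of what its children sent to it.
        return 1.0 + (1.0 - f) * sum(outgoing(c) for c in children.get(node, []))

    cost = 0.0
    stack = [root]
    while stack:
        node = stack.pop()
        for c in children.get(node, []):
            cost += lengths[(node, c)] * outgoing(c)
            stack.append(c)
    return cost

# Tiny example: base station 0 with the chain 0-1-2.
children = {0: [1], 1: [2]}
lengths = {(0, 1): 1.0, (1, 2): 1.0}
print(tree_cost(children, lengths, root=0, f=0.0))  # 3.0: node 2's data relayed
print(tree_cost(children, lengths, root=0, f=1.0))  # 2.0: fully fused, edge sum
```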

Camera and LiDAR Sensor Fusion for Improving Object Detection (Sensor Fusion for Improving the Object Detection Performance of Camera and LiDAR)

  • Lee, Jongseo; Kim, Mangyu; Kim, Hakil
    • Journal of Broadcast Engineering / v.24 no.4 / pp.580-591 / 2019
  • This paper focuses on improving object detection performance on autonomous vehicle platforms by fusing the objects detected by a camera and a LiDAR through a late-fusion approach. For object detection with the camera, the YOLOv3 model was employed as a one-stage detector, and the distance of the detected objects was estimated from a perspective projection matrix. Object detection with the LiDAR is based on K-means clustering. Camera-LiDAR calibration was carried out with PnP-RANSAC to calculate the rotation and translation matrices between the two sensors. For sensor fusion, the intersection over union (IoU) on the image plane was estimated together with the distance and angle in world coordinates, and all three attributes (IoU, distance, and angle) were fused using logistic regression. Evaluation of the sensor fusion scenario showed a 5% improvement in object detection performance compared to using a single sensor.
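
A hypothetical sketch of the late-fusion scoring step described above; the weight values, function names, and feature scaling are placeholders, not the paper's trained logistic regression model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fusion_score(iou, dist_gap_m, angle_gap_deg,
                 w=(4.0, -0.5, -0.1), b=-1.0):
    """Probability that a camera detection and a LiDAR cluster are the same
    object, from image-plane IoU and world-coordinate distance/angle gaps."""
    z = w[0] * iou + w[1] * dist_gap_m + w[2] * angle_gap_deg + b
    return sigmoid(z)

# A well-aligned pair scores high; a mismatched pair scores low.
print(fusion_score(iou=0.8, dist_gap_m=0.4, angle_gap_deg=1.0))
print(fusion_score(iou=0.1, dist_gap_m=6.0, angle_gap_deg=15.0))
```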

Modeling and Design of a Distributed Detection System Based on Active Sonar Sensor Networks (Modeling and Design of a Distributed Detection System for an Active Sonar Network)

  • Choi, Won-Yong; Kim, Song-Geun; Hong, Sun-Mog
    • Journal of the Korea Institute of Military Science and Technology / v.14 no.1 / pp.123-131 / 2011
  • In this paper, the modeling and design of a distributed detection system are considered for an active sonar sensor network. The sensor network has a parallel configuration and consists of a fusion center and a set of receiver nodes. A system with two receiver nodes is considered to investigate the theoretical aspects of the design; specifically, the AND rule and the OR rule are considered as the fusion rules of the sensor network. For these fusion rules, it is shown that a threshold rule at each sensor node has uniformly most powerful properties, and the optimum threshold is obtained for each sensor that maximizes the probability of detection for a given probability of false alarm. Numerical experiments were also performed to investigate the detection characteristics of a distributed detection system with multiple sensor nodes. The experimental results show how signal strength, false-alarm probability, and the distance between nodes in the sensor field affect the system's detection performance.
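
A minimal sketch of how the two fusion rules combine local decisions at the fusion center, assuming independent sensor decisions (the probability values are illustrative):

```python
def and_rule(p1, p2):
    # Fusion center declares a detection only if both nodes do.
    return p1 * p2

def or_rule(p1, p2):
    # Fusion center declares a detection if at least one node does.
    return 1.0 - (1.0 - p1) * (1.0 - p2)

pd1, pd2 = 0.90, 0.85    # per-node detection probabilities
pfa1, pfa2 = 0.01, 0.01  # per-node false-alarm probabilities

print("AND: Pd=%.3f Pfa=%.5f" % (and_rule(pd1, pd2), and_rule(pfa1, pfa2)))
print("OR : Pd=%.3f Pfa=%.5f" % (or_rule(pd1, pd2), or_rule(pfa1, pfa2)))
```

The AND rule trades detection probability for a much lower false-alarm rate; the OR rule does the opposite, which is why the optimum per-sensor thresholds differ between the two rules.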

A Novel Clustering Method with Time Interval for Context Inference based on Multi-sensor Data Fusion (A Clustering Method Considering Time Lapse in Context Inference Based on Multi-sensor Data Fusion)

  • Ryu, Chang-Keun; Park, Chan-Bong
    • The Journal of the Korea institute of electronic communication sciences / v.8 no.3 / pp.397-402 / 2013
  • Time variation is an essential component of context awareness. For context inference using information from sensor motes, it is beneficial not only to account for time lapse but also to cluster the data by time interval. In this study, we propose a novel clustering-based multi-sensor data fusion method for context inference. Within a time interval, we fuse the signals sensed in each time slot and then fuse the per-slot results once more with the result of the first fusion. By assessing the signal segmented by time interval with Dempster-Shafer evidence theory based multi-sensor data fusion, we achieve enhanced context inference.
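
A minimal sketch of Dempster's rule of combination, the core operation in Dempster-Shafer evidence fusion; the frame of discernment and the mass values below are illustrative, not taken from the paper:

```python
from itertools import product

def combine(m1, m2):
    """Combine two mass functions (dict: frozenset of hypotheses -> mass)
    with Dempster's rule, normalizing away the conflicting mass."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

A, B = frozenset({"A"}), frozenset({"B"})
AB = frozenset({"A", "B"})            # ignorance: either context possible
m_slot1 = {A: 0.6, B: 0.1, AB: 0.3}   # evidence fused from one time slot
m_slot2 = {A: 0.5, B: 0.2, AB: 0.3}   # evidence fused from the next slot
print(combine(m_slot1, m_slot2))      # belief in "A" sharpens after fusion
```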

Motion Estimation of 3D Planar Objects using Multi-Sensor Data Fusion (A Study on Motion Estimation of Moving Objects Using Sensor Fusion)

  • Yang, Woo-Suk
    • Journal of Sensor Science and Technology / v.5 no.4 / pp.57-70 / 1996
  • Motion can be estimated continuously from each sensor through analysis of the instantaneous states of an object. This paper introduces a method for estimating the general 3D motion of a planar object from its instantaneous states using multi-sensor data fusion. The instantaneous states of the object are estimated using a linear feedback estimation algorithm, and the motion estimates from the individual sensors are fused to provide more accurate and reliable information about the motion of the unknown planar object. We present a fusion algorithm that combines averaging and deciding. Under the assumption that the motion is smooth, the approach can handle data sequences from multiple sensors with different sampling times. Simulation results show that the proposed algorithm is advantageous in terms of accuracy, speed, and versatility.
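
A minimal sketch of the "averaging" half of such a fusion rule, using inverse-variance weighting (a common choice, assumed here rather than taken from the paper); a "deciding" stage could first discard estimates that disagree too strongly:

```python
import numpy as np

def fuse_estimates(estimates, variances):
    """Fuse scalar estimates of the same motion parameter.
    Weights are proportional to 1/variance, so more reliable
    sensors contribute more to the fused value."""
    estimates = np.asarray(estimates, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(weights * estimates) / np.sum(weights)
    fused_var = 1.0 / np.sum(weights)  # variance of the fused estimate
    return fused, fused_var

# Three sensors estimating, say, angular velocity about one axis (rad/s).
print(fuse_estimates([0.52, 0.48, 0.55], [0.010, 0.020, 0.050]))
```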

Radar and Vision Sensor Fusion for Primary Vehicle Detection (Development of a Forward Vehicle Recognition Algorithm via Radar and Vision Sensor Fusion)

  • Yang, Seung-Han; Song, Bong-Sob; Um, Jae-Young
    • Journal of Institute of Control, Robotics and Systems / v.16 no.7 / pp.639-645 / 2010
  • This paper presents a sensor fusion algorithm that recognizes the primary vehicle by fusing radar and monocular vision data. In general, most commercial radars may lose track of the primary vehicle, i.e., the closest preceding vehicle in the same lane, when it stops or moves alongside other preceding vehicles in an adjacent lane at a similar velocity and range. To mitigate this degradation of the radar, vehicle detections from the vision sensor and the path predicted from ego-vehicle sensors are combined for target classification. The classification result is then used with probabilistic association filters to track the primary vehicle. Finally, the performance of the proposed sensor fusion algorithm is validated using field test data from a highway.
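
A hypothetical sketch of the gating step that typically precedes probabilistic association (the measurement model, covariance values, and threshold are assumptions, not the paper's parameters): a vision detection is considered for the radar track only if its Mahalanobis distance to the predicted measurement falls inside a chi-square gate.

```python
import numpy as np

def mahalanobis_gate(z, z_pred, S, gate=9.21):
    """z: measurement [range_m, azimuth_rad]; z_pred: predicted measurement;
    S: innovation covariance; gate: chi-square threshold (2 dof, ~99%)."""
    innov = z - z_pred
    d2 = innov @ np.linalg.inv(S) @ innov
    return d2, d2 < gate

S = np.diag([1.0, 0.01])          # illustrative innovation covariance
z_pred = np.array([35.0, 0.02])   # track's predicted range/azimuth
d2, ok = mahalanobis_gate(np.array([35.8, 0.03]), z_pred, S)
print(d2, ok)  # small distance -> candidate for the primary-vehicle track
```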

Kalman Filter Based Pose Data Fusion with Optical Tracking System and Inertial Navigation System Networks for Image Guided Surgery (Kalman Filter Based Fusion of Pose Information from Optical Tracking and Inertial Navigation Sensor Networks for Image Guided Surgery)

  • Oh, Hyun Min; Kim, Min Young
    • The Transactions of The Korean Institute of Electrical Engineers / v.66 no.1 / pp.121-126 / 2017
  • A tracking system is essential for Image Guided Surgery (IGS). The Optical Tracking System (OTS) is widely used in IGS because of its high accuracy and ease of use; however, OTS fails when the markers are occluded. In this paper, sensor data fusion of OTS and an Inertial Navigation System (INS) is proposed to solve this problem. The proposed system improves tracking accuracy by eliminating Gaussian sensor error and compensates for the respective disadvantages of OTS and INS through Kalman filter based sensor fusion. A sensor calibration method that further improves accuracy is also introduced. Experiments verify the effectiveness of the proposed algorithm.
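
A minimal scalar sketch of the fusion idea (noise values and the occlusion pattern are illustrative, and the paper's filter is certainly richer): the filter predicts with INS rates every step and corrects with OTS measurements only when the marker is visible.

```python
def predict(x, P, ins_rate, dt, q):
    x = x + ins_rate * dt   # integrate the INS rate measurement
    P = P + q               # process noise grows the uncertainty
    return x, P

def update(x, P, z_ots, r):
    K = P / (P + r)         # Kalman gain for a direct pose measurement
    x = x + K * (z_ots - x)
    P = (1.0 - K) * P
    return x, P

x, P = 0.0, 1.0
for step in range(5):
    x, P = predict(x, P, ins_rate=0.1, dt=0.01, q=1e-4)
    if step != 2:           # step 2 simulates a marker occlusion:
        x, P = update(x, P, z_ots=0.001 * (step + 1), r=1e-3)
print(x, P)                 # the INS carries the estimate through the gap
```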

Localization of Outdoor Wheeled Mobile Robots using Indirect Kalman Filter Based Sensor Fusion (Localization of an Outdoor Mobile Robot Using Indirect Kalman Filter Based Sensor Fusion)

  • Kwon, Ji-Wook; Park, Mun-Soo; Kim, Tae-Un; Chwa, Dong-Kyoung; Hong, Suk-Kyo
    • Journal of Institute of Control, Robotics and Systems / v.14 no.8 / pp.800-808 / 2008
  • This paper presents a localization algorithm for an outdoor wheeled mobile robot using a sensor fusion method based on an indirect Kalman filter (IKF). The wheeled mobile robot considered in this paper is approximated as a two-wheeled mobile robot and carries an IMU and encoders for inertial positioning, as well as GPS. Because the IMU and the encoders have bias errors, the position estimated from them can diverge from the measured data when the robot moves for a long time; and because of many natural and artificial conditions (e.g., the atmosphere or the GPS receiver itself), GPS shows errors of up to about 10~20 m even over short runs. A fusion algorithm for the IMU, encoders, and GPS is therefore needed. For sensor fusion, we use an IKF that estimates the errors of the robot's position. The proposed IKF can also be applied to other autonomous agents (e.g., UAVs and UGVs) because it operates on the position errors of the vehicle. We show the stability of the proposed sensor fusion method using the fact that the covariance of the IKF error state is bounded. To evaluate the performance of the proposed algorithm, simulation and experimental results of the IKF for the pose (x position, y position, and yaw angle) of the outdoor wheeled mobile robot are presented.
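
A minimal sketch of the indirect (error-state) formulation, with illustrative matrices and noise values (the paper's models are certainly more detailed): the filter estimates the error between the dead-reckoned pose (IMU plus encoders) and the true pose, using the GPS-minus-dead-reckoning residual as the measurement.

```python
import numpy as np

F = np.eye(3)                      # error dynamics (random-walk assumption)
H = np.eye(3)                      # GPS observes all three error components
Q = np.diag([1e-3, 1e-3, 1e-4])    # process noise
R = np.diag([4.0, 4.0, 1e-2])      # GPS noise (m^2, m^2, rad^2)

def ikf_step(dx, P, dead_reckoned, gps_pose):
    # Predict the error state.
    dx = F @ dx
    P = F @ P @ F.T + Q
    # Measurement: discrepancy between GPS and dead reckoning.
    z = gps_pose - dead_reckoned
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    dx = dx + K @ (z - H @ dx)
    P = (np.eye(3) - K @ H) @ P
    # Corrected pose = dead reckoning plus estimated error.
    return dx, P, dead_reckoned + dx

dx, P = np.zeros(3), np.eye(3)     # error state: [ex, ey, eyaw]
dx, P, pose = ikf_step(dx, P,
                       dead_reckoned=np.array([10.0, 5.0, 0.10]),
                       gps_pose=np.array([11.5, 4.2, 0.12]))
print(pose)
```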

A Study on a Multi-sensor Information Fusion Architecture for Avionics (A Study on a Multi-sensor Information Fusion Architecture for Avionics)

  • Kang, Shin-Woo; Lee, Seoung-Pil; Park, Jun-Hyeon
    • Journal of Advanced Navigation Technology / v.17 no.6 / pp.777-784 / 2013
  • The synthesis of data produced by different types of sensors into a single piece of information, known as multi-sensor data fusion, is being studied and applied on a variety of platforms. Heterogeneous sensors have been integrated into various aircraft, and modern avionic systems manage them. As the performance of on-board sensors increases, the integration of sensor information is increasingly required from the avionics point of view. Information fusion has not been widely studied from the software perspective, that is, software that presents the pilot with information fused from sensor data in the form of symbology on a display device. The purpose of information fusion is to help pilots make mission decisions by presenting the correct combat situation from the aircraft's avionics, and consequently to minimize their workload. For aircraft avionics equipped with different types of sensors, this paper presents a software architecture that provides the user with comprehensive information produced from sensor data through a multi-sensor data fusion process.
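
A hypothetical sketch of the kind of software structure such an architecture implies; the class names, the confidence-weighted fusion policy, and the symbology string are assumptions for illustration, not the paper's design:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TrackReport:
    sensor: str
    position: tuple      # (x_m, y_m)
    confidence: float    # 0..1

def fuse(reports: List[TrackReport]) -> TrackReport:
    """Confidence-weighted merge of per-sensor reports of one target."""
    w = sum(r.confidence for r in reports)
    x = sum(r.confidence * r.position[0] for r in reports) / w
    y = sum(r.confidence * r.position[1] for r in reports) / w
    return TrackReport("fused", (x, y), max(r.confidence for r in reports))

def render_symbology(track: TrackReport) -> str:
    """Display layer: turn the fused track into on-screen symbology text."""
    return f"[{track.sensor}] target at {track.position} conf={track.confidence:.2f}"

reports = [TrackReport("radar", (1200.0, 340.0), 0.9),
           TrackReport("eo/ir", (1185.0, 352.0), 0.7)]
print(render_symbology(fuse(reports)))
```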

Cylindrical Object Recognition using Sensor Data Fusion (Cylindrical Object Recognition Using Sensor Data Fusion)

  • Kim, Dong-Gi; Yun, Gwang-Ik; Yun, Ji-Seop; Gang, Lee-Seok
    • Journal of Institute of Control, Robotics and Systems / v.7 no.8 / pp.656-663 / 2001
  • This paper presents a sensor fusion method for recognizing a cylindrical object using a CCD camera, a laser slit beam, and ultrasonic sensors on a pan/tilt device. For object recognition with the vision sensor, an active light source projects a stripe pattern of light onto the object surface, and the 2D image data are transformed into 3D data using the geometry between the camera and the laser slit beam. The ultrasonic sensing uses a transducer array mounted horizontally on the pan/tilt device. The time of flight is estimated by finding the maximum correlation between the received ultrasonic pulse and a set of stored templates, also called a matched filter. The distance is calculated by simply multiplying the time of flight by the speed of sound, and the maximum amplitude of the filtered signal is used to determine the face angle to the object. To determine the position and radius of cylindrical objects, we use statistical sensor fusion. Experimental results show that the fused data increase the reliability of object recognition.
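
A minimal sketch of matched-filter time-of-flight estimation as described above (sampling rate, pulse shape, and noise level are illustrative): cross-correlate the received signal with a stored template and take the lag of the correlation peak as the delay.

```python
import numpy as np

FS = 200_000          # sampling rate, Hz
SPEED_OF_SOUND = 343  # m/s

def estimate_tof(received, template, fs=FS):
    """Matched filter: the lag of the correlation peak is the delay."""
    corr = np.correlate(received, template, mode="full")
    lag = np.argmax(corr) - (len(template) - 1)   # delay in samples
    return lag / fs, np.max(corr)                 # (time of flight, amplitude)

# Synthetic test: bury a delayed copy of the template in noise.
template = np.sin(2 * np.pi * 40_000 * np.arange(200) / FS)
received = np.zeros(2000)
received[600:800] += template                 # echo arriving 600 samples late
received += 0.1 * np.random.randn(2000)

tof, amplitude = estimate_tof(received, template)
print("distance: %.3f m" % (tof * SPEED_OF_SOUND))  # as in the paper's model
```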
