• Title/Summary/Keyword: Sensor Fusion


Building DSMs Generation Integrating Three Line Scanner (TLS) and LiDAR

  • Suh, Yong-Cheol;Nakagawa, Masafumi
    • Korean Journal of Remote Sensing
    • /
    • v.21 no.3
    • /
    • pp.229-242
    • /
    • 2005
  • Photogrammetry is a standard method of GIS data acquisition. However, producing detailed 3D spatial information requires considerable manpower and expenditure, especially in urban areas where many buildings exist, and no photogrammetric system can fully automate the acquisition of spatial information. LiDAR, on the other hand, has high potential for automating 3D spatial data acquisition because it directly measures the 3D coordinates of objects, but recognizing objects from LiDAR data alone is difficult because of its currently low resolution. Against this background, we believe it is advantageous to integrate LiDAR data and stereo CCD images for more efficient and automated acquisition of high-resolution 3D spatial data. In this research, an automatic urban object recognition methodology is proposed that integrates ultra-high-resolution stereo images and LiDAR data. Moreover, a more reliable and detailed stereo matching method for the CCD images is examined, using the LiDAR data as initial 3D data to constrain the search range and to detect possible occlusions. Finally, intelligent DSMs, in which urban features are identified at high resolution, are generated with high-speed processing.
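The search-range restriction the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: `d_init` stands in for a prior disparity projected from LiDAR, and the `margin` and window size are hypothetical parameters.

```python
import numpy as np

def match_with_prior(left, right, row, col, d_init, margin=3, win=2):
    """SAD block matching restricted to a disparity window around a prior.

    d_init would come from LiDAR depth projected into the image; only
    disparities within +/- margin of it are searched, which both speeds up
    matching and avoids gross mismatches.
    """
    h, w = left.shape
    patch = left[row - win:row + win + 1, col - win:col + win + 1].astype(float)
    best_d, best_cost = d_init, np.inf
    for d in range(max(0, d_init - margin), d_init + margin + 1):
        c = col - d  # candidate column in the right image
        if c - win < 0 or c + win + 1 > w:
            continue
        cand = right[row - win:row + win + 1, c - win:c + win + 1].astype(float)
        cost = np.abs(patch - cand).sum()  # sum of absolute differences
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

Restricting the search to a LiDAR-derived window is also where occlusion detection could hook in: if no candidate in the window scores well, the pixel is likely occluded in the other view.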

Forest Fire Damage Assessment Using UAV Images: A Case Study on Goseong-Sokcho Forest Fire in 2019

  • Yeom, Junho;Han, Youkyung;Kim, Taeheon;Kim, Yongmin
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.37 no.5
    • /
    • pp.351-357
    • /
    • 2019
  • UAV (Unmanned Aerial Vehicle) images can be exploited for rapid forest fire damage assessment thanks to the advantages of UAV systems. In 2019, a catastrophic forest fire occurred in Goseong and Sokcho, Korea, burning 1,757 hectares of forest. We visited the town in Goseong that suffered the most severe damage and conducted UAV flights for forest fire damage assessment. In this study, an economical and rapid damage assessment method for forest fires is proposed using UAV systems equipped with only an RGB sensor. First, forest masking was performed using automatic elevation thresholding to extract the forest area. Then the ExG (Excess Green) vegetation index, which can be calculated without a near-infrared band, was adopted to extract damaged forests. In addition, entropy filtering was applied to ExG for better differentiation between damaged and non-damaged forest. We confirmed that the proposed forest masking can screen out non-forest land covers such as bare soil, agricultural land, and artificial objects, and that entropy filtering enhances the ExG homogeneity difference between damaged and non-damaged forests. The damaged forests detected automatically by the proposed method showed a high accuracy of 87%.
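The two image operations named above, ExG and local entropy filtering, can be sketched in a few lines; the window size and bin count below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def excess_green(rgb):
    """ExG = 2g - r - b on channel-normalized values; needs no NIR band."""
    r, g, b = [rgb[..., i].astype(float) for i in range(3)]
    s = r + g + b + 1e-9
    return 2 * g / s - r / s - b / s

def local_entropy(img, win=3, bins=8):
    """Shannon entropy in a sliding window. Burned canopy and intact canopy
    differ in local texture, so entropy sharpens the damaged/intact split."""
    h, w = img.shape
    lo, hi = img.min(), img.max()
    q = np.clip(((img - lo) / (hi - lo + 1e-9) * bins).astype(int), 0, bins - 1)
    out = np.zeros((h, w))
    r = win // 2
    for i in range(h):
        for j in range(w):
            block = q[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            p = np.bincount(block.ravel(), minlength=bins) / block.size
            p = p[p > 0]
            out[i, j] = -(p * np.log2(p)).sum()
    return out
```

On the normalized scale, a pure-green pixel scores ExG = 2 and a pure-red pixel scores -1, so healthy vegetation stands out strongly.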

Common Optical System for the Fusion of Three-dimensional Images and Infrared Images

  • Kim, Duck-Lae;Jung, Bo Hee;Kong, Hyun-Bae;Ok, Chang-Min;Lee, Seung-Tae
    • Current Optics and Photonics
    • /
    • v.3 no.1
    • /
    • pp.8-15
    • /
    • 2019
  • We describe a common optical system that merges a LADAR system, which generates a point cloud, and a more traditional imaging system operating in the LWIR, which generates image data. The optimum diameter of the entrance pupil was determined by analysis of detection ranges of the LADAR sensor, and the result was applied to design a common optical system using LADAR sensors and LWIR sensors; the performance of these sensors was then evaluated. The minimum detectable signal of the 128 × 128-pixel LADAR detector was calculated as 20.5 nW. The detection range of the LADAR optical system was calculated to be 1,000 m, and according to the results, the optimum diameter of the entrance pupil was determined to be 15.7 cm. The modulation transfer function (MTF) in relation to the diffraction limit of the designed common optical system was analyzed and, according to the results, the MTF of the LADAR optical system was 98.8% at the spatial frequency of 5 cycles per millimeter, while that of the LWIR optical system was 92.4% at the spatial frequency of 29 cycles per millimeter. The detection, recognition, and identification distances of the LWIR optical system were determined to be 5.12, 2.82, and 1.96 km, respectively.
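The diffraction-limit analysis referenced above follows the standard circular-aperture MTF model, which can be sketched directly; the wavelength and f-number in the test are illustrative assumptions, not values from the paper.

```python
import math

def diffraction_mtf(f_cyc_per_mm, wavelength_mm, f_number):
    """Diffraction-limited MTF of an ideal circular aperture.

    The cutoff frequency is fc = 1 / (wavelength * F#); above it the
    modulation transfer is zero.
    """
    fc = 1.0 / (wavelength_mm * f_number)
    x = f_cyc_per_mm / fc  # normalized spatial frequency
    if x >= 1.0:
        return 0.0
    return (2.0 / math.pi) * (math.acos(x) - x * math.sqrt(1.0 - x * x))
```

Reporting a designed system's MTF as a percentage of this curve, as the abstract does, normalizes out the aperture so residual aberrations are what remain visible.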

Wearable Based User Danger Situation Discerning System (웨어러블 기반 사용자 위험상황 식별 시스템)

  • Yu, Dong-Gyun;Hwang, Jong-Sun;Kim, Han-Kil;Kim, Han-Kyung;Jung, Hoe-Kyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2016.10a
    • /
    • pp.792-793
    • /
    • 2016
  • Recent studies on fusing health-care systems with wearable information and communication technology make it possible to measure a user's biometric information anytime and anywhere, without constraint. However, existing wearables only monitor the measured biometric information, so it is hard for the user to respond when a dangerous situation actually occurs. This paper proposes a system that identifies the user's status to correct this problem, using sensors and algorithms to measure biometric information. This enables the user to respond quickly to dangerous situations: when a dangerous event such as a fall or stumble occurs, the system sends an emergency alert to a designated guardian.
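A minimal sketch of the kind of discrimination logic such a system could use, assuming a 3-axis accelerometer; the thresholds and the dip-then-spike heuristic are illustrative assumptions, since the paper does not state its algorithm.

```python
import math

def detect_fall(samples, free_fall_g=0.4, impact_g=2.5):
    """Flag a fall when a near-free-fall dip is followed by an impact spike.

    samples: iterable of (ax, ay, az) accelerations in units of g.
    Thresholds are illustrative, not values from the paper.
    """
    dipped = False
    for ax, ay, az in samples:
        m = math.sqrt(ax * ax + ay * ay + az * az)  # magnitude in g
        if m < free_fall_g:
            dipped = True            # brief weightlessness during the fall
        elif dipped and m > impact_g:
            return True              # impact after the dip: report a fall
    return False
```

In a deployed wearable, a positive result would trigger the emergency alert to the designated guardian.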


Autonomous Mobile Robot Using Sensor Fusion (센서융합을 이용한 이동로봇의 자율주행)

  • Shin, Seonwoong;Oh, Seyeop;Yoo, Dongsang;Moon, Hyeonjoon;Kim, Sanghoon
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2013.05a
    • /
    • pp.867-868
    • /
    • 2013
  • This paper proposes a method by which a mobile robot in an indoor space such as a warehouse can autonomously determine its own position using RFID and ultrasonic sensors, recognize a target object designated by an operator, and assist with simple tasks. RFID tags are installed on the floor and on target objects, and the robot is equipped with an RFID reader plus additional sensors usable when approaching an object, so that while moving it tracks its own position in real time and also obtains unique information from the objects. The relative distance to a nearby object is extracted from the round-trip time of the ultrasonic sensor signal and combined with the robot's own position, already obtained from the floor RFID tags, to compute the absolute position of the target object. This allows a route map centered on the mobile robot to be built in real time; since the traversable indoor structure and the target objects can be identified at the same time, it can be used to plan optimal paths for the robot's autonomous exploration.
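The range-and-pose combination described in the abstract reduces to a short calculation; the heading-based projection below is a simplified sketch under the assumption that the ultrasonic sensor points along the robot's heading.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def ultrasonic_distance(echo_round_trip_s):
    """Relative distance from the ultrasonic echo's round-trip time.

    The pulse travels out and back, hence the division by two.
    """
    return SPEED_OF_SOUND * echo_round_trip_s / 2.0

def object_absolute_position(robot_xy, heading_rad, echo_round_trip_s):
    """Combine the robot pose (known from floor RFID tags) with the
    ultrasonic range to get the target object's absolute position."""
    d = ultrasonic_distance(echo_round_trip_s)
    x, y = robot_xy
    return (x + d * math.cos(heading_rad), y + d * math.sin(heading_rad))
```

Each such fix can then be entered into the robot-centered route map the abstract describes.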

A Study on the Training Methodology of Combining Infrared Image Data for Improving Place Classification Accuracy of Military Robots (군 로봇의 장소 분류 정확도 향상을 위한 적외선 이미지 데이터 결합 학습 방법 연구)

  • Donggyu Choi;Seungwon Do;Chang-eun Lee
    • The Journal of Korea Robotics Society
    • /
    • v.18 no.3
    • /
    • pp.293-298
    • /
    • 2023
  • The military is facing a continuous decrease in personnel, and in order to cope with potential accidents and challenges in operations, efforts are being made to reduce the direct involvement of personnel by utilizing the latest technologies. Recently, the use of various sensors related to Manned-Unmanned Teaming and artificial intelligence technologies has gained attention, emphasizing the need for flexible utilization methods. In this paper, we propose four dataset construction methods that can be used for effective training of robots that can be deployed in military operations, utilizing not only RGB image data but also data acquired from IR image sensors. Since there is no publicly available dataset that combines RGB and IR image data, we directly acquired the dataset within buildings. The input values were constructed by combining RGB and IR image sensor data, taking into account the field of view, resolution, and channel values of both sensors. We compared the proposed method with conventional RGB image data classification training using the same learning model. By employing the proposed image data fusion method, we observed improved stability in training loss and approximately 3% higher accuracy.
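The channel-level combination of RGB and IR data described above can be sketched as a simple stacking step; this is a minimal illustration that assumes the two sensors have already been registered to a common field of view and resolution, which the paper handles explicitly.

```python
import numpy as np

def fuse_rgb_ir(rgb, ir):
    """Stack an RGB image (H, W, 3) and a single-channel IR image (H, W)
    into one 4-channel input tensor for a classification network.

    Real sensor pairs differ in FOV and resolution, so registration and
    resampling must happen before this step; here alignment is assumed.
    """
    if ir.ndim == 2:
        ir = ir[..., None]  # promote to (H, W, 1)
    return np.concatenate([rgb, ir], axis=-1)
```

A model consuming this input only needs its first convolution widened to four input channels; the rest of the training setup can stay identical to the RGB-only baseline the paper compares against.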

Generation of Super-Resolution Benchmark Dataset for Compact Advanced Satellite 500 Imagery and Proof of Concept Results

  • Yonghyun Kim;Jisang Park;Daesub Yoon
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.4
    • /
    • pp.459-466
    • /
    • 2023
  • In the last decade, artificial intelligence's dramatic advancement with the development of various deep learning techniques has significantly contributed to remote sensing fields and satellite image applications. Among many prominent areas, super-resolution research has seen substantial growth with the release of several benchmark datasets and the rise of generative adversarial network-based studies. However, most previously published remote sensing benchmark datasets represent spatial resolutions of approximately 10 meters, imposing limitations when applying them directly to super-resolution of small objects at cm-level spatial resolution. Furthermore, if a dataset lacks a global spatial distribution and is specialized to particular land covers, the resulting lack of feature diversity can directly impact quantitative performance and prevent the formation of robust foundation models. To overcome these issues, this paper proposes a method to generate benchmark datasets by simulating the modulation transfer function of the sensor. The proposed approach leverages a simulation method with a solid theoretical foundation, notably recognized in image fusion. Additionally, the generated benchmark dataset is applied to state-of-the-art super-resolution base models for quantitative and visual analysis, and the shortcomings of the existing datasets are discussed. Through these efforts, we anticipate that the proposed benchmark dataset will soon facilitate a variety of super-resolution research in Korea.
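The degrade-then-decimate idea behind MTF-based benchmark generation can be sketched as follows. This is a simplified stand-in: the paper simulates the sensor's measured MTF, whereas here an isotropic Gaussian kernel plays that role, and the scale factor and sigma are illustrative assumptions.

```python
import numpy as np

def simulate_low_res(img, scale=4, sigma=1.0):
    """Degrade a high-res grayscale image into a low-res counterpart:
    blur with a Gaussian kernel standing in for the sensor MTF, then
    decimate by the scale factor."""
    k = int(6 * sigma) | 1                      # odd kernel size
    ax = np.arange(k) - k // 2
    g1 = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    kernel = np.outer(g1, g1)
    kernel /= kernel.sum()                      # normalize to preserve mean
    h, w = img.shape
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    blurred = np.zeros_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            blurred[i, j] = (padded[i:i + k, j:j + k] * kernel).sum()
    return blurred[::scale, ::scale]
```

Pairing each decimated output with its original produces the LR/HR training pairs a super-resolution model learns from; substituting the true sensor MTF for the Gaussian is what gives the paper's dataset its physical grounding.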

Application of Internet of Things Based Monitoring System for indoor Ganoderma Lucidum Cultivation

  • Quoc Cuong Nguyen;Hoang Tan Huynh;Tuong So Dao;HyukDong Kwon
    • International journal of advanced smart convergence
    • /
    • v.12 no.2
    • /
    • pp.153-158
    • /
    • 2023
  • Most agricultural planting is based on traditional farming and demands many manual work processes. To improve the efficiency and productivity of farms, modern agricultural technology has proven better than traditional practices. The Internet of Things (IoT) is commonly applied in modern agriculture, providing the farmer with real-time monitoring of farm conditions from anywhere at any time. Accordingly, applying IoT with sensors that measure and monitor the humidity and temperature of a mushroom farm can overcome this problem. This paper proposes an IoT-based monitoring system for indoor Ganoderma lucidum cultivation at minimal cost in terms of hardware resources and practicality. The results show that the temperature and humidity data change depending on the weather, and preliminary experiments demonstrated that all parameters of the system were optimized and the objective was achieved. In addition, the analysis shows that the quality of the Ganoderma lucidum produced with this method conforms to regulations in Vietnam.
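The core control loop of such a monitoring system compares each sensor reading against target ranges; the sketch below illustrates this, with setpoints that are illustrative assumptions rather than the paper's values for Ganoderma lucidum.

```python
def check_climate(temp_c, humidity_pct,
                  temp_range=(26.0, 30.0), hum_range=(80.0, 90.0)):
    """Compare one temperature/humidity reading against target ranges and
    return the actuator actions the controller should take.

    The setpoint ranges are hypothetical, for illustration only.
    """
    actions = []
    if temp_c > temp_range[1]:
        actions.append("cooling_on")
    elif temp_c < temp_range[0]:
        actions.append("heating_on")
    if humidity_pct < hum_range[0]:
        actions.append("mist_on")
    elif humidity_pct > hum_range[1]:
        actions.append("vent_on")
    return actions
```

In an IoT deployment this check would run on each reading pushed from the farm's sensor nodes, with the same data logged for the remote dashboard.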

Object Detection and Localization on Map using Multiple Camera and Lidar Point Cloud

  • Pansipansi, Leonardo John;Jang, Minseok;Lee, Yonsik
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.422-424
    • /
    • 2021
  • This paper presents an approach that fuses multiple RGB cameras, used for visual object recognition based on deep learning with a convolutional neural network, with 3D Light Detection and Ranging (LiDAR) to observe the environment and estimate distance and position in a 3D point cloud map. The goal of perception with multiple cameras is to extract the crucial static and dynamic objects around the autonomous vehicle, especially in the blind spots, which assists the AV in navigating toward its goal. Running object detection on numerous cameras tends to slow real-time processing, so the convolutional neural network algorithm chosen to address this must also suit the capacity of the hardware. The localization of the classified detected objects is based on the 3D point cloud environment: the LiDAR point cloud data are first parsed, and clustering is then performed with the 3D Euclidean clustering method, which localizes the objects accurately. We evaluated the method using our own dataset acquired with a VLP-16 and multiple cameras, and the results demonstrate the method and the multi-sensor fusion strategy.
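Euclidean clustering, as used above to localize detected objects in the point cloud, is a region-growing procedure; a minimal sketch, with a hypothetical distance tolerance and minimum cluster size, looks like this.

```python
import numpy as np

def euclidean_cluster(points, tol=0.5, min_size=2):
    """Greedy 3D Euclidean clustering: grow each cluster from a seed point
    by repeatedly absorbing neighbours within tol metres, in the style of
    PCL's cluster extraction. Returns lists of point indices."""
    pts = np.asarray(points, dtype=float)
    remaining = list(range(len(pts)))
    clusters = []
    while remaining:
        queue = [remaining.pop(0)]   # pick an unvisited seed
        cluster = []
        while queue:
            i = queue.pop()
            cluster.append(i)
            near = [j for j in remaining
                    if np.linalg.norm(pts[i] - pts[j]) <= tol]
            for j in near:
                remaining.remove(j)  # claim neighbours for this cluster
            queue.extend(near)
        if len(cluster) >= min_size:
            clusters.append(cluster)
    return clusters
```

Each resulting cluster's centroid gives the object's position in the map; a production system would replace the brute-force neighbour search with a k-d tree for speed.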


Real-time Speed Sign Recognition Method Using Virtual Environments and Camera Images (가상환경 및 카메라 이미지를 활용한 실시간 속도 표지판 인식 방법)

  • Eunji Song;Taeyun Kim;Hyobin Kim;Kyung-Ho Kim;Sung-Ho Hwang
    • Journal of Drive and Control
    • /
    • v.20 no.4
    • /
    • pp.92-99
    • /
    • 2023
  • Autonomous vehicles should recognize and respond to the posted speed limit to drive in compliance with regulations. The most representative way to recognize the posted speed is to read the numbers on speed signs detected in the front camera image. This study proposes a YOLO-Labeling-Labeling-EfficientNet method: the sign box is first detected with YOLO, the numeric digits are extracted according to pixel values from the detected box through two labeling stages, and each digit is then recognized using an EfficientNet (CNN) trained on a virtual-environment dataset we produced ourselves. In addition, we estimated depth information from the pixel height of the recognized sign through regression analysis. We verified the proposed algorithm using a virtual racing environment and GTSRB, demonstrating its real-time operation and efficient recognition performance.
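The height-to-depth regression mentioned above exploits the pinhole relation that a sign's pixel height is inversely proportional to its distance; a sketch of fitting that relation from calibration pairs follows, with synthetic example data rather than the paper's measurements.

```python
import numpy as np

def fit_depth_model(pixel_heights, distances_m):
    """Least-squares fit of d = a / h + b.

    Under a pinhole camera model the sign's pixel height h scales as 1/d,
    so regressing distance on 1/h recovers (a, b) from calibration pairs.
    """
    x = 1.0 / np.asarray(pixel_heights, dtype=float)
    A = np.stack([x, np.ones_like(x)], axis=1)   # design matrix [1/h, 1]
    (a, b), *_ = np.linalg.lstsq(A, np.asarray(distances_m, float),
                                 rcond=None)
    return a, b

def estimate_depth(pixel_height, a, b):
    """Distance estimate for a newly detected sign of given pixel height."""
    return a / pixel_height + b
```

Once fitted, the model turns every detected sign box into a rough range measurement at no extra sensing cost.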