• Title/Summary/Keyword: Camera Calibration Module

Development of a software based calibration system for automobile assembly system oriented AR (자동차 조립시스템 지향 AR을 위한 소프트웨어 기반의 캘리브레이션 시스템 개발)

  • Park, Jin-Woo;Park, Hong-Seok
    • Korean Journal of Computational Design and Engineering / v.17 no.1 / pp.35-44 / 2012
  • Many automobile manufacturers are experimenting with augmented reality (AR) technology in their manufacturing environments. In practice, however, system layout and process simulation have so far been carried out far more actively with virtual reality than with AR. Existing AR-based automobile assembly requires precise calibration after the robot is set up, because the existing AR system for the automobile assembly configuration does not account for deflection of the end tip and the robot joints caused by the heavy weight of the product and gripper. Since robots perform most automobile assembly work, this deflection of the robot joints and the product needs to be compensated, and the camera lens must also be calibrated precisely for AR to be usable. To address these problems, this paper introduces a software-based calibration method for applying AR effectively to the automobile assembly system. A camera lens calibration module and a module for direct compensation of the virtual object displacement were designed and implemented, and the developed automobile-assembly-oriented AR system was verified by a practical test.
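
Although the paper does not reproduce its calibration procedure, a minimal sketch of the kind of camera lens calibration such an AR module performs is shown below, using OpenCV's chessboard-based routine; the pattern size, square size, and image folder are illustrative assumptions, not the authors' setup.

```python
# Illustrative sketch only: chessboard-based lens calibration with OpenCV,
# standing in for the kind of camera lens calibration module the paper describes.
import glob
import cv2
import numpy as np

PATTERN = (9, 6)          # inner chessboard corners (assumed)
SQUARE_SIZE_MM = 25.0     # printed square size (assumed)

# 3D reference points of the chessboard in its own plane (Z = 0)
obj_template = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
obj_template[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_SIZE_MM

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.png"):      # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(obj_template)
        img_points.append(corners)

# Intrinsics K and lens distortion coefficients needed for the AR overlay
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS (px):", rms)
```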

A Study on the Sensor Calibration for Low Cost Motion Capture Sensor using PSD Sensor (PSD센서를 이용한 모션캡쳐 시스템의 센서보정에 관한 연구)

  • Kim, Yu-Geon;Choi, Hun-Il;Ryu, Young-Kee;Oh, Choon-Suk
    • Proceedings of the KIEE Conference / 2005.10b / pp.603-605 / 2005
  • In this paper, we deal with a calibration method for a low-cost motion capture sensor based on a PSD (position-sensitive detector). The PSD sensor measures the direction of incident light from moving markers attached to the body. To calibrate the PSD optical module, the conventional camera calibration algorithm introduced by Tsai is applied, and the three-dimensional positions of the markers are then recovered using stereo camera geometry. The experimental results show that the low-cost motion capture sensor can be used in a real-time system.
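
As a rough illustration of the stereo step described above, the following sketch triangulates a marker's 3D position from its coordinates in two calibrated views; the projection matrices are assumed to come from a Tsai-style calibration, and the code is not the paper's PSD-specific implementation.

```python
# Minimal two-view linear triangulation sketch (the projection matrices P1, P2
# are assumed to come from a Tsai-style calibration of each optical module).
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its pixel coordinates x1, x2 in two calibrated views."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # dehomogenize to metric coordinates
```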

Vision Inspection Module for Dimensional Measurement in CMM having Vision Probe (비젼프로브를 가지는 3차원 측정기를 위한 형상 측정 시스템 묘듈 개발)

  • 이일환;박희재;김구영
    • Proceedings of the Korean Society of Precision Engineering Conference / 1995.10a / pp.379-383 / 1995
  • In this paper, a vision inspection module for dimensional measurement has been developed. To achieve high accuracy on the CMM, camera calibration and subpixel-accurate edge detection have been implemented. During measurement, the position of the vision probe is read by the PC over a serial link to the CMM controller. The developed vision inspection module can be widely applied to practical measurement processes.
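
The abstract mentions subpixel-accurate edge detection without giving the method; a common generic approach, shown below as an assumption rather than the authors' algorithm, is to fit a parabola to the gradient magnitude around the strongest edge pixel.

```python
# Subpixel edge localization along a 1-D intensity profile by parabolic fit on the
# gradient magnitude. This is a generic technique, not necessarily the paper's method.
import numpy as np

def subpixel_edge_1d(row):
    """Return the edge position (in pixels, subpixel precision) along a 1-D profile."""
    grad = np.abs(np.gradient(row.astype(float)))
    i = int(np.argmax(grad))
    if 0 < i < len(grad) - 1:
        g_m, g_0, g_p = grad[i - 1], grad[i], grad[i + 1]
        denom = g_m - 2.0 * g_0 + g_p
        offset = 0.5 * (g_m - g_p) / denom if denom != 0 else 0.0
        return i + offset        # vertex of the fitted parabola
    return float(i)
```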

Design and Implementation Stereo Camera based Twin Camera Module System (스테레오 카메라 기반 트윈 카메라 모듈 시스템 설계 및 구현)

  • Kim, Tae-Yeun
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.12 no.6 / pp.537-546 / 2019
  • This paper realizes a portable twin camera module system that is well suited to producing 3D content. The proposed system converts images captured by a 2D stereo camera and displays them as 3D images. To evaluate the performance of the module, the correction of rotation and tilt caused by the visual difference between the left and right stereoscopic images shot through the left and right lenses was assessed on a test platform, and the efficiency of the system was verified by checking the depth error of the 3D stereoscopic image with the Scale-Invariant Feature Transform (SIFT) algorithm. If the captured footage is converted into the 3D stereoscopic image and the preparation image before being displayed externally, the output can be matched to display devices for the different 3D image production methods; and if the resulting image is delivered as the 3D stereoscopic image and the preparation image over separate channels, 3D image content can be produced easily and conveniently and applied to many products.
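
As a generic sketch of the SIFT-based verification step (file names, matcher settings, and the ratio threshold are assumptions, not the paper's configuration), left-right keypoint matching and disparity statistics could be computed as follows.

```python
# Generic SIFT matching between the left and right views, as a stand-in for the
# paper's depth-error verification step (thresholds and image names are assumed).
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)     # hypothetical file names
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_l, des_l = sift.detectAndCompute(left, None)
kp_r, des_r = sift.detectAndCompute(right, None)

# Lowe's ratio test keeps only distinctive matches
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des_l, des_r, k=2)
        if m.distance < 0.75 * n.distance]

# Horizontal disparities of matched keypoints; their spread gives a rough idea of
# the rotation/tilt misalignment between the two lenses.
disparities = np.array([kp_l[m.queryIdx].pt[0] - kp_r[m.trainIdx].pt[0] for m in good])
print("matches:", len(good), "median disparity (px):", np.median(disparities))
```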

Development of Vision Sensor Module for the Measurement of Welding Profile (용접 형상 측정용 시각 센서 모듈 개발)

  • Kim C.H.;Choi T.Y.;Lee J.J.;Suh J.;Park K.T.;Kang H.S.
    • Proceedings of the Korean Society of Precision Engineering Conference / 2006.05a / pp.285-286 / 2006
  • An essential task in operating a welding robot is acquiring the position and/or shape of the parent metal. Many kinds of contact and non-contact sensors are used for seam tracking and robot automation, with vision sensors recently being the most popular. This paper describes the development of a system that measures the profile of the welding part. The complete system is assembled into a compact module that can be attached to the head of the welding robot, and it combines a line-type structured laser diode with a vision sensor. Direct Linear Transformation (DLT) is implemented for camera calibration together with radial distortion correction, so the three-dimensional shape of the parent metal is obtained by a simple linear transformation and the system operates in real time. Experiments were carried out to evaluate the performance of the developed system.
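
A bare-bones Direct Linear Transformation, the calibration technique named in the abstract, is sketched below; it estimates the 3x4 projection matrix from known 3D-2D correspondences and deliberately omits the radial distortion correction the paper also implements.

```python
# Bare-bones DLT: estimate the 3x4 projection matrix from >= 6 known 3D-2D pairs.
# Radial distortion correction, which the paper also implements, is omitted here.
import numpy as np

def dlt_projection_matrix(world_pts, image_pts):
    """world_pts: (N, 3) metric points, image_pts: (N, 2) pixel coordinates, N >= 6."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    A = np.asarray(rows, dtype=float)
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)     # projection matrix, defined up to scale
```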

The Overview of CEU Development for a Payload

  • Kong, Jong-Pil;Heo, Haeng-Pal;Kim, Young-Sun;Park, Jong-Euk;Chang, Young-Jun
    • Proceedings of the KSRS Conference / v.2 / pp.797-799 / 2006
  • The electro-optical camera subsystem carried as a satellite payload consists of an optical module (OM) and a camera electronics unit (CEU), and most of the subsystem's performance depends heavily on the CEU, in which TDI CCDs (time-delayed-integration charge-coupled devices) perform the main imaging task by converting light intensity into a measurable voltage signal. The CEU therefore has to be specified and designed very carefully at an early stage of development, covering overall specifications, design considerations, calibration definitions, and test methods for the key performance parameters. This paper gives an overview of the CEU development. It lists the key requirement characteristics and design considerations of the CEU hardware, describes which calibrations are required, and defines the test and evaluation conditions used during the acceptance test to verify the CEU requirement specifications, since CEU performance results vary greatly with conditions such as the operational line rate, TDI level, and light intensity level.
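
As a toy illustration (with entirely assumed numbers, not CEU specifications) of why the acceptance-test results depend on line rate, TDI level, and light intensity, a TDI CCD's output signal can be modeled as scaling with all three.

```python
# Toy model of why CEU test results depend on line rate, TDI level, and light level:
# a TDI CCD output scales with (light intensity) x (line period) x (TDI stages).
# All numbers are illustrative assumptions, not specifications of the actual payload.
def tdi_signal_dn(light_wm2, line_rate_hz, tdi_level, responsivity=5.0e4, dark_dn=12.0):
    """Rough digital-number estimate for one pixel of a TDI CCD line image."""
    line_period_s = 1.0 / line_rate_hz
    return dark_dn + responsivity * light_wm2 * line_period_s * tdi_level

# Doubling the TDI level (or halving the line rate) roughly doubles the signal:
print(tdi_signal_dn(0.02, line_rate_hz=7000.0, tdi_level=32))
print(tdi_signal_dn(0.02, line_rate_hz=7000.0, tdi_level=64))
```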

Distortion Calibration and Image Analysis of Megapixel Ultrawide-angle Lens (메가픽셀급 초광각 렌즈의 왜곡영상 보정과 화질분석)

  • Kang, Min-Goo;Lee, Jae-Son;Lee, Ou-Seob
    • Journal of the Korea Institute of Information and Communication Engineering / v.17 no.3 / pp.597-602 / 2013
  • In this paper, a megapixel-class lens module was designed to calibrate the barrel distortion inherent to an ultrawide-angle lens, and the performance of the camera module was improved using images from a wide-dynamic-range 2-megapixel CMOS image sensor.
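
The abstract does not state the distortion model used; as an assumed stand-in, the sketch below corrects barrel distortion with OpenCV's polynomial radial model and placeholder intrinsics rather than the paper's lens data.

```python
# Correcting barrel distortion with OpenCV's radial model; the intrinsic matrix K
# and distortion coefficients are placeholder values, not the paper's lens data.
import cv2
import numpy as np

img = cv2.imread("wide_angle_frame.png")         # hypothetical 2-megapixel frame
h, w = img.shape[:2]

K = np.array([[800.0, 0.0, w / 2],               # assumed intrinsics
              [0.0, 800.0, h / 2],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.35, 0.12, 0.0, 0.0, 0.0])    # strong negative k1 ~ barrel distortion

new_K, _ = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), alpha=0)
undistorted = cv2.undistort(img, K, dist, None, new_K)
cv2.imwrite("undistorted.png", undistorted)
```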

Automation of Bio-Industrial Process Via Tele-Task Command(I) -identification and 3D coordinate extraction of object- (원격작업 지시를 이용한 생물산업공정의 생력화 (I) -대상체 인식 및 3차원 좌표 추출-)

  • Kim, S. C.;Choi, D. Y.;Hwang, H.
    • Journal of Biosystems Engineering / v.26 no.1 / pp.21-28 / 2001
  • Major deficiencies of current automation schemes, including the various robots used in bioproduction, are the lack of task adaptability and real-time processing, low performance across diverse tasks, lack of robustness in task results, high system cost, and loss of the operator's trust. This paper proposes a scheme that overcomes these limitations of conventional computer-controlled automatic systems: man-machine hybrid automation via tele-operation, which can handle various bioproduction processes. The scheme has two parts, efficient task sharing between the operator and the computer-controlled machine (CCM) and an efficient interface between the operator and the CCM. To realize the proposed concept, the tasks of identifying an object and extracting its 3D coordinates were selected. 3D coordinate information was obtained through camera calibration, using the camera as a measurement device. Two stereo images were acquired by moving a single camera a known distance horizontally, normal to the focal axis, and capturing an image at each location. The transformation matrix for camera calibration was obtained by a least-squares approach using six known pairs of corresponding points in the 2D image and 3D world space, and the 3D world coordinates were then computed from the two sets of image pixel coordinates with the calibrated transformation matrix. As the interface between the operator and the CCM, a touch-pad screen mounted on the monitor and a remotely captured imaging system were used: the operator indicates an object by touching its captured image, a local image-processing area of a certain size is specified around the touch, and image processing is performed within that local area to extract the desired features of the object. An MS Windows-based interface program was developed in Visual C++ 6.0 with four modules: remote image acquisition, task command, local image processing, and 3D coordinate extraction. The proposed scheme showed the feasibility of real-time processing, robust and precise object identification, and adaptability to various jobs and environments through the selected sample tasks.
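
For the parallel stereo geometry described above (one camera translated horizontally, normal to the focal axis), a minimal sketch of recovering a 3D point from the two images is given below; the focal length, baseline, and principal point are assumed values, and the paper's least-squares transformation matrix is not reproduced.

```python
# Depth from a horizontally translated camera pair (parallel stereo), matching the
# acquisition described in the abstract; f, the baseline, and the principal point
# are assumed values, not the paper's calibration results.
import numpy as np

def stereo_point(uv_left, uv_right, f_px=1000.0, baseline_mm=100.0, c=(320.0, 240.0)):
    """Recover (X, Y, Z) in mm from matching pixel coordinates in the two images."""
    (ul, vl), (ur, _) = uv_left, uv_right
    disparity = ul - ur                      # horizontal shift between the two views
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    Z = f_px * baseline_mm / disparity       # depth along the optical axis
    X = (ul - c[0]) * Z / f_px
    Y = (vl - c[1]) * Z / f_px
    return X, Y, Z
```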

A Robust Depth Map Upsampling Against Camera Calibration Errors (카메라 보정 오류에 강건한 깊이맵 업샘플링 기술)

  • Kim, Jae-Kwang;Lee, Jae-Ho;Kim, Chang-Ick
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.6 / pp.8-17 / 2011
  • With the advent of time-of-flight (TOF) depth sensors, fusion camera systems that combine depth sensors and color cameras have been widely developed. Because of their physical limitations, depth sensors usually produce low-resolution images compared to the corresponding color images, so a pre-processing module involving camera calibration, three-dimensional warping, and hole filling is needed to generate a high-resolution depth map placed in the image plane of the color camera. However, the result of this pre-processing step is usually inaccurate owing to errors in the camera calibration and the depth measurement. In this paper, we therefore present a depth map upsampling method that is robust to these errors. First, the confidence of each measured depth value is estimated from the interrelation between the color image and the pre-upsampled depth map; a detailed depth map is then generated by a modified kernel regression method that excludes depth values with low confidence. The proposed algorithm produces high-quality results even in the presence of camera calibration errors, and experimental comparison with other data fusion techniques shows its superiority.
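
The paper's modified kernel regression is not reproduced here; the sketch below is a generic confidence-aware joint bilateral upsampling in the same spirit, excluding depth samples whose confidence falls below an assumed threshold.

```python
# Generic confidence-aware joint bilateral filtering of a warped depth map, in the
# spirit of (but not identical to) the paper's modified kernel regression.
import numpy as np

def upsample_depth(depth, conf, color, radius=4, sigma_s=3.0, sigma_c=10.0, conf_min=0.5):
    """depth, conf: HxW float arrays already warped to the color grid;
    color: HxW grayscale guide image. Low-confidence samples are excluded."""
    h, w = depth.shape
    out = np.zeros_like(depth)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            d = depth[y0:y1, x0:x1]
            c = conf[y0:y1, x0:x1]
            g = color[y0:y1, x0:x1].astype(float)
            s = spatial[y0 - y + radius:y1 - y + radius, x0 - x + radius:x1 - x + radius]
            rng = np.exp(-((g - float(color[y, x]))**2) / (2 * sigma_c**2))
            wgt = s * rng * (c >= conf_min)          # drop unreliable depth samples
            out[y, x] = (wgt * d).sum() / wgt.sum() if wgt.sum() > 0 else 0.0
    return out
```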

Development of Dual-mode Signal Processing Module for Multi-slit Prompt-gamma Camera (다중 슬릿 즉발감마선 카메라를 위한 이중모드 신호처리 모듈 개발)

  • Park, Jong Hoon;Lee, Han Rim;Kim, Sung Hun;Kim, Chan Hyeong;Shin, Dong Ho;Lee, Se Byeong;Jeong, Jonh Hwi
    • Progress in Medical Physics / v.27 no.1 / pp.37-45 / 2016
  • In proton therapy, in vivo verification of the proton beam range is very important for delivering a conformal dose to the target volume while minimizing unnecessary dose to normal tissue. For this purpose, a multi-slit prompt-gamma camera module composed of 24 scintillation detectors and a 24-channel signal processing system is under development. In the present study, we developed and tested a dual-mode signal processing system that can operate in an energy calibration mode and a fast data acquisition mode to process the signals from the 24 scintillation detectors. In the performance test, the energy calibration mode allowed us to calibrate all 24 scintillation detectors simultaneously and to determine the discrimination levels for the detector channels. Using the fast data acquisition mode, we then measured the prompt-gamma distribution induced by a 45 MeV proton beam. The measured distribution was similar to the proton dose distribution at the distal fall-off region, and the estimated beam range was 17.13 ± 0.76 mm, close to the proton beam range of 16.15 mm measured with an EBT film.
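
As an illustration of the energy calibration mode (peak positions and reference energies are assumed values, not the paper's measurements), a per-channel linear calibration against known gamma peaks could look like this.

```python
# Illustrative linear energy calibration for one scintillation detector channel:
# fit measured ADC peak centroids against known gamma-ray energies (values assumed).
import numpy as np

adc_peaks = np.array([410.0, 825.0])        # hypothetical measured peak centroids (ADC)
known_kev = np.array([662.0, 1332.0])       # assumed reference gamma energies (keV)

gain, offset = np.polyfit(adc_peaks, known_kev, 1)   # E = gain * ADC + offset

def adc_to_energy(adc):
    """Convert an ADC value to energy in keV using the fitted calibration line."""
    return gain * adc + offset

# A discrimination level of, say, 3 MeV maps back to ADC units as:
threshold_adc = (3000.0 - offset) / gain
```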