• Title/Abstract/Keywords: ToF Camera

Search results: 218 (processing time: 0.03 s)

ToF 카메라를 이용한 제스처 정보의 추출 및 전송 (Extraction and Transfer of Gesture Information using ToF Camera)

  • 박원창;류대현;최태완
    • 한국전자통신학회논문지
    • /
    • Vol. 9, No. 10
    • /
    • pp.1103-1109
    • /
    • 2014
  • Most recent CCTV cameras are network cameras, and transmitting high-definition video over the Internet can impose a heavy load. This study proposes and evaluates a method for reducing traffic by extracting and transmitting gesture information using a ToF camera such as the Kinect in specific environments. Because the proposed method depends on the performance of the ToF camera, its range of applications may be limited, but it can be used effectively for security or safety management in small indoor spaces such as homes and offices.
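The traffic-reduction argument above (skeleton-sized gesture packets instead of full video frames) can be made concrete with a back-of-the-envelope comparison. A minimal sketch in Python; the joint count and frame format are illustrative assumptions, not figures from the paper:

```python
def skeleton_payload_bytes(num_joints=20, bytes_per_coord=4):
    """Payload for one Kinect-style skeleton: (x, y, z) per joint, 4-byte floats."""
    return num_joints * 3 * bytes_per_coord

def raw_frame_bytes(width=1920, height=1080, bytes_per_pixel=3):
    """Payload for one uncompressed 24-bit RGB video frame."""
    return width * height * bytes_per_pixel

if __name__ == "__main__":
    skel = skeleton_payload_bytes()   # 240 bytes per frame of gesture data
    frame = raw_frame_bytes()         # 6,220,800 bytes per raw HD frame
    print(f"skeleton: {skel} B, raw frame: {frame} B, ratio: {frame // skel}x")
```

Even against compressed video, sending only joint positions cuts the per-frame payload by orders of magnitude, which is the essence of the proposed traffic reduction.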

Multiple Color and ToF Camera System for 3D Contents Generation

  • Ho, Yo-Sung
    • IEIE Transactions on Smart Processing and Computing
    • /
    • Vol. 6, No. 3
    • /
    • pp.175-182
    • /
    • 2017
  • In this paper, we present a multi-depth generation method using a time-of-flight (ToF) fusion camera system. Parallel-type multi-view color cameras and ToF depth sensors are used for 3D scene capturing. Although each ToF depth sensor can measure the depth information of the scene in real time, it has several problems to overcome. Therefore, after we capture low-resolution depth images with the ToF depth sensors, we perform post-processing to solve these problems. Then, the depth information from the depth sensor is warped to the color image positions and used as initial disparity values. In addition, the warped depth data are used to generate a depth-discontinuity map for efficient stereo matching. By applying stereo matching using belief propagation with the depth-discontinuity map and the initial disparity information, we obtain more accurate and stable multi-view disparity maps in reduced time.
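Warping ToF depth into a color view to seed stereo matching relies on the standard depth-to-disparity relation d = f·B/Z. A minimal sketch; the focal length and baseline below are illustrative values, not calibration data from the paper:

```python
def depth_to_disparity(depth_mm, focal_px, baseline_mm):
    """Convert a metric depth Z into a stereo disparity d = f * B / Z.

    depth_mm    -- depth of the point along the optical axis (mm)
    focal_px    -- camera focal length expressed in pixels
    baseline_mm -- distance between the two camera centers (mm)
    """
    return focal_px * baseline_mm / depth_mm

if __name__ == "__main__":
    # A point 1 m away, with a 1000 px focal length and 10 cm baseline,
    # seeds the stereo matcher with an initial disparity of 100 px.
    print(depth_to_disparity(1000.0, 1000.0, 100.0))
```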

SPAD과 CNN의 특성을 반영한 ToF 센서와 스테레오 카메라 융합 시스템 (Fusion System of Time-of-Flight Sensor and Stereo Cameras Considering Single Photon Avalanche Diode and Convolutional Neural Network)

  • 김동엽;이재민;전세웅
    • 로봇학회논문지
    • /
    • Vol. 13, No. 4
    • /
    • pp.230-236
    • /
    • 2018
  • 3D depth perception has played an important role in robotics, and many sensing methods have been proposed for it. As a photodetector for 3D sensing, the single photon avalanche diode (SPAD) is attractive due to its sensitivity and accuracy. We have researched applying a SPAD chip in our fusion system of a time-of-flight (ToF) sensor and a stereo camera. Our goal is to upsample the SPAD resolution using an RGB stereo camera. Currently, we have a 64 × 32 resolution SPAD ToF sensor, even though there are higher-resolution depth sensors such as the Kinect V2 and Cube-Eye. This may be a weak point of our system, but we exploit this gap with a shift of perspective. A convolutional neural network (CNN) is designed to upsample our low-resolution depth map, using data from the higher-resolution depth sensors as label data. Then, the depth data upsampled by the CNN and the stereo camera depth data are fused using the semi-global matching (SGM) algorithm. We propose a simplified fusion method designed for embedded systems.
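The paper's CNN upsampler is beyond a short sketch, but the resolution gap it bridges can be illustrated with plain nearest-neighbour upsampling of a low-resolution depth map. All values below are hypothetical; the CNN learns a far better interpolation than this baseline:

```python
def upsample_nearest(depth, factor):
    """Upsample a 2D depth map (list of rows) by replicating each pixel
    factor x factor times -- a crude stand-in for a learned upsampler."""
    out = []
    for row in depth:
        wide = [v for v in row for _ in range(factor)]
        for _ in range(factor):
            out.append(list(wide))  # copy, so rows are independent
    return out

if __name__ == "__main__":
    # A 64 x 32 SPAD depth map upsampled by 8 reaches 512 x 256.
    low = [[0.0] * 64 for _ in range(32)]
    high = upsample_nearest(low, 8)
    print(len(high), len(high[0]))  # 256 rows of 512 pixels
```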

Design of Hardware Interface for the Otto Struve 2.1m Telescope

  • Oh, Hee-Young;Park, Won-Kee;Choi, Chang-Su;Kim, Eun-Bin;Nguyen, Huynh Anh Le;Lim, Ju-Hee;Jeong, Hyeon-Ju;Pak, Soo-Jong;Im, Myung-Shin
    • 한국우주과학회:학술대회논문집(한국우주과학회보)
    • /
    • 한국우주과학회보, 2009, Vol. 18, No. 2
    • /
    • pp.25.3-25.3
    • /
    • 2009
  • To search for quasars at z > 7 in the early universe, we are developing an optical camera with a $1k\times1k$ deep depletion CCD chip, with a later planned upgrade to a HAWAII-2RG infrared array. We are going to attach the camera to the Cassegrain focus of the Otto Struve 2.1m telescope at McDonald Observatory of the University of Texas at Austin, USA. We present the design of a hardware interface to attach the CCD camera to the telescope. It consists of a focal reducer, a filter wheel, and a guiding camera. The focal reducer is needed to reduce the long f-ratio (f/13.7) down to about f/4 for a wide field of view. The guiding camera design is based on that of the DIAFI offset guider, which was developed for the McDonald 2.7m telescope.
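The focal-reducer requirement follows from simple f-ratio arithmetic (focal length = aperture × f-ratio; the reducer's magnification is the ratio of the target to the native f-ratio). A sketch using only the figures quoted in the abstract:

```python
def focal_length_m(aperture_m, f_ratio):
    """Effective focal length of a telescope: aperture times f-ratio."""
    return aperture_m * f_ratio

def reducer_magnification(native_fratio, target_fratio):
    """Magnification a focal reducer must provide to reach the target f-ratio."""
    return target_fratio / native_fratio

if __name__ == "__main__":
    # Otto Struve 2.1 m at f/13.7: ~28.8 m native focal length.
    # Reducing to f/4 needs roughly 0.29x magnification.
    print(focal_length_m(2.1, 13.7), reducer_magnification(13.7, 4.0))
```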


TOF 깊이 카메라와 DSLR을 이용한 복합형 카메라 시스템 구성 방법 (Hybrid Camera System with a TOF and DSLR Cameras)

  • 김수현;김재인;김태정
    • 방송공학회논문지
    • /
    • Vol. 19, No. 4
    • /
    • pp.533-546
    • /
    • 2014
  • This paper proposes a photogrammetry-based hybrid camera system combining a Time-of-Flight (ToF) depth camera and a DSLR. A ToF depth camera has the advantage of outputting depth information in real time, but the resolution of the intensity images it provides is low, and the acquired depth is sensitive to object surface conditions and therefore noisy. Consequently, generating a 3D model with a depth camera requires both depth correction and a hybrid camera configuration that provides a high-resolution texture map. To this end, this paper estimates the relative geometry between the depth camera and the DSLR through relative orientation and performs texture mapping using a back-projection equation based on the collinearity condition. For verification, the model accuracy and texture-mapping accuracy are compared against an existing method. The experiments showed higher model accuracy for the proposed method: the existing method performs absolute orientation using the depth camera's noisy 3D data as control points, whereas the proposed method uses error-free conjugate points between the two images.
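The collinearity-based back-projection used for texture mapping reduces to a pinhole projection of a 3D point through the estimated relative orientation (R, t). A minimal sketch; the intrinsics and pose below are illustrative values, not calibrated parameters from the paper:

```python
def project_point(X, R, t, f, cx, cy):
    """Project a 3D point X into the image via the collinearity condition.

    X  -- 3D point [X, Y, Z] in the reference (depth camera) frame
    R  -- 3x3 rotation (list of rows) from reference to DSLR frame
    t  -- translation [tx, ty, tz] of the DSLR frame
    f  -- focal length in pixels; (cx, cy) -- principal point
    """
    # Transform into the DSLR camera frame: Xc = R @ X + t
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    # Pinhole projection onto the image plane
    return (f * Xc[0] / Xc[2] + cx, f * Xc[1] / Xc[2] + cy)

if __name__ == "__main__":
    I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # identity pose for illustration
    print(project_point([1.0, 0.0, 2.0], I, [0.0, 0.0, 0.0], 1000.0, 500.0, 500.0))
```

Each depth-camera 3D point projected this way picks up a DSLR pixel, which is how the high-resolution texture is mapped onto the model.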

Design and Performance Verification of a LWIR Zoom Camera for Drones

  • Kwang-Woo Park;Jonghwa Choi;Jian Kang
    • Current Optics and Photonics
    • /
    • Vol. 7, No. 4
    • /
    • pp.354-361
    • /
    • 2023
  • We present the optical design and experimental verification of the resolving performance of a 3× long-wavelength infrared (LWIR) zoom camera for drones. The effective focal length of the system varies from 24.5 mm at the wide-angle position to 75.1 mm at the telephoto position. The design specifications of the system were derived from the ground resolved distance (GRD) required to recognize a 3 m × 6 m target at a distance of 1 km at the telephoto position. To satisfy the system requirement, the aperture (f-number) of the system is taken as F/1.6, and the final modulation transfer function (MTF) must be higher than 0.1 (10%). The MTF measured in the laboratory was 0.127 (12.7%), which exceeds the system requirement. Outdoor targets were used to verify the comprehensive performance of the system. The system resolved 4-bar targets corresponding to the spatial resolution at distances of 1 km, 1.4 km, and 2 km.
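The link between focal length and ground resolution can be sketched with the usual instantaneous-field-of-view relation: ground sample ≈ pixel pitch / focal length × range. The 12 µm pixel pitch below is a typical LWIR detector value assumed for illustration, not a figure from the paper:

```python
def ground_sample_m(pixel_pitch_m, focal_length_m, range_m):
    """Ground footprint of one detector pixel at the given range (small-angle IFOV)."""
    return pixel_pitch_m / focal_length_m * range_m

if __name__ == "__main__":
    # Assumed 12 um pitch, 75.1 mm telephoto focal length, 1 km range:
    # each pixel then covers roughly 0.16 m on the ground, comfortably
    # finer than a 3 m x 6 m target.
    print(ground_sample_m(12e-6, 0.0751, 1000.0))
```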

프로판 예혼합화염의 소음발생 매커니즘에 관한 실험적 연구 (An Experimental Study on the Noise Generation Mechanisms of Propane Premixed Flames)

  • 이원남;박동수
    • 한국연소학회:학술대회논문집
    • /
    • Proceedings of the 28th KOSCO SYMPOSIUM, 2004
    • /
    • pp.27-33
    • /
    • 2004
  • The noise generation mechanisms of propane laminar premixed flames on a slot burner have been studied experimentally. The sound levels and frequencies were measured for various mixture flow rates (velocities) and equivalence ratios. The primary frequency of the self-induced noise increases with the mean velocity of the mixture as $f \propto U_f^{1.144}$, and the measured noise level increases with the mixture flow rate and equivalence ratio as $p \propto U_f^{1.7}F^{8.2}$. The nature of the flame oscillation and the noise generation mechanisms were also investigated using a high-speed CCD camera and a DSLR camera. The repetition of sudden extinction at the tip of the flame is evident, and the repetition rates are identical to the primary frequencies obtained from the FFT analysis of the sound pressure signals. CH chemiluminescence intensities of the oscillating flames were also measured by a PMT with a 431 nm (10 FWHM) band-pass filter and compared with the pressure signals.
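Exponents such as $f \propto U_f^{1.144}$ are typically obtained by a least-squares fit in log-log space. A minimal sketch on synthetic, noise-free data (not the paper's measurements):

```python
import math

def fit_power_exponent(xs, ys):
    """Fit y = c * x^k by least squares on (log x, log y); return k."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(xs)
    mx, my = sum(lx) / n, sum(ly) / n
    return (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
            / sum((a - mx) ** 2 for a in lx))

if __name__ == "__main__":
    # Synthetic data obeying f ~ U^1.144 exactly; the fit recovers 1.144.
    us = [1.0, 2.0, 4.0, 8.0]
    fs = [u ** 1.144 for u in us]
    print(fit_power_exponent(us, fs))
```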


디지털 영화제작을 위한 레드 원 카메라의 활용성 연구 (Using 'RED ONE' Camera for Digital Film Making)

  • 고현욱;민경원
    • 한국콘텐츠학회논문지
    • /
    • Vol. 9, No. 9
    • /
    • pp.163-170
    • /
    • 2009
  • This paper analyzes the strengths and weaknesses of the newly developed RED ONE camera and compares the findings against existing digital cameras. As film production shifted to digital production systems, Arriflex, a traditional film camera company, developed the Arri D21 digital camera, and Sony, synonymous with high-definition digital cameras, has played a leading role in digital filmmaking through the CineAlta F35. Amid this technical progress in digital film production, the arrival of the RED ONE camera opens new possibilities for filmmakers. Unlike companies with long histories such as Arri and Sony, one of RED ONE's distinctive strengths is that, from initial development to the present, the camera has evolved by reflecting the demands of digital filmmaking crews as much as possible. This user-oriented approach to digital camera development has had ripple effects on the filmmaking environment and became a key factor in RED ONE's leading role in commercial digital film production.

Assembling three one-camera images for three-camera intersection classification

  • Marcella Astrid;Seung-Ik Lee
    • ETRI Journal
    • /
    • Vol. 45, No. 5
    • /
    • pp.862-873
    • /
    • 2023
  • Determining whether an autonomous self-driving agent is in the middle of an intersection can be extremely difficult when relying on visual input taken from a single camera. In such a problem setting, a wider range of views is essential, which drives us to use three cameras positioned in the front, left, and right of an agent for better intersection recognition. However, collecting adequate training data with three cameras poses several practical difficulties; hence, we propose using data collected from one camera to train a three-camera model, which would enable us to more easily compile a variety of training data to endow our model with improved generalizability. In this work, we provide three separate fusion methods (feature, early, and late) of combining the information from three cameras. Extensive pedestrian-view intersection classification experiments show that our feature fusion model provides an area under the curve and F1-score of 82.00 and 46.48, respectively, which considerably outperforms contemporary three- and one-camera models.
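Of the three fusion strategies named above, late fusion is the simplest to sketch: average the per-class probabilities predicted independently for each camera view. The probability vectors below are made-up numbers illustrating the general idea, not the paper's model outputs:

```python
def late_fusion(prob_vectors):
    """Average class-probability vectors from several views (late fusion).

    prob_vectors -- one probability vector per camera (front, left, right)
    Returns the element-wise mean, a fused probability vector.
    """
    n = len(prob_vectors)
    k = len(prob_vectors[0])
    return [sum(p[i] for p in prob_vectors) / n for i in range(k)]

if __name__ == "__main__":
    # Hypothetical [intersection, non-intersection] probabilities per view.
    front, left, right = [0.9, 0.1], [0.6, 0.4], [0.3, 0.7]
    print(late_fusion([front, left, right]))
```

Feature fusion instead concatenates intermediate network features, and early fusion stacks the raw images before the network; both require a joint model rather than this simple averaging.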

Time-of-Flight 카메라 영상 보정 (Enhancement on Time-of-Flight Camera Images)

  • 김성희;김명희
    • 한국HCI학회:학술대회논문집
    • /
    • 한국HCI학회 2008 Conference Proceedings, Part 1
    • /
    • pp.708-711
    • /
    • 2008
  • Time-of-flight (ToF) cameras deliver intensity data as well as range information for the objects in a scene. However, systematic problems during acquisition lead to distorted values in both distance and amplitude. In this paper we propose a method to acquire reliable distance information over the entire scene by correcting each channel based on the other. The amplitude image is enhanced based on the depth values, which in turn enables depth correction, especially for far pixels.
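A common first step in amplitude-guided depth correction is to treat low-amplitude pixels as unreliable. A minimal sketch of such masking; the threshold and values are illustrative, and this is not the paper's actual correction scheme:

```python
def mask_unreliable_depth(depth, amplitude, min_amplitude):
    """Keep a depth value only where the ToF amplitude (signal strength)
    meets a confidence threshold; mark the rest as missing (None)."""
    return [d if a >= min_amplitude else None
            for d, a in zip(depth, amplitude)]

if __name__ == "__main__":
    depth = [1.2, 2.5, 4.8]        # meters, hypothetical
    amplitude = [0.9, 0.2, 0.8]    # normalized signal strength
    print(mask_unreliable_depth(depth, amplitude, 0.5))
```

Masked pixels can then be filled from neighbors or, as in the paper's approach, recovered after the amplitude image itself has been enhanced using the depth data.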
