• Title/Abstract/Keyword: Camera motion estimation

Search results: 175 items (processing time: 0.024 s)

Range and Velocity Estimation of the Object using a Moving Camera

  • 변상훈;좌동경
    • 전기학회논문지 / Vol. 62, No. 12 / pp. 1737-1743 / 2013
  • This paper proposes a method for estimating the range and velocity of an object using a moving camera. Structure and motion (SaM) estimation recovers the Euclidean geometry of the object as well as the relative motion between the camera and the object. Unlike previous works, the proposed method relaxes the constraints on the camera and object motion. To this end, the dynamics of the moving-camera/moving-object relative motion model are arranged in a form that allows a nonlinear observer to be employed for the SaM estimation. The validity of the proposed estimation algorithm is confirmed through both simulations and experiments.
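
The abstract does not spell out the observer design; as a loose, illustrative sketch of the general idea of estimating range and velocity with a state observer, the snippet below runs a plain Luenberger observer on an assumed constant-velocity range model (the model, gains, and measurement noise are illustrative assumptions, not the authors' nonlinear SaM observer).

```python
import numpy as np

# State x = [range, range_rate]; only the range is "measured" (with noise).
dt = 0.01
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])          # constant-velocity range model
L = np.array([10.0, 25.0])          # observer gains -> error poles at -5, -5

x_true = np.array([5.0, -0.3])      # true range (m) and closing speed (m/s)
x_hat = np.zeros(2)                 # observer starts with no prior knowledge

for _ in range(2000):
    x_true = x_true + dt * (A @ x_true)                  # simulate true motion
    y = x_true[0] + np.random.normal(0.0, 0.01)          # noisy range measurement
    innov = y - x_hat[0]                                 # measurement residual
    x_hat = x_hat + dt * (A @ x_hat + L * innov)         # Luenberger correction

print("estimated [range, velocity]:", x_hat)
print("true      [range, velocity]:", x_true)
```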

Fine-Motion Estimation Using Ego/Exo-Cameras

  • Uhm, Taeyoung;Ryu, Minsoo;Park, Jong-Il
    • ETRI Journal / Vol. 37, No. 4 / pp. 766-771 / 2015
  • Robust motion estimation for human-computer interaction plays an important role in novel methods of interacting with electronic devices. Existing pose estimation using a monocular camera employs either ego-motion or exo-motion, neither of which is sufficiently accurate for estimating fine motion due to the ambiguity between rotation and translation. This paper presents a hybrid vision-based pose estimation method for fine-motion estimation that is specifically capable of extracting human body motion accurately. The method uses an ego-camera attached to a point of interest and exo-cameras located in the immediate surroundings of the point of interest. The exo-cameras can easily track the exact position of the point of interest by triangulation. Once the position is given, the ego-camera can accurately obtain the point of interest's orientation. In this way, any ambiguity between rotation and translation is eliminated, and the exact motion of a target point (that is, the ego-camera) can be obtained. The proposed method is expected to provide a practical solution for robustly estimating fine motion in a non-contact manner, such as in interactive games designed for special purposes (for example, remote rehabilitation care systems).
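
The exo-camera step described above amounts to multi-view triangulation; a minimal sketch of linear (DLT) triangulation from two calibrated views follows, with made-up intrinsics, camera poses, and a test point (the ego-camera orientation step is not shown).

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one point from two calibrated views.
    P1, P2: 3x4 projection matrices; uv1, uv2: pixel coordinates (u, v)."""
    A = np.stack([uv1[0] * P1[2] - P1[0],
                  uv1[1] * P1[2] - P1[1],
                  uv2[0] * P2[2] - P2[0],
                  uv2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                       # dehomogenize

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Illustrative setup: two exo-cameras 0.5 m apart, sharing the same intrinsics.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                   # camera at origin
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])   # shifted 0.5 m

X_true = np.array([0.2, -0.1, 2.0])           # point of interest (m)
uv1, uv2 = project(P1, X_true), project(P2, X_true)

print(triangulate(P1, P2, uv1, uv2))          # recovers approximately [0.2, -0.1, 2.0]
```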

Motion Estimation of a Moving Object in Three-Dimensional Space using a Camera

  • 좌동경
    • 전기학회논문지 / Vol. 65, No. 12 / pp. 2057-2060 / 2016
  • Range-based motion estimation of a moving object using a camera is proposed. Whereas existing results constrain the motion of the object in order to estimate it, the proposed method relaxes these constraints so that more general object motion can be handled. To this end, a nonlinear observer is designed based on the relative dynamics between the object and the camera, so that both the object velocity and the unknown camera velocity can be estimated. A stability analysis and simulation results for the moving object are provided to show the effectiveness of the proposed method.

Active Object Tracking using Image Mosaic Background

  • Jung, Young-Kee;Woo, Dong-Min
    • Journal of Information and Communication Convergence Engineering / Vol. 2, No. 1 / pp. 52-57 / 2004
  • In this paper, we propose a panorama-based object tracking scheme for wide-view surveillance systems that can detect and track moving objects with a pan-tilt camera. A dynamic mosaic of the background is progressively integrated into a single image using the camera motion information. For the camera motion estimation, we calculate affine motion parameters for each frame sequentially with respect to its previous frame. The camera motion is robustly estimated on the background by discriminating between background and foreground regions; a modified block-based motion estimation is used to separate the background region. Each moving object is then segmented by image subtraction from the mosaic background. The proposed tracking system has demonstrated good performance on several test video sequences.
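
The camera-motion and background-subtraction steps can be sketched roughly as below, assuming OpenCV; the feature tracker, RANSAC affine fit, and threshold are stand-ins rather than the authors' exact modified block-based pipeline.

```python
import cv2
import numpy as np

def foreground_mask(prev_gray, cur_gray, thresh=25):
    """Estimate the global affine camera motion between two frames from tracked
    features, warp the previous frame onto the current one, and flag large
    residuals as moving-object (foreground) pixels."""
    # Track sparse corners from the previous frame into the current frame.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                       qualityLevel=0.01, minDistance=8)
    pts_cur, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray,
                                                  pts_prev, None)
    good_prev = pts_prev[status.ravel() == 1]
    good_cur = pts_cur[status.ravel() == 1]
    # Robust affine fit; RANSAC discards tracks lying on moving objects.
    M, _ = cv2.estimateAffine2D(good_prev, good_cur, method=cv2.RANSAC)
    # Register the previous frame to the current view and subtract.
    warped = cv2.warpAffine(prev_gray, M, prev_gray.shape[::-1])
    diff = cv2.absdiff(cur_gray, warped)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return M, mask
```

In a full mosaic-based tracker, the per-frame affine transforms would also be accumulated to paste each registered frame into the growing panorama.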

Zoom Motion Estimation Method by Using Depth Information

  • 권순각;박유현;권기룡
    • 한국멀티미디어학회논문지 / Vol. 16, No. 2 / pp. 131-137 / 2013
  • Zoom motion estimation for video is very complex to implement. This paper proposes a method that uses a depth camera and a color camera together to implement zoom motion estimation. The distance information of the current block and the reference block is obtained from the depth camera, and the zoom ratio between the two blocks is computed from this distance information. By enlarging or shrinking the reference block according to the zoom ratio, the motion-estimation residual signal can be reduced. The proposed method can therefore improve motion-estimation accuracy without a large increase in complexity for zoom motion estimation. The motion-estimation accuracy of the proposed method was measured through simulations, confirming that the motion-estimation error is greatly reduced compared with the conventional block-matching method.
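
A rough sketch of the depth-driven zoom compensation idea follows, assuming a simple ratio of block depths as the zoom factor (the exact ratio definition, patch sizes, and interpolation are assumptions, not the paper's specification).

```python
import numpy as np
import cv2

def zoom_compensated_sad(cur_block, ref_patch, depth_cur, depth_ref):
    """Scale the reference patch by a depth-derived zoom ratio, then compare it
    with the current block using the sum of absolute differences (SAD).

    cur_block : HxW grayscale block from the current frame
    ref_patch : patch from the reference frame, centred on the candidate
                position and somewhat larger than HxW so it survives cropping
    depth_cur, depth_ref : average depth of the two blocks from the depth camera
    """
    h, w = cur_block.shape
    # Assumed ratio: a block that moved closer (smaller depth) appears larger,
    # so the reference patch is enlarged by depth_ref / depth_cur.
    zoom = depth_ref / depth_cur
    scaled = cv2.resize(ref_patch, None, fx=zoom, fy=zoom,
                        interpolation=cv2.INTER_LINEAR)
    # Crop the central HxW region of the rescaled reference patch.
    cy, cx = scaled.shape[0] // 2, scaled.shape[1] // 2
    crop = scaled[cy - h // 2: cy - h // 2 + h,
                  cx - w // 2: cx - w // 2 + w]
    if crop.shape != cur_block.shape:        # patch too small after shrinking
        return np.inf
    return np.abs(cur_block.astype(np.int32) - crop.astype(np.int32)).sum()
```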

Modified Particle Filtering for Unstable Handheld Camera-Based Object Tracking

  • Lee, Seungwon;Hayes, Monson H.;Paik, Joonki
    • IEIE Transactions on Smart Processing and Computing / Vol. 1, No. 2 / pp. 78-87 / 2012
  • In this paper, we address the tracking problems caused by camera motion and by the rolling-shutter effects of the CMOS sensors in consumer handheld cameras, such as mobile cameras, digital cameras, and digital camcorders. A modified particle filtering method is proposed for simultaneously tracking objects and compensating for the effects of camera motion. The proposed method uses an elastic registration (ER) algorithm that considers the global affine motion as well as the brightness and contrast between images, assuming that camera motion results in an affine transform of the image between two successive frames. Because the camera motion is modeled globally by an affine transform, only the global affine model is considered instead of a local model, and only the brightness parameter is used for intensity variation; the contrast parameters of the original ER algorithm are ignored because the change in illumination between temporally adjacent frames is small. The proposed particle filtering consists of four steps: (i) prediction, (ii) compensation of the prediction-state error based on camera motion estimation, (iii) update, and (iv) re-sampling. A larger number of particles would otherwise be needed when camera motion introduces a prediction-state error at the prediction step; the proposed method robustly tracks the object of interest by compensating for this error using the affine motion model estimated by ER. Experimental results show that the proposed method outperforms the conventional particle filter and can track moving objects robustly in consumer handheld imaging devices.
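
A compact sketch of the four-step filter loop described above; the motion model, likelihood, noise level, and resampling rule are generic placeholders rather than the paper's ER-based design.

```python
import numpy as np

def track_step(particles, weights, affine_A, affine_b, likelihood, motion_std=5.0):
    """One iteration of a bootstrap particle filter with global camera-motion
    compensation (a rough sketch).

    particles : (N, 2) array of candidate object positions (x, y)
    weights   : (N,) normalized importance weights
    affine_A, affine_b : 2x2 matrix and 2-vector of the global affine motion
                estimated between the previous and current frame
    likelihood : callable mapping (N, 2) positions to (N,) positive scores
    """
    n = len(particles)
    # (i) prediction: constant-position model with Gaussian diffusion
    pred = particles + np.random.normal(0.0, motion_std, particles.shape)
    # (ii) compensate the prediction error caused by camera motion by mapping
    #      particles through the estimated global affine transform
    pred = pred @ affine_A.T + affine_b
    # (iii) update: re-weight particles by the measurement likelihood
    w = weights * likelihood(pred)
    w = w / w.sum()
    # (iv) re-sampling: systematic resampling when the effective sample size drops
    if 1.0 / np.sum(w ** 2) < n / 2:
        idx = np.searchsorted(np.cumsum(w), (np.arange(n) + np.random.rand()) / n)
        pred, w = pred[idx], np.full(n, 1.0 / n)
    return pred, w
```

The current object-position estimate would then be the weighted mean of the particles.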


Dynamic Mosaic based Compression

  • 박동진;김동규;정영기
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2003년도 하계종합학술대회 논문집 Ⅳ / pp. 1944-1947 / 2003
  • In this paper, we propose a dynamic mosaic-based compression system that builds a mosaic background and transmits only the change information. A dynamic mosaic of the background is progressively integrated into a single image using the camera motion information. For the camera motion estimation, we calculate affine motion parameters for each frame sequentially with respect to its previous frame. The camera motion is robustly estimated on the background by discriminating between background and foreground regions, and a modified block-based motion estimation is used to separate the background region.


A Region Depth Estimation Algorithm using Motion Vector from Monocular Video Sequence

  • 손정만;박영민;윤영우
    • 융합신호처리학회논문지 / Vol. 5, No. 2 / pp. 96-105 / 2004
  • Reconstructing a 3D image from 2D images requires depth information for each pixel, and the manual work generally involved in reconstructing a 3D model consumes much time and cost. The goal of this paper is to extract the relative depth information of regions from a monocular video sequence acquired while the camera is moving. It is based on the fact that the motion of every point in the image caused by camera movement depends on the depth information. The motion vectors obtained with a full-search technique are compensated for camera rotation and zoom. The average depth is measured by analyzing the motion vectors, and the relative depth of each region with respect to the average depth is obtained. Experimental results show that the relative depth of a region agrees with the relative depth perceived by humans.
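
As a loose illustration of the final step, the sketch below assigns each region a depth relative to the frame average using the common heuristic that, for a translating camera, image motion magnitude is inversely proportional to depth (the heuristic, inputs, and region labels are assumptions for illustration, not the paper's exact formulation).

```python
import numpy as np

def relative_region_depth(mv_mag, labels):
    """Assign each region a depth relative to the frame-average depth.
    Assumes rotation and zoom have already been compensated, so that larger
    residual motion means a closer region.

    mv_mag : HxW array of motion-vector magnitudes (pixels/frame)
    labels : HxW integer array of region labels from any segmentation
    Returns {label: relative_depth}, where 1.0 corresponds to the average depth.
    """
    eps = 1e-6
    avg_mag = mv_mag.mean() + eps            # corresponds to the average depth
    rel_depth = {}
    for r in np.unique(labels):
        region_mag = mv_mag[labels == r].mean() + eps
        # Larger motion -> closer region -> smaller relative depth.
        rel_depth[int(r)] = avg_mag / region_mag
    return rel_depth
```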


Camera Motion Detection Using Estimation of Motion Vector's Angle

  • 김재호;이장훈;장소은
    • 한국멀티미디어학회논문지 / Vol. 21, No. 9 / pp. 1052-1061 / 2018
  • In this paper, we propose a new algorithm that is robust to objects whose motion differs from the camera motion and that can accurately detect camera motion even in high-resolution images. First, for more accurate camera motion detection, a global motion filter based on the entropy of the motion vectors is used to distinguish the background from the objects. A block matching algorithm is used to find accurate motion vectors. In addition, a matched filter based on the ideal motion-vector angle of each block is used, and motion types covering the four diagonal directions, zoom-in, and zoom-out are additionally handled. Experiments show that the precision, recall, and accuracy of camera motion detection are improved by 12.5%, 8.6%, and 9.5%, respectively, compared with recent results.
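
The matched-filter idea can be sketched as follows: build an ideal motion-vector direction field for each camera-motion hypothesis and score the observed block motion vectors against it by cosine similarity (the hypotheses and scoring rule are simplified stand-ins for the paper's filter design).

```python
import numpy as np

def ideal_angle_field(h, w, motion):
    """Ideal motion-vector direction (unit vectors, x/y) at each block centre
    for a given camera-motion hypothesis. Only a few hypotheses are sketched."""
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    if motion == "pan_right":        # scene content appears to move left
        d = np.stack([-np.ones((h, w)), np.zeros((h, w))], axis=-1)
    elif motion == "tilt_up":        # scene content appears to move down
        d = np.stack([np.zeros((h, w)), np.ones((h, w))], axis=-1)
    elif motion == "zoom_in":        # vectors point outward from the centre
        d = np.stack([xs - cx, ys - cy], axis=-1)
    elif motion == "zoom_out":       # vectors point toward the centre
        d = np.stack([cx - xs, cy - ys], axis=-1)
    else:
        raise ValueError(motion)
    norm = np.linalg.norm(d, axis=-1, keepdims=True)
    return d / np.maximum(norm, 1e-6)

def classify_camera_motion(mv, hypotheses=("pan_right", "tilt_up", "zoom_in", "zoom_out")):
    """Pick the hypothesis whose ideal angle field best matches the observed
    block motion vectors mv (HxWx2, x/y components), by mean cosine similarity."""
    unit = mv / np.maximum(np.linalg.norm(mv, axis=-1, keepdims=True), 1e-6)
    scores = {m: float((unit * ideal_angle_field(*mv.shape[:2], m)).sum(-1).mean())
              for m in hypotheses}
    return max(scores, key=scores.get), scores
```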

A Defocus Technique based Depth from Lens Translation using Sequential SVD Factorization

  • Kim, Jong-Il;Ahn, Hyun-Sik;Jeong, Gu-Min;Kim, Do-Hyun
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2005년도 ICCAS / pp. 383-388 / 2005
  • Depth recovery in robot vision is an essential problem for inferring the three-dimensional geometry of a scene from a sequence of two-dimensional images. Many approaches to depth estimation have been proposed, based on cues such as stereopsis, motion parallax, and blurring phenomena. Among these cues, depth from lens translation is based on shape from motion using feature points: it relies on the correspondence of feature points detected in images and estimates depth from the motion of those feature points. Approaches using motion vectors suffer from occlusion and missing-part problems, and image blur is ignored in the feature-point detection. This paper presents a novel defocus-technique-based approach to depth from lens translation using sequential SVD factorization. Solving these problems requires modeling the mutual relationship between the light and the optics up to the image plane. To this end, we first discuss the optical properties of the camera system, because image blur varies with the camera parameter settings. The camera system is described by a model that integrates a thin-lens camera model, which explains the light and optical properties, and a perspective-projection camera model, which explains depth from lens translation. Depth from lens translation is then performed using feature points detected at the edges of the image blur; these feature points carry depth information derived from the blur width. The shape and motion are estimated from the motion of the feature points using sequential SVD factorization to obtain the orthogonal matrices of the singular value decomposition. Experiments on sequences of real and synthetic images compare the presented method with conventional depth from lens translation, and the results demonstrate its validity and its applicability to depth estimation.
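
The shape-and-motion factorization step is in the spirit of the Tomasi-Kanade method; a generic rank-3 SVD factorization of a feature-track measurement matrix is sketched below (the paper's sequential, defocus-aware variant and the metric upgrade are not reproduced).

```python
import numpy as np

def factorize_shape_and_motion(tracks):
    """Rank-3 factorization of a measurement matrix built from feature tracks.

    tracks : array of shape (F, P, 2) with the (x, y) image position of
             P feature points over F frames.
    Returns (motion, shape): motion is (2F, 3), shape is (3, P), so that the
    centred measurement matrix is approximately motion @ shape.
    """
    F, P, _ = tracks.shape
    # Build the 2F x P measurement matrix (x rows, then y rows) and remove
    # the per-frame centroid (translation component).
    W = np.concatenate([tracks[..., 0], tracks[..., 1]], axis=0)
    W = W - W.mean(axis=1, keepdims=True)
    # Truncated SVD: keep the three dominant components.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    S3 = np.diag(np.sqrt(s[:3]))
    motion = U[:, :3] @ S3            # camera motion (up to an affine ambiguity)
    shape = S3 @ Vt[:3, :]            # 3-D structure (same ambiguity)
    return motion, shape
```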
