• Title/Summary/Keyword: Camera motion estimation

Search results: 175

Range and Velocity Estimation of the Object using a Moving Camera (움직이는 카메라를 이용한 목표물의 거리 및 속도 추정)

  • Byun, Sang-Hoon;Chwa, Dongkyoung
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.62 no.12
    • /
    • pp.1737-1743
    • /
    • 2013
  • This paper proposes a method for estimating the range and velocity of an object using a moving camera. Structure and motion (SaM) estimation recovers the Euclidean geometry of the object as well as the relative motion between the camera and the object. Unlike previous works, the proposed estimation method relaxes the constraints on camera and object motion. To this end, we arrange the dynamics of the moving camera-moving object relative motion model in a form that allows a nonlinear observer to be employed for the SaM estimation. Both simulations and experiments confirm the validity of the proposed estimation algorithm.
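The observer idea in this line of work — reconstructing an unmeasured velocity from a measured relative position by driving an internal model with the output error — can be illustrated in a deliberately simplified 1-D linear setting. This is only a sketch: the paper's observer is nonlinear and acts on the full camera-object relative dynamics, and the gains below are arbitrary.

```python
import numpy as np

dt, T = 1e-3, 5.0
l1, l2 = 20.0, 100.0          # observer gains (error poles at -10, -10)
v_true = 0.7                  # unknown relative velocity to be estimated

x = 0.0                       # true relative position (the measurement)
x_hat, v_hat = 0.0, 0.0       # observer state

for _ in range(int(T / dt)):
    x += v_true * dt                   # true motion: constant velocity
    e = x - x_hat                      # output (position) error
    x_hat += (v_hat + l1 * e) * dt     # internal model driven by the error
    v_hat += l2 * e * dt

print(round(v_hat, 3))  # 0.7
```

With the error poles placed at -10, the velocity estimate settles well within the 5 s horizon.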

Fine-Motion Estimation Using Ego/Exo-Cameras

  • Uhm, Taeyoung;Ryu, Minsoo;Park, Jong-Il
    • ETRI Journal
    • /
    • v.37 no.4
    • /
    • pp.766-771
    • /
    • 2015
  • Robust motion estimation plays an important role in novel methods of human-computer interaction with electronic devices. Existing pose estimation using a monocular camera employs either ego-motion or exo-motion, neither of which is sufficiently accurate for estimating fine motion due to the motion ambiguity between rotation and translation. This paper presents a hybrid vision-based pose estimation method for fine-motion estimation that is specifically capable of extracting human body motion accurately. The method uses an ego-camera attached to a point of interest and exo-cameras located in the immediate surroundings of the point of interest. The exo-cameras can easily track the exact position of the point of interest by triangulation. Once the position is given, the ego-camera can accurately obtain the point of interest's orientation. In this way, any ambiguity between rotation and translation is eliminated and the exact motion of a target point (that is, the ego-camera) can be obtained. The proposed method is expected to provide a practical solution for robustly estimating fine motion in a non-contact manner, such as in interactive games designed for special purposes (for example, remote rehabilitation care systems).
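The triangulation step performed by the exo-cameras can be sketched with the standard linear (DLT) method; the projection matrices and point below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 projection matrices of the two exo-cameras.
    x1, x2 : (u, v) pixel coordinates of the point of interest.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous solution: right singular vector of the smallest
    # singular value.
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Illustrative rig: two cameras one unit apart along the x-axis.
K = np.diag([800.0, 800.0, 1.0])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), [[-1.0], [0.0], [0.0]]])

X_true = np.array([0.3, -0.2, 5.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_est, X_true))  # True
```

With noise-free correspondences the DLT system has an exact rank-3 null space, so the true point is recovered up to numerical precision.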

Motion Estimation of a Moving Object in Three-Dimensional Space using a Camera (카메라를 이용한 3차원 공간상의 이동 목표물의 거리정보기반 모션추정)

  • Chwa, Dongkyoung
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.65 no.12
    • /
    • pp.2057-2060
    • /
    • 2016
  • Range-based motion estimation of a moving object using a camera is proposed. Whereas existing results constrain the motion of the object being estimated, the proposed method relaxes these constraints so that more general object motion can be handled. To this end, a nonlinear observer is designed based on the relative dynamics between the object and the camera, so that both the object velocity and the unknown camera velocity can be estimated. Stability analysis and simulation results for the moving object are provided to show the effectiveness of the proposed method.

Active Object Tracking using Image Mosaic Background

  • Jung, Young-Kee;Woo, Dong-Min
    • Journal of information and communication convergence engineering
    • /
    • v.2 no.1
    • /
    • pp.52-57
    • /
    • 2004
  • In this paper, we propose a panorama-based object tracking scheme for wide-view surveillance systems that can detect and track moving objects with a pan-tilt camera. A dynamic mosaic of the background is progressively integrated into a single image using the camera motion information. For camera motion estimation, we calculate affine motion parameters for each frame sequentially with respect to its previous frame. The camera motion is robustly estimated on the background by discriminating between background and foreground regions; a modified block-based motion estimation is used to separate the background region. Each moving object is segmented by image subtraction from the mosaic background. The proposed tracking system has demonstrated good performance on several test video sequences.
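The per-frame affine motion estimation can be sketched as a least-squares fit over background point correspondences; the correspondences here are synthetic stand-ins for the output of the block-based motion estimation.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares affine parameters mapping src points to dst points.

    src, dst : (N, 2) arrays of corresponding background points.
    Returns a 2x3 matrix [A | t] such that dst ~= src @ A.T + t.
    """
    n = len(src)
    # Each correspondence contributes two rows of the linear system.
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = src
    M[0::2, 4] = 1.0
    M[1::2, 2:4] = src
    M[1::2, 5] = 1.0
    b = dst.reshape(-1)
    p, *_ = np.linalg.lstsq(M, b, rcond=None)
    return np.array([[p[0], p[1], p[4]],
                     [p[2], p[3], p[5]]])

# Synthetic check: a small rotation plus translation (camera pan).
theta = 0.02
A_true = np.array([[np.cos(theta), -np.sin(theta), 3.0],
                   [np.sin(theta),  np.cos(theta), -1.5]])
rng = np.random.default_rng(0)
src = rng.uniform(0, 640, size=(40, 2))
dst = src @ A_true[:, :2].T + A_true[:, 2]
print(np.allclose(estimate_affine(src, dst), A_true))  # True
```

In practice the fit would be restricted to points classified as background, since foreground motion would otherwise bias the global estimate.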

Zoom Motion Estimation Method by Using Depth Information (깊이 정보를 이용한 줌 움직임 추정 방법)

  • Kwon, Soon-Kak;Park, Yoo-Hyun;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society
    • /
    • v.16 no.2
    • /
    • pp.131-137
    • /
    • 2013
  • Zoom motion estimation in video sequences is complicated to implement. In this paper, we propose a method that implements zoom motion estimation using a depth camera together with a color camera. The depth camera obtains the distances of the current block and the reference block, and the zoom ratio between the two blocks is calculated from this distance information. By zooming the reference block by this ratio, the motion-compensated difference signal can be reduced. The proposed method therefore increases the accuracy of motion estimation without increasing the complexity of zoom motion estimation. Simulations measuring the motion estimation accuracy of the proposed method show that the estimation error is significantly reduced compared to the conventional block matching method.
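A minimal sketch of the zoom-ratio idea, assuming a pinhole model in which an object's image size scales inversely with its depth-camera distance; the block resampler is a nearest-neighbor stand-in for whatever interpolation a codec would actually use.

```python
import numpy as np

def zoom_ratio(depth_ref, depth_cur):
    """Zoom ratio between blocks from depth-camera distances.

    Under a pinhole model, image size scales inversely with distance:
    a block at half the distance appears twice as large.
    """
    return depth_ref / depth_cur

def zoom_block(block, ratio):
    """Resample a square reference block by `ratio` (nearest neighbor).

    Output keeps the original block size, centered on the block center.
    """
    n = block.shape[0]
    # Source coordinates that land on the output grid after zooming.
    coords = (np.arange(n) - n / 2) / ratio + n / 2
    idx = np.clip(np.round(coords).astype(int), 0, n - 1)
    return block[np.ix_(idx, idx)]

# Object moved from 2.0 m to 1.0 m, so it now appears twice as large;
# the reference block is enlarged accordingly before block matching.
r = zoom_ratio(2.0, 1.0)
print(r)  # 2.0
block = np.arange(64, dtype=float).reshape(8, 8)
zoomed = zoom_block(block, r)
print(zoomed.shape)  # (8, 8)
```

The zoomed reference block would then enter the usual SAD/SSD matching step in place of the unscaled one.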

Modified Particle Filtering for Unstable Handheld Camera-Based Object Tracking

  • Lee, Seungwon;Hayes, Monson H.;Paik, Joonki
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.1 no.2
    • /
    • pp.78-87
    • /
    • 2012
  • In this paper, we address the tracking problem caused by camera motion and the rolling shutter effects associated with CMOS sensors in consumer handheld cameras, such as mobile cameras, digital cameras, and digital camcorders. A modified particle filtering method is proposed for simultaneously tracking objects and compensating for the effects of camera motion. The proposed method uses an elastic registration (ER) algorithm that considers global affine motion as well as brightness and contrast between images, assuming that camera motion results in an affine transform of the image between two successive frames. Because the camera motion is modeled globally by an affine transform, only the global affine model is considered instead of a local model. For intensity variation, only the brightness parameter is used; the contrast parameters of the original ER algorithm are ignored because the change in illumination between temporally adjacent frames is small. The proposed particle filtering consists of four steps: (i) prediction, (ii) compensation of the prediction state error based on camera motion estimation, (iii) update, and (iv) re-sampling. A larger number of particles would otherwise be needed when camera motion generates a prediction state error at the prediction step. The proposed method robustly tracks the object of interest by compensating for the prediction state error using the affine motion model estimated by ER. Experimental results show that the proposed method outperforms the conventional particle filter and can track moving objects robustly in consumer handheld imaging devices.
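Step (ii) — compensating each particle's predicted state for the estimated global camera motion — can be sketched by applying the affine transform to the particle positions. This is a sketch only: the filter below carries a bare 2-D position state, and the affine parameters stand in for an elastic-registration estimate.

```python
import numpy as np

rng = np.random.default_rng(1)

def predict(particles, sigma=2.0):
    """(i) Prediction: random-walk dynamics on particle positions."""
    return particles + rng.normal(0.0, sigma, particles.shape)

def compensate(particles, A, t):
    """(ii) Apply the estimated global camera motion (affine A, shift t)
    to every particle, cancelling the prediction error that the camera
    motion would otherwise introduce."""
    return particles @ A.T + t

# 100 particles around an object at (320, 240).
particles = rng.normal([320.0, 240.0], 5.0, size=(100, 2))

# Illustrative camera motion "estimated by ER": a pure shift of
# (+10, -4) pixels between adjacent frames.
A = np.eye(2)
t = np.array([10.0, -4.0])

particles = compensate(predict(particles), A, t)
print(particles.mean(axis=0))  # cloud center near (330, 236)
```

The update and re-sampling steps (iii)-(iv) would then weight particles against the observation likelihood as in a standard particle filter.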

Dynamic Mosaic based Compression (동적 모자이크 기반의 압축)

  • 박동진;김동규;정영기
    • Proceedings of the IEEK Conference
    • /
    • 2003.07e
    • /
    • pp.1944-1947
    • /
    • 2003
  • In this paper, we propose a dynamic mosaic-based compression system that creates a mosaic background and transmits only the change information. A dynamic mosaic of the background is progressively integrated into a single image using the camera motion information. For camera motion estimation, we calculate affine motion parameters for each frame sequentially with respect to its previous frame. The camera motion is robustly estimated on the background by discriminating between background and foreground regions; a modified block-based motion estimation is used to separate the background region.

A Region Depth Estimation Algorithm using Motion Vector from Monocular Video Sequence (단안영상에서 움직임 벡터를 이용한 영역의 깊이추정)

  • 손정만;박영민;윤영우
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.5 no.2
    • /
    • pp.96-105
    • /
    • 2004
  • Recovering a 3D image from 2D requires depth information for each picture element, and manual creation of such 3D models is time-consuming and expensive. The goal of this paper is to estimate the relative depth of every region from a single-view image sequence under camera translation. The approach is based on the fact that the motion of every point in an image taken under camera translation depends on its depth. Motion vectors obtained by full-search motion estimation are compensated for camera rotation and zooming. We have developed a framework that estimates the average frame depth by analyzing the motion vectors and then calculates the relative depth of each region with respect to the average frame depth. Simulation results show that the estimated depth of regions belonging to near or far objects is consistent with the relative depth that humans perceive.
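The inverse relation between image motion and depth under camera translation can be sketched as follows; the motion magnitudes are illustrative, not taken from the paper.

```python
import numpy as np

def relative_region_depth(region_motion, frame_motion):
    """Relative depth of a region w.r.t. the average frame depth.

    Under pure camera translation, |motion| is proportional to 1/depth,
    so depth_region / depth_frame = |motion_frame| / |motion_region|.
    """
    return frame_motion / region_motion

# Illustrative motion magnitudes (pixels/frame) from full-search block
# matching, already compensated for camera rotation and zoom.
region_motions = np.array([8.0, 4.0, 2.0])   # three segmented regions
frame_motion = region_motions.mean()          # average frame motion

rel_depths = relative_region_depth(region_motions, frame_motion)
print(rel_depths)  # fast-moving regions are nearer than slow ones
```

Regions with larger motion than the frame average come out with relative depth below 1 (nearer), and slower regions above 1 (farther).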

Camera Motion Detection Using Estimation of Motion Vector's Angle (모션 벡터의 각도 성분 추정을 통한 카메라 움직임 검출)

  • Kim, Jae Ho;Lee, Jang Hoon;Jang, Soeun
    • Journal of Korea Multimedia Society
    • /
    • v.21 no.9
    • /
    • pp.1052-1061
    • /
    • 2018
  • In this paper, we propose a new algorithm that is robust to the influence of objects whose motion differs from the camera motion and that can accurately detect camera motion even in high-resolution images. First, for more accurate camera motion detection, a global motion filter based on the entropy of the motion vectors is used to distinguish the background from objects. A block matching algorithm is used to find exact motion vectors, and a matched filter based on the angle of the ideal motion vector of each block is applied. Motion vectors for the four diagonal directions, zoom-in, and zoom-out are additionally handled. Experiments show that the precision, recall, and accuracy of camera motion detection are improved by 12.5%, 8.6%, and 9.5%, respectively, compared to recent results.
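The matched-filter idea — scoring each candidate camera motion by how well its ideal motion-vector angles match the observed field — can be sketched on synthetic data; the candidate set, scoring rule, and noise level are assumptions for illustration.

```python
import numpy as np

def ideal_angles(h, w, motion):
    """Ideal motion-vector angle for each block under a camera motion.

    Pan gives a uniform field; zoom-in vectors point outward from the
    image center, zoom-out vectors point inward.
    """
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    if motion == "pan_right":            # background appears to move left
        return np.full((h, w), np.pi)
    if motion == "zoom_in":
        return np.arctan2(ys - cy, xs - cx)
    if motion == "zoom_out":
        return np.arctan2(cy - ys, cx - xs)
    raise ValueError(motion)

def classify(angles, candidates=("pan_right", "zoom_in", "zoom_out")):
    """Pick the candidate whose ideal field best matches the observed
    angles: the mean cosine of the angular difference acts as the
    matched filter score."""
    h, w = angles.shape
    scores = {c: np.cos(angles - ideal_angles(h, w, c)).mean()
              for c in candidates}
    return max(scores, key=scores.get)

# Synthetic zoom-in field with small angular noise on each block.
rng = np.random.default_rng(0)
observed = ideal_angles(16, 16, "zoom_in") + rng.normal(0, 0.2, (16, 16))
print(classify(observed))  # zoom_in
```

The entropy-based global motion filter from the paper would run before this step, discarding blocks whose vectors belong to independently moving objects.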

A Defocus Technique based Depth from Lens Translation using Sequential SVD Factorization

  • Kim, Jong-Il;Ahn, Hyun-Sik;Jeong, Gu-Min;Kim, Do-Hyun
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2005.06a
    • /
    • pp.383-388
    • /
    • 2005
  • Depth recovery in robot vision is an essential problem: inferring the three-dimensional geometry of scenes from a sequence of two-dimensional images. Many approaches to depth estimation have been proposed, such as stereopsis, motion parallax, and blurring phenomena. Among these cues, depth from lens translation is based on shape from motion using feature points: it relies on the correspondence of feature points detected in images and estimates depth from information on the motion of those feature points. Approaches using motion vectors suffer from occlusion and missing-part problems, and image blur is ignored in the feature point detection. This paper presents a novel defocus technique for depth from lens translation using sequential SVD factorization. Solving these problems requires modeling the relationship between light and optics up to the image plane. To this end, we first discuss the optical properties of a camera system, because image blur varies with the camera parameter settings. The camera system integrates a thin-lens camera model, which explains the light and optical properties, with a perspective projection camera model, which explains depth from lens translation. Depth from lens translation is then formulated to use feature points detected at the edges of the image blur; these feature points carry depth information derived from the blur width. The shape and motion are estimated from the motion of the feature points, using sequential SVD factorization to obtain the orthogonal matrices of the singular value decomposition. Experiments have been performed with sequences of real and synthetic images, comparing the presented method with depth from lens translation. The results demonstrate the validity and applicability of the proposed method for depth estimation.
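The SVD factorization backbone can be sketched in its batch Tomasi-Kanade-style form on synthetic orthographic data; the paper uses a sequential variant and adds the defocus cue, both of which are omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
num_points, num_frames = 20, 6
S = rng.normal(size=(3, num_points))        # 3D feature points (shape)

def random_rotation():
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    return Q

# Orthographic measurement matrix: two image rows per frame (2F x P).
W = np.vstack([random_rotation()[:2] @ S for _ in range(num_frames)])
W -= W.mean(axis=1, keepdims=True)          # register rows to the centroid

# Rank-3 factorization into motion and shape (up to an affine ambiguity).
U, s, Vt = np.linalg.svd(W, full_matrices=False)
M_hat = U[:, :3] * s[:3]                    # camera motion
S_hat = Vt[:3]                              # recovered shape
print(np.allclose(M_hat @ S_hat, W))        # True: W has rank 3
```

The remaining affine ambiguity between `M_hat` and `S_hat` is usually resolved by enforcing orthonormality of the per-frame motion rows (the metric constraint), a step omitted in this sketch.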
