• Title/Summary/Keyword: Camera Position


Determination of Optimal Position of an Active Camera System Using Inverse Kinematics of Virtual Link Model and Manipulability Measure (가상 링크 모델의 역기구학과 조작성을 이용한 능동 카메라 시스템의 최적 위치 결정에 관한 연구)

  • Chu, Gil-Whoan;Cho, Jae-Soo;Chung, Myung-Jin
    • Proceedings of the KIEE Conference / 2003.11b / pp.239-242 / 2003
  • In this paper, we propose a method to determine the optimal camera position using the inverse kinematics of a virtual link model and a manipulability measure. We model the variable distance and viewing direction between a target object and a camera position as a virtual link. By solving the inverse kinematics of the virtual link model, we find the regions that satisfy the direction and distance constraints for observing the target object; the inverse kinematics solution simultaneously satisfies camera accessibility as well as the direction and distance constraints. We then use a manipulability measure of the active camera system to select an optimal camera position among the multiple inverse kinematics solutions. Using the inverse kinematics of the virtual link model together with the manipulability measure, the optimal camera position for observing a target object can be determined easily and rapidly.
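The manipulability measure used to rank the inverse-kinematics solutions is commonly Yoshikawa's $w = \sqrt{\det(JJ^T)}$. A minimal sketch, with a planar 2-link Jacobian standing in for the paper's virtual-link model (link lengths and candidate joint angles are illustrative assumptions):

```python
import numpy as np

def manipulability(J):
    """Yoshikawa's manipulability measure w = sqrt(det(J J^T))
    for a manipulator Jacobian J."""
    return np.sqrt(np.linalg.det(J @ J.T))

def jacobian_2link(q1, q2, l1=1.0, l2=1.0):
    """Planar 2-link Jacobian (a stand-in for the camera system's kinematics)."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

# Rank candidate IK solutions by manipulability; the nearly-straight arm
# (small q2) is close to singular and scores low.
candidates = [(0.3, 1.2), (1.5, 0.2)]
best = max(candidates, key=lambda q: manipulability(jacobian_2link(*q)))
```

For this planar arm $w$ reduces to $l_1 l_2 |\sin q_2|$, so the measure directly penalizes near-singular elbow configurations.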


A Study on Applying Proxemics to Camera Position in VR Animation

  • Qu, Lin;Yun, Tae-Soo
    • International Journal of Internet, Broadcasting and Communication / v.13 no.3 / pp.73-83 / 2021
  • With the development of science and technology, virtual reality (VR) has become increasingly popular and is widely used in fields such as aviation, education, medical science, culture, art, and entertainment. This technology has changed the way humans interact with computers and the way people live and entertain themselves. In the field of animation, virtual reality also brings a new viewing form and an immersive experience. The paper demonstrates the production of VR animation and then discusses the camera's position in VR animation: where to place the VR camera to provide a comfortable viewing experience. Taking proxemics as its theoretical framework, the paper proposes a hypothesis about camera position, which is then verified by a series of experiments in animation examining the correlation between camera position and proxemics theory.

Head tracking system using image processing (영상처리를 이용한 머리의 움직임 추적 시스템)

  • 박경수;임창주;반영환;장필식
    • Journal of the Ergonomics Society of Korea / v.16 no.3 / pp.1-10 / 1997
  • This paper is concerned with the development and evaluation of a camera calibration method for a real-time head tracking system. Tracking of head movements is important in the design of an eye-controlled human/computer interface and in virtual environments. We propose a video-based head tracking system. A camera was mounted on the subject's head and captured the front view containing eight 3-dimensional reference points (passive retro-reflecting markers) fixed at known positions on the computer monitor. The reference points were captured by an image processing board and used to calculate the 3-dimensional position and orientation of the camera. A suitable camera calibration method for providing accurate extrinsic camera parameters is proposed. The method has three steps. In the first step, the image center is calibrated using the method of varying focal length. In the second step, the focal length and the scale factor are calibrated from the Direct Linear Transformation (DLT) matrix obtained from the known position and orientation of the camera. In the third step, the position and orientation of the camera are calculated from the DLT matrix using the calibrated intrinsic camera parameters. Experimental results showed that the average error of the camera positions (3-dimensional) is about 0.53 cm, the angular errors of the camera orientations are less than $0.55^{\circ}$, and the data acquisition rate is about 10 Hz. The results of this study can be applied to the tracking of head movements for eye-controlled human/computer interfaces and virtual environments.
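The third step above, recovering the camera's position and orientation from the DLT matrix once the intrinsics are calibrated, can be sketched as follows. The intrinsic matrix, rotation, and translation here are synthetic stand-ins, not the paper's data:

```python
import numpy as np

# Synthetic ground truth: intrinsics K, rotation R (about z), translation t.
K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta), 0.],
              [np.sin(theta),  np.cos(theta), 0.],
              [0., 0., 1.]])
t = np.array([0.2, -0.1, 2.0])
P = K @ np.hstack([R, t[:, None]])   # 3x4 DLT (projection) matrix

# Third step: strip the calibrated intrinsics off the DLT matrix
# to expose the extrinsic parameters [R | t].
M = np.linalg.inv(K) @ P
R_est, t_est = M[:, :3], M[:, 3]
camera_position = -R_est.T @ t_est   # camera center in world coordinates
```

The camera center follows from $RC + t = 0$, i.e. $C = -R^{T}t$; a real DLT matrix estimated from noisy image points would also need a normalization and orthogonalization step not shown here.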


Development of a Remote Object's 3D Position Measuring System (원격지 물체의 삼차원 위치 측정시스템의 개발)

  • Park, Kang
    • Journal of the Korean Society for Precision Engineering / v.17 no.8 / pp.60-70 / 2000
  • In this paper, a 3D position measuring device that finds the 3D position of an arbitrarily placed object using a camera system is introduced. The camera system consists of a CCD camera, a laser, and three stepping motors. The viewing direction of the camera is controlled by two stepping motors (pan and tilt motors), and the direction of the laser is controlled by a third stepping motor (laser motor). If an object in a remote place is selected from a live video image, the x, y, z coordinates of the object with respect to the reference coordinate system can be obtained by calculating the distance from the camera to the object using a structured light scheme and by obtaining the orientation of the camera, which is controlled by the two stepping motors. The angles of the stepping motors are controlled by an SGI O2 workstation through a parallel port. The mathematical model of the camera and the distance measuring system are calibrated to calculate an accurate position of the object. This 3D position measuring device can be used to acquire information necessary to monitor a remote place.
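The final coordinate computation above combines the structured-light distance with the pan/tilt orientation. A minimal sketch, assuming a simple spherical-coordinate convention for the pan/tilt head (the paper's actual kinematic model and calibration are not reproduced):

```python
import numpy as np

def to_cartesian(pan, tilt, distance):
    """Convert pan/tilt angles (radians) and the distance measured by the
    structured-light scheme into x, y, z relative to the pan/tilt head.
    The axis convention (pan about z, tilt up from the xy-plane) is an
    illustrative assumption."""
    x = distance * np.cos(tilt) * np.cos(pan)
    y = distance * np.cos(tilt) * np.sin(pan)
    z = distance * np.sin(tilt)
    return np.array([x, y, z])
```

For example, zero pan and tilt at 2 m places the object straight ahead on the x-axis; a real system would add the calibrated offsets between the camera, laser, and motor axes.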


Camera Exterior Parameters Based on Vector Inner Production Application: Absolute Orientation (벡터내적 기반 카메라 외부 파라메터 응용 : 절대표정)

  • Chon, Jae-Choon;Sastry, Shankar
    • Journal of Institute of Control, Robotics and Systems / v.14 no.1 / pp.70-74 / 2008
  • In the field of camera motion research, it is widely held that the position (movement) and pose (rotation) of a camera are correlated and cannot be independently separated. A new equation based on the vector inner product is proposed here to separate position and pose independently. We prove that position and pose are not correlated and apply the equation to the estimation of camera exterior parameters using a real image and 3D data.
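The decoupling idea can be illustrated (this is not the paper's derivation) by the fact that difference vectors between scene points cancel the translation of a rigid motion, while their inner products are preserved by any rotation, so constraints built from inner products depend on the rotation alone:

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.standard_normal((4, 3))                  # world points
R, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random orthogonal matrix
if np.linalg.det(R) < 0:                           # force a proper rotation
    R[:, 0] *= -1
t = np.array([1.0, -2.0, 0.5])

moved = pts @ R.T + t                              # rigid motion of the points

# Difference vectors eliminate t; inner products are invariant under R.
v, w = pts[1] - pts[0], pts[2] - pts[0]
v2, w2 = moved[1] - moved[0], moved[2] - moved[0]
```

Here `v2 @ w2` equals `v @ w` for any rotation and translation, which is the kind of invariance that lets rotation be estimated independently of translation.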

Estimation of the position and orientation of the mobile robot using camera calibration (카메라 캘리브레이션을 이용한 이동로봇의 위치 및 자세 추정)

  • 정기주;최명환;이범희;고명삼
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1992.10a / pp.786-791 / 1992
  • When a mobile robot moves from one place to another, position error occurs due to the limited accuracy of the robot and the effect of environmental noise. In this paper, an accurate method of estimating the position and orientation of a mobile robot using camera calibration is proposed. A Kalman filter is used as the estimation algorithm. The uncertainty in the position of the camera with respect to the robot base frame is considered as well as the position error of the robot. Besides developing the mathematical model for the mobile robot calibration system, the effect of the relative position between the camera and the calibration points is analyzed, and a method to select the most accurate calibration points is also presented.
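The Kalman-filter estimation step can be sketched as a single measurement update fusing a dead-reckoned pose with a camera-derived pose measurement; the state layout and noise covariances below are illustrative assumptions, not the paper's model:

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One linear Kalman measurement update.
    x, P -- prior state estimate and covariance (e.g. from odometry)
    z    -- pose measurement obtained via camera calibration
    H, R -- measurement model and its noise covariance."""
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Robot pose state (x, y, heading); the camera observes all three directly.
x = np.array([0.0, 0.0, 0.0])               # dead-reckoned prior
P = np.diag([0.04, 0.04, 0.01])             # odometry uncertainty
z = np.array([0.10, -0.05, 0.02])           # pose from camera calibration
H = np.eye(3)
R = np.diag([0.01, 0.01, 0.0025])           # camera measurement noise
x, P = kalman_update(x, P, z, H, R)
```

Because the camera is modeled as less noisy than odometry here, the posterior pose moves most of the way toward the camera measurement and the posterior covariance shrinks.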


Position estimation of welding panels for sub-assembly welding line in shipbuilding using camera vision system (조선 소조립 용접자동화의 부재위치 인식을 위한 camera vision system)

  • 전바롬;윤재웅;고국원;조형석
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1997.10a / pp.361-364 / 1997
  • There has been a demand to automate the welding process in shipyards because of its dependence on skilled operators and the harsh working environment. In response, a multiple-robot welding system for the sub-assembly welding line has been developed, realized, and installed at Keoje Shipyard. To realize an automatic welding system, the robots have to be equipped with a sensing system that recognizes the position of the welding panels. In this research, a camera vision system is developed to detect the position of base panels for the sub-assembly line in shipbuilding. Two camera vision systems are used in two different stages (mounting and welding) to automate the recognition and positioning of welding lines. For automatic recognition of panel position, various image processing algorithms are proposed in this paper.
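As one sketch of the kind of image processing involved (the abstract does not specify the paper's algorithms), a panel's centroid and in-plane orientation can be estimated from a binary segmentation mask using image moments:

```python
import numpy as np

def panel_pose(mask):
    """Estimate the centroid (cx, cy) and principal-axis angle of a panel
    from a binary mask, via second-order central image moments.
    This is a generic stand-in, not the paper's method."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    angle = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)  # principal axis
    return cx, cy, angle

# Synthetic axis-aligned panel: rows 10-19, columns 5-44.
img = np.zeros((40, 60), dtype=np.uint8)
img[10:20, 5:45] = 1
cx, cy, angle = panel_pose(img)
```

For the synthetic panel the centroid lands at (24.5, 14.5) and the principal axis is horizontal, i.e. the angle is zero.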


Automatic Camera Pose Determination from a Single Face Image

  • Wei, Li;Lee, Eung-Joo;Ok, Soo-Yol;Bae, Sung-Ho;Lee, Suk-Hwan;Choo, Young-Yeol;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.10 no.12 / pp.1566-1576 / 2007
  • Camera pose information from a 2D face image is very important for synchronizing a virtual 3D face model with the real face. It is also important for many other uses, such as human-computer interfaces, 3D object estimation, and automatic camera control. In this paper, we present a camera position determination algorithm that works from a single 2D face image, using the relationship between mouth position information and face region boundary information. Our algorithm first corrects color bias with a lighting compensation algorithm, then nonlinearly transforms the image into the $YC_bC_r$ color space and uses the visible chrominance features of the face in this color space to detect the face region. For each face candidate, the nearly reversed relationship between the $C_b$ and $C_r$ clusters of the face features is used to detect the mouth position. The geometrical relationship between the mouth position and the face region boundary then determines the rotation angles of the camera about both the x-axis and the y-axis, and the relationship between face region size and camera-face distance determines the camera-face distance. Experimental results demonstrate the validity of the algorithm, and the correct determination rate is high enough for practical application.
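The $YC_bC_r$ transform and chrominance-based face detection can be sketched as below. The BT.601 conversion is standard; the Cb/Cr skin ranges are a commonly cited cluster, not the paper's exact thresholds:

```python
import numpy as np

def rgb_to_ycbcr(img):
    """ITU-R BT.601 RGB -> YCbCr for a float image with channels in [0, 255]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  =           0.299 * r    + 0.587 * g    + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g +      0.5 * b
    cr = 128 +      0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def skin_mask(img, cb_range=(77, 127), cr_range=(133, 173)):
    """Boolean mask of skin-colored pixels, by thresholding the
    chrominance planes (ranges are an assumed skin cluster)."""
    _, cb, cr = rgb_to_ycbcr(img)
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
```

Thresholding only Cb and Cr, not luma, is what makes this kind of detector relatively robust to lighting once the color bias has been compensated.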


Human Head Mouse System Based on Facial Gesture Recognition

  • Wei, Li;Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.10 no.12 / pp.1591-1600 / 2007
  • Camera position information from a 2D face image is very important for synchronizing a virtual 3D face model to the real face at the viewpoint, and it is also important for other uses such as human-computer interfaces (a face mouse) and automatic camera control. We present an algorithm to detect the human face region and mouth based on the distinctive color features of the face and mouth in the $YC_bC_r$ color space. The algorithm constructs a mouth feature image based on $C_b$ and $C_r$ values and uses a pattern method to detect the mouth position. The geometrical relationship between the mouth position and the face side boundary is then used to determine the camera position. Experimental results demonstrate the validity of the proposed algorithm, and the correct determination rate is high enough for practical application.


Infrared Sensitive Camera Based Finger-Friendly Interactive Display System

  • Ghimire, Deepak;Kim, Joon-Cheol;Lee, Kwang-Jae;Lee, Joon-Whoan
    • International Journal of Contents / v.6 no.4 / pp.49-56 / 2010
  • In this paper we present a system that enables the user to interact with a large display even without touching the screen. With two infrared-sensitive cameras mounted at the bottom left and bottom right of the display and pointing upwards, the user's fingertip position in the selected region of interest of each camera view is found using the vertical intensity profile of the background-subtracted image. The finger positions in the left and right camera images are mapped to display screen coordinates using pre-determined matrices, which are calculated by interpolating samples of the user's finger position in images taken while pointing at known coordinates on the display. The screen is then manipulated according to the calculated position and depth of the fingertip with respect to the display. Experimental results demonstrate efficient, robust, and stable human-computer interaction.
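The per-camera fingertip localization described above, background subtraction followed by a peak in the vertical intensity profile, can be sketched as follows; the threshold and the synthetic frames are illustrative assumptions:

```python
import numpy as np

def fingertip_column(frame, background, roi_rows=slice(None), thresh=25):
    """Locate the fingertip's horizontal position in one camera view:
    subtract the background, suppress small differences, then take the
    column whose vertical intensity profile (sum over ROI rows) peaks."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    diff[diff < thresh] = 0
    profile = diff[roi_rows].sum(axis=0)   # vertical intensity profile
    return int(np.argmax(profile))

# Synthetic frames: a bright finger blob whose brightest column is 17.
bg = np.full((48, 64), 10, dtype=np.uint8)
frame = bg.copy()
frame[20:40, 15:20] = 200
frame[20:40, 17] = 220
```

The column found in each of the two camera views would then be mapped to screen coordinates through the interpolated calibration matrices mentioned in the abstract.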