• Title/Summary/Keyword: Virtual Camera

Search results: 477

Determination of Optimal Position of an Active Camera System Using Inverse Kinematics of Virtual Link Model and Manipulability Measure (가상 링크 모델의 역기구학과 조작성을 이용한 능동 카메라 시스템의 최적 위치 결정에 관한 연구)

  • Chu, Gil-Whoan;Cho, Jae-Soo;Chung, Myung-Jin
    • Proceedings of the KIEE Conference / 2003.11b / pp.239-242 / 2003
  • In this paper, we propose a method for determining the optimal camera position using the inverse kinematics of a virtual link model and a manipulability measure. We model the variable distance and viewing direction between a target object and a camera position as a virtual link. Using the inverse kinematics of the virtual link model, we find the regions that satisfy the direction and distance constraints for observing the target object. The inverse-kinematics solution simultaneously satisfies camera accessibility as well as the direction and distance constraints. We then use a manipulability measure of the active camera system to choose an optimal camera position among the multiple inverse-kinematics solutions. With the inverse kinematics of the virtual link model and the manipulability measure, the optimal camera position for observing a target object can be determined easily and rapidly.
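The manipulability measure used to rank candidate positions is classically Yoshikawa's w = sqrt(det(J J^T)). The sketch below computes it for a hypothetical planar two-link arm (illustrative link lengths, not the paper's virtual-link model):

```python
import numpy as np

def manipulability(J):
    """Yoshikawa's manipulability measure w = sqrt(det(J J^T))."""
    return np.sqrt(np.linalg.det(J @ J.T))

def jacobian(q1, q2, l1=1.0, l2=1.0):
    """Jacobian of a planar two-link arm (hypothetical link lengths)."""
    return np.array([
        [-l1 * np.sin(q1) - l2 * np.sin(q1 + q2), -l2 * np.sin(q1 + q2)],
        [ l1 * np.cos(q1) + l2 * np.cos(q1 + q2),  l2 * np.cos(q1 + q2)],
    ])

# Among candidate joint configurations, prefer the one with the largest w
# (farthest from a singularity), mirroring the paper's selection criterion.
candidates = [(0.3, 1.2), (0.3, 0.1)]
best = max(candidates, key=lambda q: manipulability(jacobian(*q)))
```

For a square Jacobian, w reduces to |det J|, so a near-straight arm (q2 close to 0) scores near zero and is avoided.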


Augmented Reality Using Projective Information (비유클리드공간 정보를 사용하는 증강현실)

  • 서용덕;홍기상
    • Journal of Broadcast Engineering / v.4 no.2 / pp.87-102 / 1999
  • We propose an algorithm for augmenting a real video sequence with views of graphics objects, without metric calibration of the video camera, by representing the motion of the video camera in projective space. We define a virtual camera, through which views of the graphics objects are generated, attached to the real camera by specifying the image locations of the world coordinate system of the virtual world. The virtual camera is decomposed into calibration and motion components in order to make full use of graphics tools. The projective motion of the real camera, recovered from image matches, transfers the virtual camera and makes it move according to the motion of the real camera. The virtual camera also follows changes in the internal parameters of the real camera. This paper presents theoretical and experimental results of applying non-metric vision to augmented reality.
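The decomposition of a camera into calibration and motion components is the standard pinhole factorization P = K[R | t]; a minimal NumPy sketch with illustrative values (not the paper's non-metric, projective-space recovery):

```python
import numpy as np

# Hypothetical calibration component K and motion component [R | t].
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
theta = np.deg2rad(10.0)  # illustrative camera yaw
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([[0.0], [0.0], [2.0]])

# 3x4 projection matrix: calibration times motion.
P = K @ np.hstack([R, t])

def project(P, X):
    """Project a 3D world point to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

u, v = project(P, np.array([0.0, 0.0, 1.0]))
```

Keeping K fixed while updating [R | t] each frame is what lets the virtual camera follow the real camera's motion; updating K as well tracks zoom changes.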


Virtual portraits from rotating selfies

  • Yongsik Lee;Jinhyuk Jang;Seungjoon Yang
    • ETRI Journal / v.45 no.2 / pp.291-303 / 2023
  • Selfies are a popular form of photography. However, due to physical constraints, the compositions of selfies are limited. We present algorithms for creating virtual portraits with interesting compositions from a set of selfies taken at the same location while the user spins around. The scene is analyzed using the multiple selfies to determine the locations of the camera, subject, and background. Then, a view from a virtual camera is synthesized. We present two use cases. First, after rearranging the distances between the camera, subject, and background, we render a virtual view from a camera with a longer focal length, simulating the changes in perspective and lens characteristics caused by the new composition and focal length. Second, a virtual panoramic view with a larger field of view is rendered, with the user's image placed in a preferred location. In our experiments, virtual portraits with a wide range of focal lengths were obtained using a device whose lens has only one focal length. The rendered portraits included compositions that would otherwise require actual lenses. The proposed algorithms enable new use cases in which selfie compositions are not limited by the camera's focal length or its distance from the subject.
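The perspective change when rendering a longer virtual focal length follows from image size being proportional to f/d: keeping the subject's size fixed while increasing f pushes the virtual camera back, which magnifies the background (the familiar compression effect). A sketch with hypothetical focal lengths and distances:

```python
def image_size(real_size, f, d):
    """Thin-lens approximation: image size is proportional to f / d."""
    return real_size * f / d

f1, f2 = 26.0, 85.0           # mm; hypothetical selfie and portrait focal lengths
d_subj1 = 0.5                 # m; arm's-length selfie distance (illustrative)
d_subj2 = d_subj1 * f2 / f1   # pull the virtual camera back to keep subject size

d_bg1 = 5.0                                # m; illustrative background distance
d_bg2 = d_bg1 + (d_subj2 - d_subj1)        # background recedes by the same shift

subj1 = image_size(1.0, f1, d_subj1)
subj2 = image_size(1.0, f2, d_subj2)
bg1 = image_size(1.0, f1, d_bg1)
bg2 = image_size(1.0, f2, d_bg2)
# subject size is preserved, background is rendered larger: compression
```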

A Study on Visual Servoing Application for Robot OLP Compensation (로봇 OLP 보상을 위한 시각 서보잉 응용에 관한 연구)

  • 김진대;신찬배;이재원
    • Journal of the Korean Society for Precision Engineering / v.21 no.4 / pp.95-102 / 2004
  • It is necessary to improve the accuracy and adaptability of intelligent robot systems to their working environments, and vision sensors have long been studied for this reason. However, camera and robot calibration is difficult to perform because three-dimensional reconstruction and many other processes are required in real use. This paper proposes image-based visual servoing to avoid the older calibration techniques and to support OLP (Off-Line Programming) path compensation. A virtual camera can be modeled from the real camera's parameters, and the virtual images it produces simplify the perception process. The initial path generated by OLP can then be compensated at the pixel level using the real and virtual images, respectively. Consequently, the proposed visually assisted OLP teaching removes the calibration and reconstruction processes from the real workspace. In virtual simulation, improved performance is observed, and the robot path error is corrected from the image differences.
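Image-based visual servoing, as named in the abstract, classically uses the control law v = -λ L⁺ e, where L is the interaction matrix of the image features and e the feature error. A minimal sketch with the standard point-feature interaction matrix (illustrative features and depth, not the paper's OLP setup):

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of one point feature at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, targets, Z, lam=0.5):
    """Classic IBVS law: v = -lambda * pinv(L) @ (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z) for x, y in features])
    e = (np.asarray(features) - np.asarray(targets)).ravel()
    return -lam * np.linalg.pinv(L) @ e

# At the goal configuration the commanded camera velocity is zero.
v = ibvs_velocity([(0.1, 0.0), (-0.1, 0.0)],
                  [(0.1, 0.0), (-0.1, 0.0)], Z=1.0)
```

Iterating this law moves the camera so the observed features converge to their target image positions, without reconstructing the 3D scene.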

WALK-THROUGH VIEW FOR FTV WITH CIRCULAR CAMERA SETUP

  • Uemori, Takeshi;Yendo, Tomohiro;Tanimoto, Masayuki;Fujii, Toshiaki
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.727-731 / 2009
  • In this paper, we propose a method to generate a free-viewpoint image from multi-viewpoint images taken by cameras arranged in a circle. We previously proposed a method to generate a free-viewpoint image based on the Ray-Space method; however, that method cannot generate a walk-through view seen from a virtual viewpoint among the objects. The method proposed here realizes the generation of such views. It first obtains the positions of the objects using the shape-from-silhouette method, and then selects the appropriate cameras that acquired the rays needed to generate the virtual image. A free-viewpoint image can be generated by collecting the rays that pass through the focal point of the virtual camera. When a requested ray is not available, it must be interpolated from neighboring rays; we therefore estimate the depth of the objects from the virtual camera and interpolate the ray information to generate the image. In experiments with virtual sequences captured every 6 degrees, we placed the virtual camera at positions of the user's choice and successfully generated images from those viewpoints.


Development and Test of the Remote Operator Visual Support System Based on Virtual Environment (가상환경기반 원격작업자 시각지원시스템 개발 및 시험)

  • Song, T.G.;Park, B.S.;Choi, K.H.;Lee, S.H.
    • Korean Journal of Computational Design and Engineering / v.13 no.6 / pp.429-439 / 2008
  • With a remote-operated manipulator system, the situation at a remote site is rendered to the operator as a visualized image. The operator can then quickly grasp the situation and control the slave manipulator by operating a master input device based on the information in the virtual image. In this study, the remote operator visual support system (ROVSS) was developed to support a remote operator's viewing so that remote tasks can be performed effectively. A visual support model based on a virtual environment was also built and used for this purpose. The framework for the system was created with the Windows API on a PC and the library of the 3D graphic simulation tool ENVISION. To evaluate the system, an operation test environment for a limited operating site was constructed using an experimental robot operation. A 3D virtual environment was designed to provide accurate information about the rotation of the robot manipulator and the location and distance of the operation tool through real-time synchronization. To measure the efficiency of the visual support, we conducted experiments with four methods: direct view, camera view, virtual view, and camera view plus virtual view. The experimental results show that the camera view plus virtual view method is about 30% more efficient than the camera view method.

In-camera VFX implementation study using short-throw projector (focused on low-cost solution)

  • Li, Penghui;Kim, Ki-Hong;Lee, David-Junesok
    • International Journal of Internet, Broadcasting and Communication / v.14 no.2 / pp.10-16 / 2022
  • As an important part of virtual production, In-camera VFX is the process of shooting actual objects against virtual three-dimensional backgrounds in real time, through computer graphics and display technology, to obtain the final film. In the In-camera VFX process, only two types of media are currently used for background imaging: LED walls and chroma-key screens. LED-wall-based In-camera VFX realizes background imaging through LED display technology; although the imaging quality is guaranteed, the high cost of an LED wall increases the cost of virtual production. In chroma-key-based In-camera VFX, the background imaging is realized by real-time keying; although the price is low, the limitations of real-time keying and lighting conditions mean the usability of the final picture is not high. Short-throw projection technology can compress the projection distance to within 1 meter while still producing a relatively large picture, which solves the problem of traditional projection technology requiring a certain space between the screen and the projector, and it is relatively cheap compared to an LED wall. Therefore, short-throw projection can be used to project backgrounds in the In-camera VFX process. This paper analyzes the principles of short-throw projection and the existing In-camera VFX solutions and, through comparison experiments, proposes a low-cost solution that uses short-throw projectors to project virtual backgrounds and realize the In-camera VFX process.
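The space argument can be made concrete with the throw-ratio relation: projection distance = throw ratio × image width. The ratios below are typical illustrative values, not measurements from the paper:

```python
def projection_distance(throw_ratio, image_width_m):
    """Distance the projector must sit from the screen, in meters."""
    return throw_ratio * image_width_m

# Illustrative ratios: ~1.5 for a conventional projector,
# ~0.25 for an ultra-short-throw model.
standard = projection_distance(1.5, 3.0)     # 4.5 m for a 3 m wide background
short_throw = projection_distance(0.25, 3.0) # 0.75 m for the same background
```

The short-throw setup keeps the projector under a meter from the screen, which is what makes it usable inside a small virtual-production stage.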

Extraction of Camera Parameters Using Projective Invariance for Virtual Studio

  • Han, Seo-Won;Lee, Joon-Whaon;Nakajima, Masayuki
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1998.06b / pp.141-146 / 1998
  • Current virtual studios use the chroma-key method, in which an image is captured and the blue portion of the image is replaced by a graphic or real image. The replacement image must be changed according to the camera motion. This paper proposes a novel method to extract camera parameters by recognizing pentagonal patterns painted on the blue screen. The extracted parameters are the position, direction, and focal length of the camera in the virtual studio. First, the pentagonal patterns are found using invariant features of the pentagon. Then, the projective transformation between the two projected images and the camera parameters are calculated from the matched points. Simulation results indicate that the camera parameters are calculated more easily than with conventional methods.
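The projective transformation between the two projected images can be estimated from matched pentagon points with the standard DLT algorithm; the sketch below uses synthetic correspondences under an illustrative homography (it is not the paper's pentagon-recognition pipeline):

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 projective transform mapping src -> dst (>= 4 matches)
    via the direct linear transform: stack two equations per match and take
    the SVD null vector."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, p):
    q = H @ np.array([p[0], p[1], 1.0])
    return (q[0] / q[2], q[1] / q[2])

# Five matched pentagon corners related by a known (illustrative) homography.
src = [(0.0, 0.0), (1.0, 0.0), (1.3, 1.0), (0.5, 1.6), (-0.3, 1.0)]
H_true = np.array([[1.2, 0.1, 3.0], [0.05, 1.1, -2.0], [1e-3, 2e-3, 1.0]])
dst = [apply_h(H_true, p) for p in src]
H = homography_dlt(src, dst)
```

With exact correspondences the five pentagon corners over-determine the eight degrees of freedom, so the true transform is recovered up to scale.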


A Study on Applying Proxemics to Camera Position in VR Animation

  • Qu, Lin;Yun, Tae-Soo
    • International Journal of Internet, Broadcasting and Communication / v.13 no.3 / pp.73-83 / 2021
  • With the development of science and technology, virtual reality (VR) has become increasingly popular and is widely used in fields such as aviation, education, medical science, culture, art, and entertainment. This technology has changed the way of human-computer interaction and the way people live and entertain themselves. In the field of animation, virtual reality likewise brings a new viewing form and an immersive experience. This paper demonstrates the production of VR animation and then discusses the camera's position in VR animation: where to place the VR camera to provide a comfortable viewing experience. Taking proxemics as its theoretical framework, the paper proposes a hypothesis about camera position. The hypothesis is then verified by a series of animation experiments that examine the correlation between camera position and proxemics theory.

VIRTUAL VIEW RENDERING USING MULTIPLE STEREO IMAGES

  • Ham, Bum-Sub;Min, Dong-Bo;Sohn, Kwang-Hoon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.233-237 / 2009
  • This paper presents a new approach that addresses the quality degradation of a synthesized view when the virtual camera moves forward. Generally, a virtual view is synthesized by interpolation using only the two neighboring views. Because the apparent size of objects increases as the virtual camera moves forward, this interpolation produces degraded, blurred views. We prevent the synthesized view from being blurred by using more cameras in a multiview camera configuration; that is, we apply the super-resolution concept, which reconstructs a high-resolution image from several low-resolution images. Data fusion is performed by geometric warping using the disparities of the multiple images, followed by a deblurring operation. Experimental results show that image quality can be further improved by reducing blur compared with the interpolation method.
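The geometric-warping step can be sketched as a disparity-scaled forward warp toward the virtual viewpoint (a toy single-channel version; the paper fuses several warped low-resolution views and then deblurs):

```python
import numpy as np

def warp_to_virtual(img, disparity, alpha):
    """Forward-warp a reference view toward a virtual view located at a
    fraction alpha of the baseline: each pixel shifts horizontally by
    alpha * disparity. Pixels warped outside the frame are dropped."""
    H, W = img.shape
    out = np.zeros_like(img)
    for y in range(H):
        for x in range(W):
            xv = x + int(round(alpha * disparity[y, x]))
            if 0 <= xv < W:
                out[y, xv] = img[y, x]
    return out
```

Warping several reference views this way yields multiple aligned samples per virtual pixel, which is what the super-resolution fusion then exploits.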
