• Title/Summary/Keyword: Action cameras

Search results: 38 items, processing time: 0.023 seconds

A Study on the Production Efficiency of Movie Filming Environment Using 360° VR

  • 이영숙;김정환
    • Journal of Korea Multimedia Society
    • /
    • Vol. 19, No. 12
    • /
    • pp.2036-2043
    • /
    • 2016
  • 360° Virtual Reality (VR) live-action movies are filmed by attaching multiple cameras to a rig to shoot images omni-directionally. In particular, for a live-action film that requires a variety of scenes, the director of photography and the staff usually have to operate the rigged cameras directly all around the scene and then edit the footage during post-production, so the entire process can incur much time and high cost. However, it would be possible to acquire high-quality omni-directional images with fewer staff if the camera rig(s) could be controlled remotely to allow more flexible camera movement. Thus, a 360° VR filming system with a remote-controlled camera rig is proposed in this study. With this system, movie producers will be able to make movies that provide greater immersion.

Traffic Safety Recommendation Using Combined Accident and Speeding Data

  • Onuean, Athita;Lee, Daesung;Jung, Hanmin
    • Journal of information and communication convergence engineering
    • /
    • Vol. 18, No. 1
    • /
    • pp.49-54
    • /
    • 2020
  • Speed enforcement is one of the major challenges in traffic safety. The increasing number of accidents and fatalities has led governments to respond by implementing an intelligent control system. For example, the Korean government implemented a speed camera system for maintaining road safety. However, many drivers still engage in speeding behavior in blackspot areas where speed cameras are not provided. Therefore, we propose a methodology to analyze the combined accident and speeding data to offer recommendations to maintain traffic safety. We investigate three factors: "section," "existing speed camera location," and "over speeding data." To interpret the results, we used the QGIS tool for visualizing the spatial distribution of the incidents. Finally, we provide four recommendations based on the three aforementioned factors: "investigate with experts," "no action," "install fixed speed cameras," and "deploy mobile speed cameras."
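The abstract names the three input factors and the four possible recommendations, but not the exact mapping between them. A hypothetical decision rule, with illustrative thresholds only (the paper's actual criteria may differ), might look like this:

```python
def recommend(has_camera, accidents, speeding_events):
    """Hypothetical decision rule for one road section, combining the
    three factors from the abstract. The four outcomes are the paper's;
    the conditions and cutoffs here are assumptions for illustration."""
    if not has_camera and accidents > 0 and speeding_events > 0:
        return "install fixed speed cameras"
    if not has_camera and speeding_events > 0:
        return "deploy mobile speed cameras"
    if has_camera and accidents > 0:
        # A camera is already present yet accidents persist.
        return "investigate with experts"
    return "no action"

print(recommend(has_camera=False, accidents=3, speeding_events=120))
print(recommend(has_camera=True, accidents=0, speeding_events=5))
```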

Human Action Recognition Using Deep Data: A Fine-Grained Study

  • Rao, D. Surendra;Potturu, Sudharsana Rao;Bhagyaraju, V
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 22, No. 6
    • /
    • pp.97-108
    • /
    • 2022
  • The video-assisted human action recognition [1] field is one of the most active in computer vision research. Since the depth data [2] obtained by Kinect cameras offers more benefits than traditional RGB data, research on human action detection using the Kinect camera has recently increased. In this article, we conducted a systematic study of strategies for recognizing human activity based on depth data. All methods are grouped into depth-map tactics and skeleton tactics, and a comparison with some of the more traditional strategies is also covered. We then examined the specifics of different depth behavior databases and drew a straightforward distinction between them. Finally, we discuss the advantages and disadvantages of depth- and skeleton-based techniques.

A study for finding human non-habitual behavior in daily life

  • Shimada, Yasuyuki;Matsumoto, Tsutomu;Kawaji, Shigeyasu
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2003 ICCAS, Institute of Control, Robotics and Systems
    • /
    • pp.491-496
    • /
    • 2003
  • This paper proposes a model of human behavior and a method of finding irregular human behavior. First, the human behavior model is built by paying attention to habitual human behavior at home. In general, it is difficult to obtain information on an individual's life pattern because of the high cost of installing sensors, such as cameras, to observe human actions. Therefore, we capture the turning on and off of consumer electronic appliances (television, room lights, video recorder, and so on) as a proxy for actual human behavior. Noting that there are relations between these on/off events and our actions, we propose how to construct human behavior knowledge by analyzing behavior observed over habitual daily life. An algorithm to identify irregular behavior that differs from habitual behavior is also described. Finally, the significance of the proposed method is shown by experimental results.
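The idea of profiling habitual on/off events and flagging deviations can be sketched minimally as below; the event encoding and the scoring rule are assumptions for illustration, not the paper's algorithm:

```python
from collections import Counter

def build_habit_profile(event_log):
    """event_log: list of (hour, appliance, state) tuples observed over
    many days. Returns how often each event occurs at each hour."""
    return Counter(event_log)

def irregularity_score(profile, days_observed, event):
    """Fraction of observed days on which this on/off event did NOT
    occur at this hour; a high score marks the event as unusual."""
    return 1.0 - profile[event] / days_observed

# Toy log: over 10 days the TV goes on at 19:00 every day,
# the room light at 22:00 on 9 of the days.
log = [(19, "tv", "on")] * 10 + [(22, "light", "on")] * 9
profile = build_habit_profile(log)

print(irregularity_score(profile, 10, (19, "tv", "on")))  # 0.0 -> habitual
print(irregularity_score(profile, 10, (3, "tv", "on")))   # 1.0 -> irregular
```

A real system would smooth over neighboring hours and weight by recency, but the structure — a frequency profile learned from observation, queried per event — matches the approach the abstract describes.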


A Distributed Real-time 3D Pose Estimation Framework based on Asynchronous Multiviews

  • Hwang, Taemin;Kim, Jieun;Kim, Minjoon
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 17, No. 2
    • /
    • pp.559-575
    • /
    • 2023
  • 3D human pose estimation is widely applied in various fields, including action recognition, sports analysis, and human-computer interaction, and has achieved significant progress with the introduction of convolutional neural networks (CNNs). Recently, several studies have proposed multiview approaches to avoid the occlusions that affect single-view approaches. However, as the number of cameras increases, a 3D pose estimation system relying on a CNN may lack computational resources. In addition, when a single host system uses multiple cameras, the data transmission rate becomes inadequate owing to bandwidth limitations. To address this problem, we propose a distributed real-time 3D pose estimation framework based on asynchronous multiple cameras. The proposed framework comprises a central server and multiple edge devices. Each edge device estimates a 2D human pose from its view and sends it to the central server. Subsequently, the central server synchronizes the received 2D human pose data based on their timestamps. Finally, the central server reconstructs a 3D human pose using geometric triangulation. We demonstrate that the proposed framework increases the percentage of detected joints and successfully estimates 3D human poses in real time.
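The final reconstruction step the abstract describes — triangulating a 3D joint from synchronized 2D observations — can be sketched with the standard linear (DLT) method; the projection matrices and point below are toy values, not the paper's calibration:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Triangulate a 3D point from two 2D observations via the linear
    (DLT) method. P1, P2 are 3x4 camera projection matrices; x1, x2 are
    (u, v) image coordinates of the same joint in each view."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Two toy cameras: identity pose, and a 1-unit translation along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])

# Project the point into both views, then recover it.
x1 = P1 @ np.append(X_true, 1.0)
x2 = P2 @ np.append(X_true, 1.0)
x1, x2 = x1[:2] / x1[2], x2[:2] / x2[2]

X_est = triangulate_point(P1, P2, x1, x2)
print(np.round(X_est, 3))  # recovers X_true
```

With more than two views, the same system simply gains two rows per camera; the paper's timestamp synchronization ensures the stacked 2D observations correspond to the same instant.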

Developing a First-person Horror Game Using Unreal Engine and an Action Camera Perspective

  • 김남영;주영민;허원회
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • Vol. 24, No. 1
    • /
    • pp.75-81
    • /
    • 2024
  • This paper focuses on developing a first-person 3D game that delivers extreme horror to the player through realistic camera direction exploiting the characteristics of an action camera. As new camera-direction techniques, it introduces perspective distortion using a wide-angle lens and camera shake while moving, aiming to provide higher immersion than existing games. The game's theme is a horror room escape, and the player starts in possession of a firearm. To overcome the concern that firearm use would make the game too easy, pressures such as monster pursuit and a dwindling magazine count force the player to ration firearm use. The significance of this paper lies in developing a new kind of 3D game that maximizes the horror effect on players through realistic presentation.

A Study on Comparison between 3D Computer Graphics Cameras and Actual Cameras - Focusing on Maya, Softimage 3D, and XSI Software versus Actual Still and Motion Cameras

  • 강종진
    • Cartoon and Animation Studies
    • /
    • Serial No. 6
    • /
    • pp.193-220
    • /
    • 2002
  • The world created by computers, with its great expanse and its complex and varied forms of expression, provides not simply a place for communication but a new civilization and a new creative world. Within it, 3D computer graphics, 3D animation, and virtual reality technology were elevated into a new culture and a new genre of art by joining graphic design with computer engineering. In this study, I diagnose the possibilities, limits, and differences of expression in virtual-reality computer graphics animation by comparing the camera actions and angles of actual still and film cameras with the virtual cameras of 3D computer graphics software: Maya, XSI, and Softimage 3D.


Video System for Real-time Criminal Activity Detection

  • 신광성;신성윤
    • Korea Institute of Information and Communication Engineering: Conference Proceedings
    • /
    • 2021 Spring Conference, Korea Institute of Information and Communication Engineering
    • /
    • pp.357-358
    • /
    • 2021
  • Although many people monitor scenes through multiple surveillance cameras, it is hard to guarantee that immediate action can be taken when a crime occurs. Therefore, a "criminal activity detection system" is needed that analyzes video in real time from the surveillance cameras installed in elevators, raises an immediate crime alert, and effectively traces the crime scene and time. In this paper, we conducted a study to detect violent scenes occurring in elevators using scene change detection. For effective detection, we applied a χ²-color histogram, combining color histograms with the χ² comparison measure.
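The χ²-color-histogram comparison the abstract mentions can be sketched as follows; the bin count, threshold, and synthetic frames are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def chi2_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two normalized histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def color_histogram(frame, bins=8):
    """Per-channel color histogram of an HxWx3 uint8 frame,
    concatenated and normalized to sum to 1."""
    hists = [np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def detect_scene_changes(frames, threshold=0.2):
    """Return indices i where frame i differs sharply from frame i-1."""
    hists = [color_histogram(f) for f in frames]
    return [i for i in range(1, len(hists))
            if chi2_distance(hists[i - 1], hists[i]) > threshold]

# Synthetic clip: 5 dark frames, then an abrupt cut to bright frames.
rng = np.random.default_rng(0)
dark = [rng.integers(0, 60, (32, 32, 3), dtype=np.uint8) for _ in range(5)]
bright = [rng.integers(180, 255, (32, 32, 3), dtype=np.uint8) for _ in range(5)]
print(detect_scene_changes(dark + bright))  # the cut is at index 5
```

In the paper's setting the "scene change" of interest is the abrupt frame-to-frame variation caused by violent motion in the elevator, but the histogram-and-threshold machinery is the same.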


Real-time Abnormal Behavior Analysis System Based on Pedestrian Detection and Tracking

  • 김도훈;박상현
    • Korea Institute of Information and Communication Engineering: Conference Proceedings
    • /
    • 2021 Spring Conference, Korea Institute of Information and Communication Engineering
    • /
    • pp.25-27
    • /
    • 2021
  • Recently, with advances in deep learning, computer-vision-based AI techniques have been studied for analyzing the abnormal behavior of objects in video obtained from CCTV cameras. Surveillance cameras are frequently installed in hazardous or security areas for crime prevention and perimeter monitoring. For this reason, companies are conducting research to recognize key situations such as intrusion, loitering, falls, and assault in surveillance-camera environments. In this paper, we propose a real-time abnormal behavior analysis algorithm using object detection and tracking.


Egocentric Vision for Human Activity Recognition Using Deep Learning

  • Malika Douache;Badra Nawal Benmoussat
    • Journal of Information Processing Systems
    • /
    • Vol. 19, No. 6
    • /
    • pp.730-744
    • /
    • 2023
  • The topic of this paper is the recognition of human activities using egocentric vision, in particular video captured by body-worn cameras, which could be helpful for video surveillance, automatic search, and video indexing. It could likewise help revolutionize and improve the lives of elderly and frail persons, for instance through an external device, similar to a robot, acting as a personal assistant: the inferred information is used both online to assist the person and offline to support the personal assistant. Human activity recognition nevertheless remains problematic because of the large variations in how actions are executed. Being robust against these factors of variability, the method proposed in this paper aims at efficient and simple recognition from egocentric camera data alone, using a convolutional neural network and deep learning. In terms of accuracy improvement, simulation results outperform the current state of the art by a significant margin: 61% when using egocentric camera data only, more than 44% when using egocentric camera data together with several stationary cameras, and more than 12% when using both inertial measurement unit (IMU) and egocentric camera data.