• Title/Summary/Keyword: 3D immersive audio


Standardization of MPEG-I Immersive Audio and Related Technologies (MPEG-I Immersive Audio 표준화 및 기술 동향)

  • Jang, D.Y.;Kang, K.O.;Lee, Y.J.;Yoo, J.H.;Lee, T.J.
    • Electronics and Telecommunications Trends / v.37 no.3 / pp.52-63 / 2022
  • Immersive media, also known as spatial media, has become essential as face-to-face activities decreased during the COVID-19 pandemic. Teleconferencing, the metaverse, and digital twins have been developed with high expectations as immersive media services, and the demand for hyper-realistic media is increasing. Under these circumstances, MPEG-I Immersive Media is being standardized as a technology for navigable virtual reality, expected to launch in the first half of 2024, and the Audio Group is working to standardize the immersive audio technology. Following this trend, this article introduces trends in MPEG-I immersive audio standardization. It then describes the features of the immersive audio rendering technology, focusing on the structure and function of the RM0 base technology, which was chosen after evaluating all the technologies proposed at the January 2022 MPEG Audio meeting.

Spatial Audio Technologies for Immersive Media Services (체감형 미디어 서비스를 위한 공간음향 기술 동향)

  • Lee, Y.J.;Yoo, J.;Jang, D.;Lee, M.;Lee, T.
    • Electronics and Telecommunications Trends / v.34 no.3 / pp.13-22 / 2019
  • Although virtual reality technology may not yet offer satisfactory quality for all users, it tends to spark interest because it promises experiences one may never have in real life. The most important aspect of this indirect experience is the provision of immersive 3D audio and video that interacts naturally with every action of the user. Immersive audio faithfully reproduces an acoustic scene in a space corresponding to the position and movement of the listener; this technology is also called spatial audio. In this paper, we briefly introduce trends in spatial audio technology in terms of acquisition, analysis, and reproduction, along with the concept of the MPEG-I audio standard, which is being promoted for spatial audio services.

Visual Object Tracking Fusing CNN and Color Histogram based Tracker and Depth Estimation for Automatic Immersive Audio Mixing

  • Park, Sung-Jun;Islam, Md. Mahbubul;Baek, Joong-Hwan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.3 / pp.1121-1141 / 2020
  • We propose a robust visual object tracking algorithm that fuses a convolutional neural network (CNN) tracker, trained offline on a large number of video repositories, with a color-histogram-based tracker to track objects for mixing immersive audio. Our algorithm addresses the occlusion and large-movement problems of the CNN-based GOTURN generic object tracker. The key idea is to train a binary classifier offline on the color histogram similarity values estimated by both trackers, use it to select the appropriate tracker for the target, and update both trackers with the predicted bounding box position to continue tracking. Furthermore, a histogram similarity constraint is applied before updating the trackers to maximize tracking accuracy. Finally, we compute the depth (z) of the target object with a prominent unsupervised monocular depth estimation algorithm to obtain the 3D position needed to mix the immersive audio onto the tracked object. Our proposed algorithm demonstrates about 2% improved accuracy over the GOTURN algorithm on the VOT2014 tracking benchmark. Our tracker can also track multiple objects by applying the single-object tracker per target, although no results on an MOT benchmark are reported.
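The tracker-selection step described above can be sketched in a few lines: compute a color histogram for each tracker's predicted patch, measure similarity to the target's color model with a Bhattacharyya coefficient, and fall back from the CNN tracker when its prediction drifts. The function names, bin count, and threshold below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def color_histogram(patch, bins=16):
    """Normalized per-channel color histogram of an HxWx3 image patch."""
    hists = [np.histogram(patch[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def bhattacharyya(h1, h2):
    """Bhattacharyya coefficient of two normalized histograms (1 = identical)."""
    return float(np.sum(np.sqrt(h1 * h2)))

def select_tracker(target_hist, cnn_patch, hist_patch, threshold=0.8):
    """Pick the tracker whose predicted patch better matches the target's
    color distribution; prefer the histogram tracker when the CNN
    prediction's similarity falls below the threshold (drift)."""
    s_cnn = bhattacharyya(target_hist, color_histogram(cnn_patch))
    s_hist = bhattacharyya(target_hist, color_histogram(hist_patch))
    if s_cnn >= threshold and s_cnn >= s_hist:
        return "cnn", s_cnn
    return "hist", s_hist
```

The paper trains a binary classifier on such similarity values rather than using a fixed threshold; the threshold here only stands in for that learned decision.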

MPEG Surround Extension Technique for MPEG-H 3D Audio

  • Beack, Seungkwon;Sung, Jongmo;Seo, Jeongil;Lee, Taejin
    • ETRI Journal / v.38 no.5 / pp.829-837 / 2016
  • In this paper, we introduce extension tools for MPEG Surround, which were recently adopted as MPEG-H 3D Audio tools by the ISO/MPEG standardization group. MPEG-H 3D Audio is a next-generation technology for representing spatial audio in an immersive manner. However, a considerably large number of input signals can degrade compression performance at low bitrates. The proposed extension was designed on top of the original MPEG Surround technology, revising its limitations with a new coding structure. The proposed MPEG-H 3D Audio technologies will play a pivotal role in dramatically improving sound quality at lower bitrates.

A Study on Immersive Audio Improvement of FTV using an effective noise (유효 잡음을 활용한 FTV 입체음향 개선방안 연구)

  • Kim, Jong-Un;Cho, Hyun-Seok;Lee, Yoon-Bae;Yeo, Sung-Dae;Kim, Seong-Kweon
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.10 no.2 / pp.233-238 / 2015
  • In this paper, we propose a method that uses effective noise to improve the immersive audio effect and engagement of the free-viewpoint TV (FTV) service. On a basketball court, we monitored frequency spectra by acquiring continuous audio data of players and the referee with shotgun and wireless microphones. By analyzing these spectra, we determined which frequencies remain effective when users zoom in. Therefore, when users of an FTV service zoom in toward an object, we propose utilizing the seemingly unnecessary noise instead of removing it. This will be useful for immersive audio implementations of FTV.

A DNN-Based Personalized HRTF Estimation Method for 3D Immersive Audio

  • Son, Ji Su;Choi, Seung Ho
    • International Journal of Internet, Broadcasting and Communication / v.13 no.1 / pp.161-167 / 2021
  • This paper proposes a new personalized HRTF estimation method based on a deep neural network (DNN) model, with improved elevation reproduction using a notch filter. A previous study proposed a DNN model that estimates the magnitude of the HRTF from anthropometric measurements [1]. However, because that method uses a zero phase without estimating the actual phase, it causes internalization (inside-the-head localization) when listening to spatial sound. We devise a method to estimate both the magnitude and phase of the HRTF with a DNN model. The personalized HRIR is estimated from anthropometric measurements, including detailed data on the head, torso, shoulders, and ears, as inputs to the DNN model. The estimated HRIR is then filtered with an appropriate notch filter to improve elevation reproduction. To evaluate performance, both objective and subjective evaluations are conducted. For the objective evaluation, the root mean square error (RMSE) and the log-spectral distance (LSD) between the reference HRTF and the estimated HRTF are measured. For the subjective evaluation, MUSHRA and preference tests are conducted. As a result, the proposed method lets listeners experience more immersive audio than the previous methods.
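The two objective metrics named above have standard closed forms. A minimal sketch, assuming the HRTFs are given as NumPy arrays of magnitude spectra (the epsilon guard is an addition for numerical safety, not from the paper):

```python
import numpy as np

def log_spectral_distance(h_ref, h_est, eps=1e-12):
    """Log-spectral distance (dB) between reference and estimated HRTF
    magnitude spectra, RMS-averaged over frequency bins."""
    log_ratio = 20.0 * np.log10((np.abs(h_ref) + eps) / (np.abs(h_est) + eps))
    return float(np.sqrt(np.mean(log_ratio ** 2)))

def rmse(h_ref, h_est):
    """Root mean square error between two magnitude spectra."""
    return float(np.sqrt(np.mean((np.abs(h_ref) - np.abs(h_est)) ** 2)))
```

For example, an estimate that is uniformly 10x the reference magnitude gives an LSD of 20 dB, since every bin contributes a 20 dB log ratio.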

A Full Body Gumdo Game with an Intelligent Cyber Fencer using Multi-modal(3D Vision and Speech) Interface (멀티모달 인터페이스(3차원 시각과 음성 )를 이용한 지능적 가상검객과의 전신 검도게임)

  • Yoon, Jungwon;Kim, Sehwan;Ryu, Jeha;Woo, Woontack
    • Journal of KIISE: Computing Practices and Letters / v.9 no.4 / pp.420-430 / 2003
  • This paper presents an immersive multimodal Gumdo simulation game that allows a user to experience whole-body interaction with an intelligent cyber fencer. The proposed system consists of three modules: (i) a nondistracting multimodal interface with 3D vision and speech, (ii) an intelligent cyber fencer, and (iii) immersive feedback through a big screen and sound. First, the multimodal interface with 3D vision and speech allows a user to move around and shout without being distracted. Second, the intelligent cyber fencer provides intelligent interactions through perception and reaction modules created by analyzing real Gumdo games. Finally, immersive audio-visual feedback with a big screen and sound effects helps the user experience immersive interaction. The proposed system thus provides the user with an immersive Gumdo experience involving whole-body movement, and it can be applied to areas such as education, exercise, and art performance.

Real-time 3D Audio Downmixing System based on Sound Rendering for the Immersive Sound of Mobile Virtual Reality Applications

  • Hong, Dukki;Kwon, Hyuck-Joo;Kim, Cheong Ghil;Park, Woo-Chan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.12 / pp.5936-5954 / 2018
  • Eight of the ten largest technology companies in the world have been involved in some way with the coming mobile VR revolution since Facebook acquired Oculus. This trend has allowed mobile VR technology to achieve remarkable growth in both academia and industry. Reproducing acoustic cues is therefore increasingly important for a more realistic user experience, because auditory cues can enhance perception of a complicated surrounding environment beyond what the visual system alone provides in VR. This paper presents a hardware-based audio downmixing system for auralization, a stage of the sound rendering pipeline that can reproduce reality-like sound but requires high computation costs. The proposed system is verified on an FPGA platform, with special focus on hardware architectural designs for low power and real-time operation. The results show that the proposed system on an FPGA can downmix a maximum of 5 sources at a real-time rate (52 FPS) with a low power consumption of 382 mW. Furthermore, the 3D sound generated by the proposed system achieved satisfactory sound quality in a user evaluation.
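As an illustration of what a downmixing stage computes (a software sketch, not the paper's FPGA pipeline), the following folds several mono sources at 3D positions into one stereo signal using 1/distance attenuation and constant-power panning; the function names and gain laws are assumptions for the sketch:

```python
import numpy as np

def downmix(sources, positions, listener=np.zeros(3)):
    """Naive stereo downmix of mono sources placed in 3D: each source is
    attenuated by 1/distance and panned by its left-right offset using a
    constant-power pan law, then summed into a 2-channel mix."""
    n = len(sources[0])
    mix = np.zeros((2, n))
    for src, pos in zip(sources, positions):
        d = np.linalg.norm(pos - listener)
        gain = 1.0 / max(d, 1.0)                  # distance attenuation
        pan = np.clip((pos - listener)[0] / max(d, 1e-9), -1.0, 1.0)
        theta = (pan + 1.0) * np.pi / 4.0         # map [-1, 1] -> [0, pi/2]
        mix[0] += np.cos(theta) * gain * src      # left channel
        mix[1] += np.sin(theta) * gain * src      # right channel
    return mix
```

A source directly ahead lands equally in both channels; a source hard right contributes only to the right channel. The hardware system additionally meets a frame-rate and power budget, which this sketch does not model.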

MPEG-H 3D Audio Decoder Structure and Complexity Analysis (MPEG-H 3D 오디오 표준 복호화기 구조 및 연산량 분석)

  • Moon, Hyeongi;Park, Young-cheol;Lee, Yong Ju;Whang, Young-soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.42 no.2 / pp.432-443 / 2017
  • The primary goal of the MPEG-H 3D Audio standard is to provide immersive audio environments for high-resolution broadcasting services such as UHDTV. The standard incorporates a wide range of technologies: encoding/decoding for multi-channel/object/scene-based signals, rendering for providing 3D audio in various playback environments, and post-processing. The reference software decoder of the standard combines several modules and can operate in various modes, but because each module is an independent executable run sequentially, real-time decoding is impossible. In this paper, we build DLL libraries of the standard's core decoder, format converter, object renderer, and binaural renderer, and integrate them to enable frame-based decoding. In addition, by measuring the computational complexity of each mode of the MPEG-H 3D Audio decoder, this paper provides a reference for selecting the appropriate decoding mode for various hardware platforms. The measurements show that the low-complexity profiles included in the Korean broadcasting standard have a computational complexity of 2.8 to 12.4 times that of the QMF synthesis operation when rendering to channel signals, and 4.1 to 15.3 times when rendering to binaural signals.
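The per-module measurement normalized to QMF synthesis can be mimicked with a simple timing harness; the stage names and workloads below are placeholders for illustration, not the standard's actual decoder code:

```python
import time

def best_time(fn, runs=5):
    """Minimum wall-clock time of fn over several runs (reduces timer noise)."""
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        times.append(time.perf_counter() - t0)
    return min(times)

def relative_complexity(modules, baseline_fn, runs=5):
    """Cost of each decoder stage expressed as a multiple of a baseline
    operation (standing in for QMF synthesis). 'modules' maps stage names
    to zero-argument callables that perform one frame's worth of work."""
    base = best_time(baseline_fn, runs)
    return {name: best_time(fn, runs) / base for name, fn in modules.items()}
```

Usage would look like `relative_complexity({"format_converter": decode_frame}, qmf_synthesis)`, where both callables are hypothetical hooks into the decoder; production complexity figures are normally derived from operation counts rather than wall-clock time.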

Yoga of Consilience through Immersive Sound Experience (실감음향 체험을 통한 통섭의 요가)

  • Hyon, Jinoh
    • Journal of Broadcast Engineering / v.26 no.5 / pp.643-651 / 2021
  • Most people acquire information visually. The screens of computers, smartphones, and other devices constantly stimulate people's eyes, increasing fatigue. Amid this social phenomenon, the realistic and rich sound of the 21st century's state-of-the-art sound systems can affect people's bodies and minds in various ways. Through sound, human beings are given space to calm and observe themselves. The purpose of this paper is to introduce immersive yoga training based on 3D sound, conducted jointly by ALgruppe & Rory's PranaLab, and to promote understanding of immersive audio systems. People who have experienced immersive yoga not only enjoy the effect of the sound but also receive a powerful energy that gives them a sense of inner self-awareness. This responds to the multidisciplinary exchange demanded by the modern knowledge society and, at the same time, points to the possibility of new cultural content.