A Study on "A Midsummer Night's Palace" Using VR Sound Engineering Technology

  • Received : 2020.11.19
  • Accepted : 2020.12.17
  • Published : 2020.12.28

Abstract

VR (Virtual Reality) contents make the audience perceive a virtual space as real through a virtual Z axis, which exploits the binocular parallax of the audience to create a space that 2D could not create. This visual change has created a need for corresponding technical changes in the sound and sound sources inserted into VR contents. However, studies on increasing immersion in VR contents are still concentrated on the scientific and visual fields. This is because composing and producing VR sound requires professional views in two areas: sound-based engineering and computer-based interactive sound engineering. Sound-based engineering directs the sound effects, script sound, and background music according to the storyboard organized by the director, so it is difficult to reflect changes in user interaction or in time and space. However, it has the advantage that the sound effects, script sound, and background music are produced in one track and do not have to go through a coding phase. Computer-based interactive sound engineering, on the other hand, produces the sound effects, script sound, and background music in separate files. It can increase immersion by reflecting user interaction and time and space, but it can also suffer from noise canceling and sound collisions. Therefore, in this study, the following methods were devised and used to produce the sound of the VR content "A Midsummer Night's Palace" so as to take advantage of each sound production technology. First, the storyboard is analyzed according to the user's interaction, identifying the sound effects, script sound, and background music required for each interaction. Second, the sounds are classified and analyzed as 'simultaneous time sound' and 'individual time sound'. Third, interaction coding is carried out for the sound effects, script sound, and background music produced in the simultaneous time sound and individual time sound categories. Finally, the content is completed by applying the sound to the video. Through this process, sound quality inhibitors such as noise canceling can be removed while producing sound that fits user interaction and time and space.

1. Introduction

Recently, various contents using VR (Virtual Reality) technology have been appearing in the music industry. Liam Payne, a member of the British boy band One Direction, is planning to hold a VR concert in collaboration with the music technology company Melody VR [1]. The head of KT's New Media Project predicts that by 2022 the global VR market will be worth 11.2 trillion won, about 12 times its size in the previous year [2]. The reason VR technology is more popular in the cultural content market than other technologies seems to be its ability to achieve realistic effects despite being virtual. Although researchers differ on the concept of VR technology, 'presence' and 'immersion' appear in common as essential requirements of a virtual reality system.

'Presence' is an essential requirement of a virtual reality system and is translated as either 'the sense of reality' or 'the sense of presence'. Translated as the sense of reality, it is a technical perspective that focuses on the feeling that the virtual space is real [3]. Translated as the sense of presence, by contrast, it means not experiencing the environment as mediated, but feeling the environment embodied through VR technology as if it were physically sensed; the focus is on being directly inside the environment and perceiving it with the senses. Although there is a difference between the technical and the receptive perspective, in both cases the space created through VR technology is felt as reality. Immersion is also an important essential requirement of a virtual reality system. VR technology provides a realistic feeling only when there is a sense of immersion [4]. In other words, immersion is an important factor in strengthening the user's feeling that the senses are working as they do in reality.

'Presence' and 'immersion', which make the virtual feel as if it were reality, derive from VR technology that constructs a space using a virtual Z-axis. Through VR technology that creates a virtual Z-axis in the positive direction toward the audience, the audience feels a 360° sense of space. In other words, the virtual Z-axis uses the viewer's binocular parallax to create a space that existing 2D could not realize, and the viewer perceives the virtual space as if it were reality [5]. Accordingly, the sound and sound sources inserted into VR contents are also expected to need technical changes, because the sound production technology used in existing 2D contents inevitably hinders the sense of reality and immersion of VR contents. In other words, it was confirmed that the methodology for the sound domain needs to be made concrete in step with the technology that implements virtual reality as if it were real.

However, studies on creating the immersion and reality of VR contents are still concentrated on the scientific-technological and visual fields. This is because composing and producing VR sound requires expert views in two fields: sound engineering and computer-based interactive sound engineering. In fact, the spatial-sound audio plug-in used by companies such as Facebook and YouTube encodes sound files in the binaural method, a two-channel surround format. Through this, Facebook users can hear sounds suited to their environment through earphones, but these sounds are rendered over a fixed time without considering the user's interaction. This limitation exists because experts in the two fields, sound engineering for the existing music industry and computer-based interactive sound engineering, develop VR sound separately in each field [6]. Computer-based interactive engineering codes and produces all the sounds that occur in VR contents according to user interaction and visual effects, rather than as one audio file. This method has the advantage of maximizing the immersion of VR users because each sound is coded individually according to the user's interaction, but noise canceling [7] is its representative problem. In other words, when VR contents are created by coding multiple sound files, noise canceling inevitably occurs when sound files created by the conventional production method are output simultaneously. For this reason, many VR animation contents prefer sound produced with sound-based engineering in order to prevent noise canceling.

On the other hand, sound-based engineering also has its limitations. It has the advantage of avoiding noise canceling and of creating effective sound on the basis of data accumulated in sound engineering, but the sound is rendered over a fixed time without considering the user's interaction. Since the time of VR content changes in real time according to user interaction while the sound is output within a fixed time, the sound does not change with the user's interaction, which inevitably reduces the user's immersion. Therefore, this study analyzes the musical characteristics, advantages, and disadvantages of VR contents produced with computer-based engineering sound and of VR contents produced with sound-based engineering sound. In addition, through the VR content “A Midsummer Night's Palace”, in which the researcher participated, I propose a musical directing methodology that considers both computer-based engineering sound and sound-based engineering sound. Finally, I examine the strengths and limitations of this sound production methodology.

2. Theoretical Background

2.1 Sound-based engineering

The sound-based engineering method proceeds in a total of three stages and differs little from the sound production method used in the existing music industry. In detail, different engineers may use different techniques depending on their preferred mixing method, but viewed broadly it proceeds as shown in Figure 1.

Figure 1. Sound-based engineering method

The reason it proceeds similarly to the existing sound production method is that the storyboard is also produced in a similar way. Unlike existing content that delivers director-centered storytelling to consumers, VR content must draw the gaze of the consumer, who has become the subject of the gaze, in the direction intended by the director [8]. However, the storyboard that has been the basis of video production was conceived to visualize only some or the most important parts of the work before filming [9]. Since video production still proceeds with the existing storyboard method, sound production naturally follows the existing method as well. For this reason, such VR contents do not exploit the advantage of free user interaction, and the story proceeds according to the composition as produced. First, sound effects, dialogue, and music are directed according to the storyboard that was drawn up first. Since the viewpoint progresses according to the intention of the director who composed the storyboard, the user's interactions or changes in time are not reflected. Second, sound effects, dialogue, and music are produced in one track in the same way as in the existing sound production method; this has the advantage of not having to go through a coding step. Finally, the content is completed by applying the sound, composed as one track, to the video.
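As a rough illustration of the second step, the following is a minimal Python sketch that mixes separately recorded effect, dialogue, and music stems into a single track. The stems here are synthetic placeholders, and in practice this step would be performed in a DAW rather than in code.

```python
import numpy as np

SAMPLE_RATE = 48000  # samples per second

def mix_to_single_track(stems):
    """Mix several mono stems (effects, dialogue, music) into one track,
    padding shorter stems with silence and normalizing to avoid clipping."""
    length = max(len(s) for s in stems)
    mix = np.zeros(length)
    for stem in stems:
        mix[: len(stem)] += stem
    peak = np.max(np.abs(mix))
    return mix / peak if peak > 1.0 else mix

# Hypothetical 3-second placeholder stems standing in for the real recordings.
t = np.linspace(0.0, 3.0, SAMPLE_RATE * 3, endpoint=False)
dialogue = 0.3 * np.sin(2 * np.pi * 220 * t)
music = 0.3 * np.sin(2 * np.pi * 440 * t)
effect = 0.3 * np.sin(2 * np.pi * 880 * t[:SAMPLE_RATE])  # one-second effect

single_track = mix_to_single_track([dialogue, music, effect])
```

Because everything is rendered into one fixed track, the result cannot react to the user's interaction, which is exactly the limitation discussed above.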

2.2 Computer-based engineering

The computer-based interactive sound engineering method proceeds in a total of four steps. This method differs from the sound production method used in the existing music industry. There are parts similar to the existing sound-based engineering method, but its biggest feature is that sound effects, dialogue, and music are produced in separate files. The detailed method is shown in Figure 2.

Figure 2. Computer-based interactive sound engineering method

In order to use the computer-based interactive sound engineering method, the storyboard must differ from the conventional one. According to one study, the storyboard of VR content should include components such as 'main camera angle, screen scale, scene change, actor's direction of movement, background and set composition, dialogue, music/sound, special effects, and lighting' [10]. In this way, the computer-based interactive sound engineering method produces the video from a storyboard that takes the audience's interaction into account. As a result, VR contents can increase the sense of immersion by exploiting the advantage of free user interaction, but the directing and sound production methods inevitably change in order to draw attention as the director intended.

First, it directs sound effects, dialogues, and music according to the storyboard considering the audience's interaction. It should be directed by considering the user's interaction and the composition of external and internal sounds. Secondly, unlike existing sound production methods, sound effects, dialogue, and music are produced in different files. In the third step, the different files produced in this way are coded by the programmer according to each interaction and story progression based on the storyboard. Finally, the content is completed by rendering the coded file.
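As a rough illustration of the second and third steps, the sketch below binds separately produced sound files to interaction events rather than to a fixed timeline. It is only a minimal Python sketch: the event names and file paths are hypothetical, and a real project would call the audio API of the game engine in which the content is built.

```python
from dataclasses import dataclass

@dataclass
class SoundCue:
    path: str        # file produced separately (effect, dialogue, or music)
    loop: bool = False

# Each interaction event triggers its own file at runtime (hypothetical names).
EVENT_CUES = {
    "user_waves_fan": SoundCue("sfx/wind_gust.wav"),
    "monkey_greets": SoundCue("dialogue/monkey_intro.wav"),
    "fire_spreads": SoundCue("music/urgent_theme.wav", loop=True),
}

def on_interaction(event: str) -> None:
    """Look up the cue bound to an interaction event and hand it to playback."""
    cue = EVENT_CUES.get(event)
    if cue is None:
        return
    # Placeholder for the engine call that actually plays the file.
    print(f"play {cue.path} (loop={cue.loop})")

on_interaction("user_waves_fan")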

3. Proposal of a Timeline-Based Sound Engineering Methodology Through VR Content “A Midsummer Night's Palace”

“A Midsummer Night's Palace” is a work that won an excellence award at the 2019 VRound hosted by the Korea Creative Content Agency. The plot of this VR content is about five traditional characters from Korean mythology, Jujak, Baekho, Hyunmu, Blue Dragon, and Haitai, who work together to rescue Gyeongbokgung Palace from a fire. An important feature of this content is that it leads the user to follow the storyline without the heavy interactive elements found in games. Because the content flows in narrative form, its interaction is relatively simpler than the interaction elements that appear in VR games.

3.1 Timeline-based sound engineering methodology

As mentioned earlier, two methodologies have been used to produce the sound of VR contents: “sound-based engineering” and “computer-based interactive sound engineering”. Sound-based engineering has the advantage of preventing noise canceling by drawing on the various technologies accumulated in conventional sound production. Computer-based interactive sound engineering has the advantage of being able to produce sound in consideration of user interaction and of external and internal screens.

Figure 3. Proposed sound directing methodology

Therefore, in this study, the following methods were devised and used to take advantage of each sound production technology. First, the storyboard is analyzed according to the user's interaction, identifying the sound effects, dialogue, and music required for each interaction. Second, the analyzed sounds are classified into 'simultaneous time sound' and 'individual time sound', which can be seen as the core of this methodology. Here, 'simultaneous time sound' refers to sound in which dialogue, background music, and multiple sound effects must be output in harmony at the same time; 'individual time sound' refers to sound in which only one of dialogue, background music, or sound effects is output, depending on the time slot in question.

To summarize, the sound required for the content is classified into 'simultaneous time sound' and 'individual time sound', and then the sound effects, dialogue, and music required for each category are analyzed. After analysis, the sound effects, script sound, and background music are produced according to each category. Third, interaction coding is carried out for the sound effects, dialogue, and music produced in the simultaneous time sound and individual time sound categories. Finally, the produced sound is applied to the video to complete the content. By doing this, the noise canceling phenomenon that occurs when only 'computer-based interactive sound engineering' technology is used can be reduced. In addition, since coding is performed according to the user's interaction, the user's sense of immersion can be increased by producing sound that takes the interaction into account, unlike when only 'sound-based engineering' technology is used. 'Simultaneous time sound' and 'individual time sound' are the core concepts of this sound production methodology; they, along with 'sound effects', 'script sound', and 'background music', are described in more detail below.
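As an illustration of the classification step, the following Python sketch groups storyboard cues whose time ranges overlap as 'simultaneous time sound' and the rest as 'individual time sound'. The cue names and times are hypothetical examples, not entries from the actual storyboard.

```python
from dataclasses import dataclass

@dataclass
class Cue:
    name: str
    category: str  # 'effect', 'dialogue', or 'music'
    start: float   # seconds on the storyboard timeline
    end: float

def classify(cues):
    """Split cues into (simultaneous, individual) by timeline overlap."""
    simultaneous, individual = [], []
    for cue in cues:
        overlaps = any(
            other is not cue and cue.start < other.end and other.start < cue.end
            for other in cues
        )
        (simultaneous if overlaps else individual).append(cue)
    return simultaneous, individual

cues = [
    Cue("fan_wind", "effect", 10.0, 14.0),
    Cue("urgent_theme", "music", 8.0, 30.0),
    Cue("monkey_request", "dialogue", 11.0, 13.0),
    Cue("stone_door", "effect", 45.0, 47.0),  # plays alone
]
simultaneous, individual = classify(cues)
```

Cues that end up in the simultaneous group are then candidates for the frequency allocation described in the next subsection, while individual cues can be produced with conventional sound-based engineering.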

3.2 Simultaneous sound

Simultaneous time sound refers to sound in which various sound effects, music, and dialogue must be output harmoniously at the same time. In “A Midsummer Night's Palace”, simultaneous time sound appears in various scenes, such as the scene in which the user extinguishes a fire in Gyeongbokgung Palace and the scene in which a monkey character appears, enters the well, and asks the user to travel to Gyeongbokgung Palace. In more detail, since the fire is extinguished with a fan, the sound effect of blowing wind, the urgent background music, and the dialogue of the monkey character appear at the same time.

If simultaneous time sound is produced without sound engineering technology, the individual sound effects, music, and dialogue may collide and noise canceling may occur. Noise canceling occurs because each sound, such as the sound effects, music, and dialogue, occupies its own frequencies. That is, sound producers should consider each frequency and allocate frequencies so that the sound effects, script sound, and background music do not collide with each other. Of course, the frequency range can also be set arbitrarily when using a computer-based interactive sound engineering method. However, if the frequency range is imposed on a finished song, the instruments or specific sounds selected at the production stage may not be output as the sound producer intended.

In that case, the composer cannot convey his or her intentions through the sound, and the user hears the sound with a strong sense of incongruity. Therefore, in the sound production process of “A Midsummer Night's Palace”, judging that noise canceling could not be prevented in the coding process, the frequencies of the music, sound effects, and script sound that appear in the same time zone were clearly distributed. In other words, to prevent frequency collisions between the music, dialogue, and sound effects during coded output, frequencies were allocated at the stage of sound production and composition. When sound was produced in this way, it was judged that the probability of the noise canceling phenomenon on coded output could be reduced significantly compared with existing computer-engineered sound. In short, rather than artificially allocating frequencies to existing songs, frequencies are distributed in the production and composition stages. This method has the advantage that it does not give the user a sense of incongruity in the sound and that the composer's original expressive intention is delivered through the content without being distorted.
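The idea of distributing frequencies among sounds that play at the same time can be sketched as follows. In the actual production this distribution was made at the composition stage rather than by filtering a finished mix; the band edges below are illustrative only, loosely echoing ranges mentioned elsewhere in the paper, and the noise stems are placeholders.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

SAMPLE_RATE = 48000  # Hz

# Illustrative, non-overlapping bands for sounds that share a time slot.
BANDS = {
    "background_music": (500.0, 1000.0),   # e.g. a low percussion theme
    "dialogue": (1000.0, 3000.0),          # near the ear's resonance range
    "effects": (3000.0, 8000.0),           # hypothetical band for effects
}

def confine_to_band(signal, band):
    """Keep only the allocated band of a stem (stand-in for EQ/arrangement)."""
    sos = butter(4, band, btype="bandpass", fs=SAMPLE_RATE, output="sos")
    return sosfiltfilt(sos, signal)

# White-noise placeholders for the real stems.
rng = np.random.default_rng(0)
stems = {name: rng.standard_normal(SAMPLE_RATE) for name in BANDS}
mixed = sum(confine_to_band(stems[name], band) for name, band in BANDS.items())
```

Because each stem occupies its own band, summing them at output time does not let one sound mask or cancel another, which is the effect the production aimed for.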

3.3 Individual time sound

Individual time sound means sound in which only one of the sound effects, music, or dialogue is output at a given time. In “A Midsummer Night's Palace”, individual time sounds were needed in the scene where the user is evaluated by Haitai on whether they can extinguish a fire and in the scene in which each of the four guardian gods appears. In the evaluation scene, the sound effect of the moving scale was produced as an individual time sound. In the scene where each guardian god appears, the sound effect of the stone door opening appears as an individual time sound. Owing to the nature of “A Midsummer Night's Palace”, the sound effects produced by separate programming mainly appear as individual time sounds.

Since only one of the sound effects, dialogue, or music is output at a time, individual time sound does not suffer the frequency collisions of simultaneous time sound, and there is no need to worry about the noise canceling phenomenon. Therefore, this researcher used 'sound-based engineering technology', which has many accumulated techniques for delivering rich sound to users, for individual time sound production. This is because 'sound-based engineering technology' is the main method used in existing sound production, and a great deal of know-how from various composers has accumulated around it. This approach has the advantage of providing a familiar sound to the user.

3.4 Background Music

Background music is important in that it must appear continuously within the content while conveying the atmosphere of the content. Also, since background music belongs to simultaneous time sound, it was produced in consideration of frequency collisions with the other sounds.

In “A Midsummer Night's Palace”, cheerful and bright background music was used to convey a positive image of Gyeongbokgung Palace, and magnificent, urgent background music was used to heighten the atmosphere of the storyline of extinguishing the fire. To create the upbeat background music, seven instruments were used: sogeum, gayageum, acoustic guitar, shaker, piano, strings, and upright bass. To preserve the setting of Gyeongbokgung Palace, the traditional Korean instruments sogeum and gayageum were selected to give a traditional Korean feel. The tones of the instruments were shaped so that frequencies of 10,000-20,000 Hz were emphasized. In addition, the melody drew on Taepyeong, a traditional Korean folk song.

The result of analyzing the sound with the Logic program is shown in Table 1. The image at the upper left shows the output level at each frequency. In particular, the frequency plot at the lower left shows that the curve is lowest between 2,000 and 5,000 Hz. This means that the least sound is output within this range, so if the sound effects and dialogue are placed within these frequencies, the sounds can be output harmoniously. In addition, the image at the upper right shows the sound resolution perceived by VR animation users. The fact that the green is spread appropriately, rather than filling a straight line or a semicircle, confirms that a sound resolution perceived positively by VR animation users was achieved.

Table 1. Background music analysis using the Logic program

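As a rough code analogue of reading the analyzer view summarized in Table 1, the sketch below measures how much energy a track places in each frequency band, so that the quietest band (here expected around 2,000-5,000 Hz, as in the analysis above) can be reserved for dialogue and sound effects. The input signal and band edges are placeholders, not the actual mixdown.

```python
import numpy as np

SAMPLE_RATE = 48000  # Hz
BAND_EDGES = [0, 500, 1000, 2000, 5000, 10000, 20000]  # Hz, illustrative

def band_energies(signal):
    """Return total spectral energy per band, labeled '<lo>-<hi> Hz'."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / SAMPLE_RATE)
    energies = {}
    for lo, hi in zip(BAND_EDGES[:-1], BAND_EDGES[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        energies[f"{lo}-{hi} Hz"] = float(spectrum[mask].sum())
    return energies

music = np.random.default_rng(1).standard_normal(SAMPLE_RATE * 5)  # placeholder
energies = band_energies(music)
quietest_band = min(energies, key=energies.get)
```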

Second, in “A Midsummer Night's Palace”, the background music with a magnificent atmosphere was produced with strings and four percussion instruments: samul drum, orchestra drum, bass drum, and janggu. The traditional Korean instruments samul drum and janggu were chosen to bring out a traditional Korean feeling. Here, the tones of the instruments were set so that frequencies between 500 and 1,000 Hz were emphasized. The image at the upper left shows the output level at each frequency. In particular, the frequency plot on the left shows that the curve is largest between 0 and 500 Hz and that frequencies above 2,000 Hz are hardly output at all. This was done to create a strong and magnificent feeling by emphasizing only the low-frequency band.

3.5 Script sound

Script sound refers to the words spoken by a character and plays the role of conveying the character's personality or the animation's story. Accordingly, the dialogue was judged to be the sound that must have the strongest delivery to VR content users [11]. Two types of dialogue appear in “A Midsummer Night's Palace”. The first is dialogue within simultaneous time sound: for example, when the monkey character introduces a character or the story to the user, the sound effects, background music, and dialogue must all be output. The second is dialogue that appears within individual time sound: for example, in the scene explaining that the user can use a fan within the content by shaking the right or left hand, only the dialogue appears.

The dialogue of VR contents should be produced differently depending on whether it is dialogue within simultaneous time sound or dialogue within individual time sound. In the case of dialogue within simultaneous time sound, the background music, the possibility of frequency collisions with the sound effects, and the user's interaction must all be considered. Therefore, this researcher produced the dialogue within simultaneous time sound with reference to the resonance frequency of the human ear, 2,500-2,800 Hz [12], and at the same time kept it within 1,000-3,000 Hz, a range that does not overlap with the sound effects and background music.

3.6 Sound Effect

A characteristic of the sound effects used in “A Midsummer Night's Palace” is that they are short sounds with playback times of 1 to 10 seconds. A sound effect is short, but it must be conveyed to the user most clearly [13]. In “A Midsummer Night's Palace”, the sound of burning, the sound of the wind, and the sound of Gyeongbokgung collapsing appeared as sound effects to convey the fire scene realistically.

The sound effect should have the largest volume among the sound categories, and while a sound effect is playing it does not matter if the other sounds are not heard. When producing simultaneous time sound, the method chosen in consideration of the coding stage was to make the sound effects about 1.5 times louder than the background music and dialogue, which avoids noise canceling and makes the delivery of the sound effects clearer. Drawing on the phenomenon that people cannot hear other sounds when a strong sound occurs among ordinary sounds, the distinction was made only by loudness, without distributing frequencies.
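The loudness rule described above can be sketched as follows: the sound effect is scaled so that its level is roughly 1.5 times the combined level of the background music and dialogue. RMS is used here as the level measure, which is an assumption since the text does not specify one, and the signals are placeholders.

```python
import numpy as np

def rms(x):
    """Root-mean-square level of a signal."""
    return float(np.sqrt(np.mean(x ** 2)))

def scale_effect(effect, music, dialogue, ratio=1.5):
    """Scale the effect so its RMS is `ratio` times the music+dialogue bed."""
    bed_level = rms(music + dialogue)
    current = rms(effect)
    if current == 0.0:
        return effect
    return effect * (ratio * bed_level / current)

rng = np.random.default_rng(2)
music, dialogue, effect = (0.1 * rng.standard_normal(48000) for _ in range(3))
loud_effect = scale_effect(effect, music, dialogue)
```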

4. Comparison of Three Sound Production Methodologies

Table 2 compares the two existing sound production methodologies using the Logic program. When “Boneworks”, a representative VR game using computer-based sound engineering, is analyzed with the Logic program, the figure and graph on the left show that the output of frequencies above 500 Hz is very low and that only frequencies below 500 Hz are output. This means that the sound is not output spatially and that the sound resolution is very low. In particular, when the sound resolution is degraded, only low-frequency sound is output, so the user feels a stuffy or clogged sensation, as if suffering hearing loss. However, when an actual user plays, it can be seen that the position of the ball's sound effect changes every time the user moves the ball in the scene. In the end, when the computer-based engineering method is used, the immersion of VR users is increased through clear sound localization, but the irony arises that immersion is also reduced because the drop in sound resolution feels like hearing loss.

Table 2. Analysis table of each methodology


Second, when the VR animation “invasions!”, which uses the sound-based engineering method, is analyzed with the Logic program, all frequencies are output evenly, and the spatial output of the sound shows that the sound resolution is very high; this is its advantage. However, when an actual user watches the animation, it can be seen that the sound effect is output from the left even though the spacecraft appears on the right in Table 3. That is, although the sound-based engineering method can achieve high sound resolution, it has the disadvantage that it cannot change the sound according to the user's interaction.

Table 3. Timeline-based sound analysis table


Table 3 presents an analysis, using the Logic program, of the timeline-based sound engineering methodology proposed in this study. The images on the left show the setting in which the background music and sound effect are output from the left, together with the analysis data for that sound; the images on the right show the setting in which they are output from the right, with the corresponding analysis data. Looking at the two images on the left together, even when the background music and sound effects are set to be output from the left, not only is the left green bar, which indicates volume, higher than the right, but the green dots within the semicircle also show that the sound is output spatially. In addition, the frequency-related images are evenly filled with dark green, showing that all frequencies are output evenly.

Likewise, the two images on the right show that the sound is output evenly while the position set to the right is reflected in the same way. Therefore, if the timeline-based engineering method is used, sound can be output with positional changes clearly reflected even when background music and sound effects with different positions are output simultaneously, and good sound resolution can be maintained without noise canceling. In conclusion, timeline-based sound engineering can enhance the user's immersion by maintaining the resolution of the sound even when background music and a sound whose position changes are output simultaneously, while at the same time changing the position of the sound according to user interaction.

5. Conclusion

The reason VR technology is in the spotlight in the recent cultural contents market seems to be that it can achieve effects like reality despite being virtual. However, the various studies aimed at enhancing the 'immersion' of VR contents, so that users feel virtual spaces as reality, are still concentrated on the scientific-technological and visual fields. Moreover, research on VR sound is difficult because it requires expert opinions in two fields: sound engineering and computer-based interactive sound engineering. Because of this difficulty, although the main characteristic of VR contents is that the user's interaction and time and space are reflected, these characteristics often go unused in terms of sound. Therefore, in this study, a timeline-based sound production methodology that considers computer-based engineering sound and sound-based engineering sound simultaneously was proposed through the VR content “A Midsummer Night's Palace”, in which the researcher participated. First, the storyboard was analyzed according to the user's interaction, and the required sound effects, dialogue, and music were identified. Second, the analyzed sounds were classified into 'simultaneous time sound', in which multiple sound effects, dialogue, and music are output in harmony at the same time, and 'individual time sound', in which a single sound effect, dialogue, or music is output in its own time slot.

The sound effects, dialogue, and music required for the 'simultaneous time sound' and 'individual time sound' classified in this way were analyzed and produced for each category. Third, interaction coding was carried out for the sound effects, dialogue, and music produced in the simultaneous time sound and individual time sound categories. Finally, the produced sound was applied to the video to complete the content. When sound was produced with this method, the following advantages could be confirmed. First, the sound resolution of the 'individual time sound' could be greatly increased. Since an individual time sound outputs only one track at a given time, the resolution loss caused by overlap with other sounds, that is, noise canceling, did not occur. Therefore, it was concluded that sound producers can increase the resolution of the sound by using existing sound-based engineering techniques, without considering collisions with other sounds, when producing individual time sounds. If the individual time slots had not been separated and only one method, computer-based interactive sound engineering, had been used, the sound producer would have had to work with the possibility of collisions with other sounds constantly in mind. With the method proposed here, however, the existing sound engineering technology can be applied to the individual time sounds divided along the timeline, increasing the sound resolution. Second, through frequency classification that considers the timeline, the sound producer can deliver the various sounds intended by the director without collisions between sounds.

In general, when only computer-based interactive sound engineering technology was applied in existing VR sound production, sound producers created each individual file with the best possible resolution for the multiple sounds appearing at the same time, regardless of the timeline, and the engineer then coded the output files according to the timeline in the storyboard. This approach has the disadvantage that collisions occur between sounds when the files, each with the highest resolution, appear in the content at the same time. If the methodology proposed in this study is used, sound effects, background music, and dialogue can appear as the creator intended even when they appear at the same time. That is, if a sound producer creates sounds after adding a 'classification according to the timeline' based on the overall storyboard, the information in the sounds is not distorted by frequency collisions and is delivered to the user intact.

Compared with the existing methods, this methodology also has aspects to be supplemented. First, in the absence of smooth communication, this methodology can be largely meaningless. Even if production starts from a fixed storyboard, changes can be made during the production process. For sound producers to classify frequencies smoothly along the timeline based on the storyboard, smooth communication with the engineers and story writers participating in content production is essential whenever such changes occur. In fact, in the scene of “A Midsummer Night's Palace” in which the protagonist's qualities are evaluated, the sound producer produced the background music, sound effects, and dialogue for simultaneous output based on the storyboard received; in the end, however, only one of the sounds whose frequencies had been divided among the three categories was applied to the content, and the resulting sound quality was regrettable. Therefore, to use this method, continuous communication with the various people participating in the content production process is essential.

Second, the VR user's hardware must be of a certain quality or higher. “A Midsummer Night's Palace” was demonstrated at M Contemporary in Yeoksam-dong, Seoul, where the sound had to be output through consumer-grade speakers. Such speakers are usually two to three inches in size, and low-frequency sounds below 1,000 Hz are difficult for users to hear on them. No matter how high the quality of the sound produced, users need hardware of sufficient quality to experience it.

This study is meaningful in that it proposes a new methodology, based on 'classification of the timeline according to the storyboard', in the relatively underexplored area of VR content sound production. Reflecting these characteristics, this researcher defines the methodology as 'Timeline Sound Engineering'. In future follow-up studies, the limitations of this method will continue to be supplemented.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. W. R. Noh, "'Perception' with VR in the music industry... Holding of One Direction's first VR concert," Accessed: Apr. 8, 2019. [Online]. Available: http://www.aitimes.com/news/articleView.html?idxno=48139
  2. T. J. Kim, "KT will release 5G-equipped VR next year," Jul. 01, 2019. [Online]. Available: https://zdnet.co.kr/view/?no=20190701155323
  3. J. Steuer, "Defining Virtual Reality: Dimensions Determining Telepresence," Journal of Communication, vol. 42, no. 4, pp. 73-93, Dec. 1992, doi: 10.1111/j.1460-2466.1992.tb00812.x.
  4. Y. J. Kim, "Optical Review of VR HMD Image Immersion," M.S. thesis, Dept. of Formative Arts, Chung-Ang University, Seoul, Republic of Korea, 2018.
  5. W. E. Ha, "Space Production for Visual Guidance in VR Animation," M.S. thesis, Dept. of Film Arts-Animation Making, Chung-Ang University, Seoul, Republic of Korea, 2019.
  6. H. N. Kim, G. M. Hong, and H. J. Lee, "Interactive virtual reality sound output method," presented at HCI KOREA 2019, Seoul, Republic of Korea, Feb. 13-16, 2019. [Online]. Available: http://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE08008074
  7. S. K. Kang, "Catch noise with sound... The era of noise cancellation," [Online]. Available: http://dongascience.donga.com/news.php?idx=32588
  8. J. H. Lee and J. Y. Kang, "Redefine from the Consumer's Perspective of VR Films," Manga Animation Research, no. 57, pp. 521-543, Dec. 2019, doi: 10.7230/KOSCAS.2019.57.521.
  9. J. H. Choi, M. G. Hwang, and P. G. Kim, "Design and Implementation of a Digital Storyboard System for Efficient Video Contents Production," presented at the Spring General Conference A, Republic of Korea, May 30, 2008. [Online]. Available: https://scienceon.kisti.re.kr/srch/selectPORSrchArticle.do?cn=NPAP08297801&dbt=NPAP
  10. K. B. Park, Virtual Reality: Augmented Reality and VRML, 21C Publishing, pp. 13-14, 2012.
  11. J. Y. Kim, "Study of storyboard template for VR video film," presented at the International Conference Proceedings of the Association, Gwangju, Republic of Korea, Jun. 24-25, 2016. [Online]. Available: http://www.dbpia.co.kr.proxy.cau.ac.kr/journal/articleDetail?nodeId=NODE06700268
  12. H. S. Jung, H. A. Gu, S. M. Lee, S. K. Gu, S. H. Lee, and T. H. Ryu, "Changes in Resonance Frequency and Length of External Auditory Canal in Relation to Age," Korean Journal of Otolaryngology-Head and Neck Surgery, vol. 44, no. 2, pp. 144-147, Jan. 2000. [Online]. Available: http://www.kjorl.org/journal/view.php?number=3159
  13. Y. J. Bang and G. Y. Noh, "The Effects of Sound Presence on User Experience and Brain Activity Pattern in Digital Game," Korean Society for Journalism and Communication Studies, vol. 59, no. 3, pp. 157-182, Jun. 2015. [Online]. Available: https://www.dbpia.co.kr/Journal/articleDetail?nodeId=NODE06383680