Abstract:
Audio is a crucial aspect to consider when designing virtual reality applications, as it can add a whole new level of immersion to these experiences if used properly. To create realistic sound, it is essential to take audio spatialization into account, providing the information an individual needs to estimate the position of sound sources and the characteristics of the surrounding space. This project proposes implementing spatial audio in virtual reality scenes created with a game engine, as well as providing the theoretical foundations that explain how this can be achieved.
It first touches upon how the human auditory system estimates the direction of and distance to an audio source by interpreting cues such as interaural time and level differences, pinna reflections, reverberation and overall variations in loudness.
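As a rough illustration of the first of these cues, one classical spherical-head approximation (often attributed to Woodworth; the head radius a and speed of sound c are assumed quantities, not values given in this work) estimates the interaural time difference for a source at azimuth θ as

    ITD(\theta) \approx \frac{a}{c}\,(\theta + \sin\theta), \qquad 0 \le \theta \le \pi/2,

which, for a typical head radius of about 8.75 cm and c ≈ 343 m/s, yields a maximum of roughly 0.65 ms at θ = π/2.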
Next, the limited spatial properties of the most common audio reproduction systems are discussed, along with the reasons why they are insufficient for virtual reality applications. Two spatial audio recording and reproduction techniques, for headphones and for loudspeakers, are presented as alternatives for virtual reality scenarios in which the user remains static.
As a means of acquiring the knowledge necessary to understand more advanced spatial audio systems, the concept of the Head-Related Transfer Function (HRTF) is introduced in great detail. It is explained how HRTFs encompass all of the physical cues that condition sound localization, and how the frequency responses that characterize them can be experimentally measured and used for the artificial spatialization of virtual sources.
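To make HRTF filtering more concrete, the following is a minimal sketch, not code from this project, of binaural rendering by direct time-domain convolution: a mono signal is convolved with a left-ear and a right-ear head-related impulse response (HRIR) to produce the two output channels. The HRIR values here are hypothetical placeholders; in practice they come from a measured data set, one pair per source direction.

    using System;

    class BinauralSketch
    {
        // Naive time-domain convolution: y[n] = sum over k of h[k] * x[n - k].
        static float[] Convolve(float[] x, float[] h)
        {
            var y = new float[x.Length + h.Length - 1];
            for (int n = 0; n < y.Length; n++)
                for (int k = 0; k < h.Length; k++)
                    if (n - k >= 0 && n - k < x.Length)
                        y[n] += h[k] * x[n - k];
            return y;
        }

        static void Main()
        {
            // Hypothetical placeholder HRIRs; real ones are measured responses.
            float[] hrirLeft  = { 1.0f, 0.5f, 0.25f };
            float[] hrirRight = { 0.6f, 0.3f, 0.15f };

            float[] mono = new float[8];
            mono[0] = 1.0f; // unit impulse as a stand-in input signal

            float[] left  = Convolve(mono, hrirLeft);   // left-ear channel
            float[] right = Convolve(mono, hrirRight);  // right-ear channel
            Console.WriteLine($"left[0] = {left[0]}, right[0] = {right[0]}");
        }
    }

Real-time systems typically perform this filtering in the frequency domain and interpolate between measured HRIR pairs as the source or the listener moves.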
Several HRTF-based spatial audio systems are presented, differentiating between those that apply HRTFs as mathematical models and those that make use of experimentally measured impulse response data sets. These advanced models are the appropriate choice when spatial audio is applied to virtual reality experiences that involve user motion, as they are capable of continuously adapting to the user's position and orientation relative to the virtual sources present.
The rest of the project focuses on how some of the aforementioned HRTF-based spatial audio systems can be implemented in the Unity game engine. The engine's limited built-in spatialization options can be complemented and greatly improved with audio plugins that perform HRTF filtering and introduce features such as sound occlusion, room simulation models and sound directivity patterns.
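As a brief sketch of how such a plugin is engaged from a script, an AudioSource can be made fully 3D and handed over to whichever spatializer plugin is selected in the project's audio settings. The component below is illustrative rather than code from the project, though spatialBlend and spatialize are standard Unity AudioSource properties.

    using UnityEngine;

    // Minimal sketch: make an AudioSource fully 3D and route it through the
    // spatializer plugin selected under Project Settings > Audio.
    [RequireComponent(typeof(AudioSource))]
    public class SpatializedSource : MonoBehaviour
    {
        void Start()
        {
            var source = GetComponent<AudioSource>();
            source.spatialBlend = 1.0f; // 1 = fully 3D (distance attenuation, panning)
            source.spatialize = true;   // delegate panning to the spatializer plugin
            source.Play();
        }
    }

With this in place, the selected plugin filters the source with HRTFs appropriate to its position relative to the listener.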
Finally, three demos with different levels of complexity are developed in Unity to showcase the virtues of spatial audio in virtual reality applications.