An Object-based Audio Rendering System based on Parametric Coding

Bibliographic Details
Main Authors: Kuo-lun Huang, 黃國倫
Other Authors: Pao-chi Chang
Format: Others
Language: zh-TW
Published: 2011
Online Access: http://ndltd.ncl.edu.tw/handle/09435386948259977534
Description
Summary: Master's === National Central University === Graduate Institute of Communication Engineering === 99 === Nowadays, multimedia applications of 3D virtual reality are increasingly popular. Although most applications focus on 3D video, combining 3D video with audio processing can enrich the user experience. In this thesis, we propose an object-based audio rendering system (OARS) for 3D applications such as first-person shooter (FPS) games. With the proposed system, users are able to locate audio objects, whether they are static or in motion. Since in many applications the audio objects may reside at remote sites connected over the Internet, bitrate reduction remains critical. The system consists of an audio analysis part and an audio synthesis part. In the analysis part, we apply a parametric coding technique to generate spatial parameters, namely the time difference and the intensity difference between an object and the loudspeakers, which reduce the bitrate while preserving the spatial information. In the synthesis part, we reconstruct the multi-channel audio outputs by integrating an audio signal with the spatial parameters. We evaluate the system performance by analyzing the spectra of the processed audio and by subjective listening tests. Based on a modified ITU-R seven-grade (-3 to 3) subjective quality evaluation, the proposed system scores 1.49 on average for static audio objects and 1.31 for moving objects.
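
The abstract does not specify how the time and intensity differences are estimated or applied; the sketch below is only a minimal illustration of the general analysis/synthesis idea it describes, assuming a mono object reference signal, per-loudspeaker channels, cross-correlation for the time difference, and an energy ratio in dB for the intensity difference. The function names and parameter choices are hypothetical, not the thesis's actual implementation.

    import numpy as np

    def analyze_object(obj, channels, fs):
        """Estimate per-loudspeaker spatial parameters for one audio object.

        obj      : 1-D array, the object's mono reference signal
        channels : 2-D array (num_speakers, num_samples), loudspeaker feeds
        fs       : sampling rate in Hz
        Returns one (time_difference_seconds, intensity_difference_db)
        pair per loudspeaker channel.
        """
        params = []
        for ch in channels:
            # Time difference: lag of the cross-correlation peak between
            # the object signal and this loudspeaker channel (assumed method).
            corr = np.correlate(ch, obj, mode="full")
            lag = np.argmax(corr) - (len(obj) - 1)
            time_diff = lag / fs

            # Intensity difference: power ratio in dB between the channel
            # and the object reference signal (assumed method).
            eps = 1e-12
            level_db = 10.0 * np.log10((np.sum(ch ** 2) + eps) /
                                       (np.sum(obj ** 2) + eps))
            params.append((time_diff, level_db))
        return params

    def synthesize_object(obj, params, fs):
        """Rebuild multi-channel output from the mono object and its parameters."""
        out = []
        for time_diff, level_db in params:
            gain = 10.0 ** (level_db / 20.0)      # power dB -> amplitude gain
            delay = int(round(time_diff * fs))    # seconds -> samples
            ch = np.zeros_like(obj, dtype=float)
            if delay >= 0:
                ch[delay:] = obj[:len(obj) - delay]
            else:
                ch[:delay] = obj[-delay:]
            out.append(gain * ch)
        return np.stack(out)

Transmitting only the mono object signal plus one (time, intensity) pair per loudspeaker, instead of all loudspeaker channels, is what yields the bitrate reduction described in the abstract; the synthesis step then reapplies those parameters to recover the spatial impression.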