Towards a Perceptual Loss: Using a Neural Network Codec Approximation as a Loss for Generative Audio Models

© 2019 Association for Computing Machinery. Generative audio models based on neural networks have led to considerable improvements across fields including speech enhancement, source separation, and text-to-speech synthesis. These systems are typically trained in a supervised fashion using simple element-wise ℓ1 or ℓ2 losses. However, because they do not capture properties of the human auditory system, such losses encourage modelling perceptually meaningless aspects of the output, wasting capacity and limiting performance. Additionally, while adversarial models have been employed to encourage outputs that are statistically indistinguishable from ground truth and have resulted in improvements in this regard, such losses do not need to explicitly model perception as their task; furthermore, training adversarial networks remains an unstable and slow process. In this work, we investigate an idea fundamentally rooted in psychoacoustics. We train a neural network to emulate an MP3 codec as a differentiable function. Feeding the output of a generative model through this MP3 function, we remove signal components that are perceptually irrelevant before computing a loss. To further stabilize gradient propagation, we employ intermediate layer outputs to define our loss, as found useful in image domain methods. Our experiments using an autoencoding task show an improvement over standard losses in listening tests, indicating the potential of psychoacoustically motivated models for audio generation.

Bibliographic Details
Main Authors: Ananthabhotla, Ishwarya (Author), Ewert, Sebastian (Author), Paradiso, Joseph A (Author)
Format: Article
Language:English
Published: Association for Computing Machinery (ACM), 2021-11-02T17:08:23Z.
Subjects:
Online Access: Get fulltext
LEADER 02177 am a22001813u 4500
001 137115
042 |a dc 
100 1 0 |a Ananthabhotla, Ishwarya  |e author 
700 1 0 |a Ewert, Sebastian  |e author 
700 1 0 |a Paradiso, Joseph A  |e author 
245 0 0 |a Towards a Perceptual Loss: Using a Neural Network Codec Approximation as a Loss for Generative Audio Models 
260 |b Association for Computing Machinery (ACM),   |c 2021-11-02T17:08:23Z. 
856 |z Get fulltext  |u https://hdl.handle.net/1721.1/137115 
520 |a © 2019 Association for Computing Machinery. Generative audio models based on neural networks have led to considerable improvements across fields including speech enhancement, source separation, and text-to-speech synthesis. These systems are typically trained in a supervised fashion using simple element-wise ℓ1 or ℓ2 losses. However, because they do not capture properties of the human auditory system, such losses encourage modelling perceptually meaningless aspects of the output, wasting capacity and limiting performance. Additionally, while adversarial models have been employed to encourage outputs that are statistically indistinguishable from ground truth and have resulted in improvements in this regard, such losses do not need to explicitly model perception as their task; furthermore, training adversarial networks remains an unstable and slow process. In this work, we investigate an idea fundamentally rooted in psychoacoustics. We train a neural network to emulate an MP3 codec as a differentiable function. Feeding the output of a generative model through this MP3 function, we remove signal components that are perceptually irrelevant before computing a loss. To further stabilize gradient propagation, we employ intermediate layer outputs to define our loss, as found useful in image domain methods. Our experiments using an autoencoding task show an improvement over standard losses in listening tests, indicating the potential of psychoacoustically motivated models for audio generation. 
546 |a en 
655 7 |a Article 
773 |t 10.1145/3343031.3351148 
773 |t MM 2019 - Proceedings of the 27th ACM International Conference on Multimedia
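The loss described in the abstract above can be illustrated with a minimal sketch: pass both the generated and the reference signal through a frozen network and compute the error on intermediate-layer activations rather than on raw samples. Note that the two-layer network below uses random stand-in weights for illustration only; it is not the trained MP3 codec approximation the paper describes, and all function names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen "codec approximation": two dense ReLU layers with
# random stand-in weights (the paper trains this network to mimic MP3).
W1 = rng.standard_normal((64, 256)) * 0.05
W2 = rng.standard_normal((256, 64)) * 0.05

def codec_features(x):
    """Return the intermediate-layer activations for a signal frame x of shape (64,)."""
    h1 = np.maximum(0.0, x @ W1)   # first hidden layer (ReLU)
    h2 = np.maximum(0.0, h1 @ W2)  # second hidden layer (ReLU)
    return [h1, h2]

def perceptual_loss(generated, reference):
    """Sum of mean squared errors between corresponding intermediate activations."""
    feats_g = codec_features(generated)
    feats_r = codec_features(reference)
    return sum(float(np.mean((fg - fr) ** 2)) for fg, fr in zip(feats_g, feats_r))

# Identical inputs give zero loss; a perturbed input gives a positive loss.
x = rng.standard_normal(64)
print(perceptual_loss(x, x))             # 0.0
print(perceptual_loss(x, x + 0.1) > 0)   # True
```

In the paper's setting the feature network would be trained first to reproduce MP3-coded audio and then held fixed, so that gradients of this loss flow back into the generative model through the perceptually weighted representation.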