Constrained Generative Adversarial Networks

Bibliographic Details
Main Authors: Xiaopeng Chao, Jiangzhong Cao, Yuqin Lu, Qingyun Dai, Shangsong Liang
Format: Article
Language: English
Published: IEEE 2021-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/9335934/
Description
Summary: Generative Adversarial Networks (GANs) are a powerful subclass of generative models, yet training them effectively to reach a Nash equilibrium remains a challenge. A number of experiments have indicated that one possible solution is to bound the function space of the discriminator. In practice, when optimizing the standard loss function without limiting the discriminator's output, the discriminator may fail to converge. To reach the Nash equilibrium faster during training and obtain better generated data, we propose constrained generative adversarial networks (GAN-C), in which a constraint on the discriminator's output is introduced. We theoretically prove that our proposed loss function shares the same Nash equilibrium as the standard one, and our experiments on the mixture-of-Gaussians, MNIST, CIFAR-10, STL-10, FFHQ, and CAT datasets show that our loss function better stabilizes training and yields higher-quality images.
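The abstract does not specify the exact form of the constraint, but the general idea of penalizing a discriminator whose raw output drifts outside a bounded range can be sketched as follows. This is an illustrative NumPy toy, not the paper's method: the bound `bound`, the penalty weight `lam`, and the quadratic hinge penalty are all assumptions made here for demonstration.

```python
import numpy as np

def standard_d_loss(d_real, d_fake):
    """Standard non-saturating GAN discriminator loss on raw logits:
    mean of -log(sigmoid(D(x))) - log(1 - sigmoid(D(G(z))))."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    return float(np.mean(-np.log(sigmoid(d_real))
                         - np.log(1.0 - sigmoid(d_fake))))

def constrained_d_loss(d_real, d_fake, bound=1.0, lam=10.0):
    """Hypothetical constrained variant: add a quadratic hinge penalty
    whenever the discriminator's raw output leaves [-bound, bound].
    (Illustrative only; the paper's actual constraint may differ.)"""
    outputs = np.concatenate([d_real, d_fake])
    excess = np.maximum(np.abs(outputs) - bound, 0.0)  # zero inside the band
    penalty = float(np.mean(excess ** 2))
    return standard_d_loss(d_real, d_fake) + lam * penalty
```

When every output already lies inside the band, the penalty term vanishes and the two losses coincide; outputs far outside the band are penalized quadratically, which is one simple way of discouraging the unbounded discriminator outputs the abstract associates with non-convergence.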
ISSN:2169-3536