Constrained Generative Adversarial Networks

Generative Adversarial Networks (GANs) are a powerful subclass of generative models, yet training them to reach a Nash equilibrium remains challenging. A number of experiments have indicated that one possible solution is to bound the function space of the discriminator: in practice, when the standard loss function is optimized without limiting the discriminator's output, the discriminator may fail to converge. To reach the Nash equilibrium faster during training and to obtain better generated data, we propose constrained generative adversarial networks (GAN-C), which introduce a constraint on the discriminator's output. We theoretically prove that the proposed loss function shares the same Nash equilibrium as the standard one, and experiments on mixture-of-Gaussians, MNIST, CIFAR-10, STL-10, FFHQ, and CAT datasets show that it stabilizes training and yields higher-quality images.
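
This record does not give the exact form of the GAN-C constraint, so the following is only a rough sketch of the general idea the abstract describes: bounding the discriminator's output during an otherwise standard GAN update. It is written in PyTorch; the quadratic penalty, its weight `lam`, and the helper `discriminator_step` are illustrative assumptions, not the authors' formulation.

# Hypothetical sketch (not the paper's GAN-C loss): a standard GAN
# discriminator update with an added penalty that discourages large
# raw discriminator outputs, keeping the discriminator bounded.
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def discriminator_step(D, G, real, z, opt_D, lam=0.1):
    """One discriminator update with an output-magnitude penalty."""
    opt_D.zero_grad()
    fake = G(z).detach()                        # no gradients into G here
    logits_real, logits_fake = D(real), D(fake)
    # Standard GAN discriminator loss: real -> 1, fake -> 0.
    loss = (bce(logits_real, torch.ones_like(logits_real))
            + bce(logits_fake, torch.zeros_like(logits_fake)))
    # Assumed constraint: quadratic penalty on the raw outputs.
    penalty = lam * (logits_real.pow(2).mean() + logits_fake.pow(2).mean())
    (loss + penalty).backward()
    opt_D.step()
    return loss.item(), penalty.item()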

Bibliographic Details

Main Authors: Xiaopeng Chao, Jiangzhong Cao, Yuqin Lu, Qingyun Dai, Shangsong Liang
Format: Article
Language: English
Published: IEEE, 2021-01-01
Series: IEEE Access
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2021.3054822
Subjects: Generative adversarial networks; Nash equilibrium; Lipschitz constraint
Online Access: https://ieeexplore.ieee.org/document/9335934/
Author Details

Xiaopeng Chao (ORCID: https://orcid.org/0000-0002-7494-8668), School of Information Engineering, Guangdong University of Technology, Guangzhou, China
Jiangzhong Cao, School of Information Engineering, Guangdong University of Technology, Guangzhou, China
Yuqin Lu, School of Information Engineering, Guangdong University of Technology, Guangzhou, China
Qingyun Dai, School of Electronic and Information Engineering, Guangdong Polytechnic Normal University, Guangzhou, China
Shangsong Liang (ORCID: https://orcid.org/0000-0003-1625-2168), School of Computer Science and Technology, Sun Yat-sen University, Guangzhou, China

Published in: IEEE Access, vol. 9, pp. 19208-19218, 2021
DOAJ Record ID: doaj-b1d4db230d804e86ae0a365f163a2562