Reconstruction of Generative Adversarial Networks in Cross Modal Image Generation with Canonical Polyadic Decomposition
Generating pictures from text is an interesting, classic, and challenging task. Benefiting from the development of generative adversarial networks (GANs), the generation quality of this task has improved greatly, and many excellent cross-modal GAN models have been put forward. These models add extensive layers and constraints to produce impressive generated pictures. However, the complexity and computational cost of existing cross-modal GANs are too high for deployment on mobile terminals. To solve this problem, this paper designs a compact cross-modal GAN based on canonical polyadic decomposition. We replace an original convolution layer with three small convolution layers and use an autoencoder to stabilize and speed up training. Experimental results show that our model achieves roughly 20% compression in both parameters and FLOPs without loss of quality in the generated images.
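The compression idea in the abstract, replacing one convolution layer with three small ones, can be illustrated with a parameter count. The specific factorization below (a 1×1 input projection, a rank-R spatial convolution, and a 1×1 output projection) is an assumption based on the standard CP factorization of a convolution kernel; the paper does not specify its exact layer shapes, and the rank value here is illustrative only.

```python
# Sketch: parameter count of a standard convolution vs. a CP-style
# factorization into three small convolutions. The factorization scheme
# and the rank are illustrative assumptions, not taken from the paper.

def conv2d_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution (bias ignored)."""
    return c_out * c_in * k * k

def cp_conv2d_params(c_in, c_out, k, rank):
    """Weight count of three small layers: a 1x1 projection
    (c_in -> rank), a depthwise-style k x k convolution on the rank
    channels, and a 1x1 expansion (rank -> c_out)."""
    return c_in * rank + rank * k * k + rank * c_out

if __name__ == "__main__":
    c_in, c_out, k, rank = 256, 256, 3, 64
    full = conv2d_params(c_in, c_out, k)
    cp = cp_conv2d_params(c_in, c_out, k, rank)
    print(f"full: {full}  cp: {cp}  ratio: {cp / full:.2%}")
```

The ratio shrinks as the chosen rank decreases, which is the knob that trades generation quality against model size; FLOPs scale the same way, since each factor convolution is applied once per spatial location.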
Main Authors: | Ruixin Ma, Junying Lou, Peng Li, Jing Gao |
---|---|
Format: | Article |
Language: | English |
Published: | Hindawi-Wiley, 2021-01-01 |
Series: | Wireless Communications and Mobile Computing |
Online Access: | http://dx.doi.org/10.1155/2021/8868781 |
id |
doaj-0d362a9551ce4720bf7748119a7dd7da |
---|---|
record_format |
Article |
spelling |
Ruixin Ma (School of Software), Junying Lou (School of Software), Peng Li (School of Software), Jing Gao (School of Software); Hindawi-Wiley, Wireless Communications and Mobile Computing, ISSN 1530-8677, 2021-01-01, doi:10.1155/2021/8868781 |
collection |
DOAJ |
language |
English |
format |
Article |
sources |
DOAJ |
author |
Ruixin Ma, Junying Lou, Peng Li, Jing Gao |
title |
Reconstruction of Generative Adversarial Networks in Cross Modal Image Generation with Canonical Polyadic Decomposition |
publisher |
Hindawi-Wiley |
series |
Wireless Communications and Mobile Computing |
issn |
1530-8677 |
publishDate |
2021-01-01 |
description |
Generating pictures from text is an interesting, classic, and challenging task. Benefiting from the development of generative adversarial networks (GANs), the generation quality of this task has improved greatly, and many excellent cross-modal GAN models have been put forward. These models add extensive layers and constraints to produce impressive generated pictures. However, the complexity and computational cost of existing cross-modal GANs are too high for deployment on mobile terminals. To solve this problem, this paper designs a compact cross-modal GAN based on canonical polyadic decomposition. We replace an original convolution layer with three small convolution layers and use an autoencoder to stabilize and speed up training. Experimental results show that our model achieves roughly 20% compression in both parameters and FLOPs without loss of quality in the generated images. |
url |
http://dx.doi.org/10.1155/2021/8868781 |