Exploring Efficient Acceleration Architecture for Winograd-Transformed Transposed Convolution of GANs on FPGAs
The acceleration architecture of transposed convolution layers is essential since transposed convolution operations, as critical components of the generative model in generative adversarial networks, are inherently computationally intensive. In addition, the pre-processing of inserting and padding zeros into the input feature maps causes many ineffective operations. Most of the known FPGA (Field Programmable Gate Array) based architectures for convolution layers cannot tackle these issues. In this paper, we first propose a novel dataflow that splits the filters and their corresponding input feature maps into four sets and then applies the Winograd algorithm for fast, efficient processing. Second, we present an underlying FPGA-based accelerator architecture that features its own processing units with an embedded parallel, pipelined, and buffered processing flow. Finally, a parallelism-aware memory partition technique and the hardware design space are explored in coordination, to support the required parallel operations and to find optimal design parameters, respectively. Experiments on several state-of-the-art GANs show that our methods achieve an average performance of 639.2 GOPS on the Xilinx ZCU102 and 162.5 GOPS on the Xilinx VC706. Compared with a conventional optimized accelerator baseline, this work demonstrates an 8.6× (up to 11.7×) increase in processing performance, whereas prior studies report below a 2.2× improvement.
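To make the inefficiency concrete, below is a minimal NumPy sketch (not the authors' hardware design or code; function names and sizes are illustrative) that computes a stride-2 transposed convolution two ways: directly by the scatter definition, and through the zero-insertion pre-processing criticised in the abstract, while counting how many multiplications land on inserted or padded zeros.

```python
import numpy as np

def transposed_conv2d_scatter(x, w, stride=2):
    """Reference transposed convolution: scatter each input pixel times the kernel."""
    H, W = x.shape
    K = w.shape[0]
    out = np.zeros(((H - 1) * stride + K, (W - 1) * stride + K))
    for i in range(H):
        for j in range(W):
            out[i * stride:i * stride + K, j * stride:j * stride + K] += x[i, j] * w
    return out

def transposed_conv2d_zero_insert(x, w, stride=2):
    """Same result via zero insertion + ordinary convolution (the wasteful baseline)."""
    H, W = x.shape
    K = w.shape[0]
    # Insert (stride - 1) zeros between pixels, then pad by K - 1 on every side.
    up = np.zeros(((H - 1) * stride + 1, (W - 1) * stride + 1))
    up[::stride, ::stride] = x
    up = np.pad(up, K - 1)
    wf = w[::-1, ::-1]  # flipped kernel, so the sliding window matches the scatter form
    out = np.zeros((up.shape[0] - K + 1, up.shape[1] - K + 1))
    zero_macs = total_macs = 0
    for y in range(out.shape[0]):
        for x_ in range(out.shape[1]):
            patch = up[y:y + K, x_:x_ + K]
            out[y, x_] = np.sum(patch * wf)
            total_macs += K * K
            zero_macs += int(np.count_nonzero(patch == 0))
    print(f"multiplications on inserted/padded zeros: {zero_macs / total_macs:.1%}")
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))   # toy 8x8 feature map
w = rng.standard_normal((3, 3))   # toy 3x3 filter
assert np.allclose(transposed_conv2d_scatter(x, w), transposed_conv2d_zero_insert(x, w))
```

For a stride of 2 and a 3×3 kernel, roughly four out of five multiplications in the zero-insertion form operate on zeros, which is the kind of ineffective work that the four-set split and Winograd transform described above are meant to eliminate.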
Main Authors: | Xinkai Di, Hai-Gang Yang, Yiping Jia, Zhihong Huang, Ning Mao |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2020-02-01 |
Series: | Electronics |
Subjects: | generative adversarial networks (GANs); transposed convolution; Winograd; FPGA; acceleration architecture; processing units |
Online Access: | https://www.mdpi.com/2079-9292/9/2/286 |
id |
doaj-99e41326e3644839b1215f4f7fde3ae5 |
record_format |
Article |
doi |
10.3390/electronics9020286 |
author_affiliation |
Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China (all five authors) |
collection |
DOAJ |
language |
English |
format |
Article |
sources |
DOAJ |
author |
Xinkai Di; Hai-Gang Yang; Yiping Jia; Zhihong Huang; Ning Mao |
title |
Exploring Efficient Acceleration Architecture for Winograd-Transformed Transposed Convolution of GANs on FPGAs |
publisher |
MDPI AG |
series |
Electronics |
issn |
2079-9292 |
publishDate |
2020-02-01 |
description |
The acceleration architecture of transposed convolution layers is essential since transposed convolution operations, as critical components of the generative model in generative adversarial networks, are inherently computationally intensive. In addition, the pre-processing of inserting and padding zeros into the input feature maps causes many ineffective operations. Most of the known FPGA (Field Programmable Gate Array) based architectures for convolution layers cannot tackle these issues. In this paper, we first propose a novel dataflow that splits the filters and their corresponding input feature maps into four sets and then applies the Winograd algorithm for fast, efficient processing. Second, we present an underlying FPGA-based accelerator architecture that features its own processing units with an embedded parallel, pipelined, and buffered processing flow. Finally, a parallelism-aware memory partition technique and the hardware design space are explored in coordination, to support the required parallel operations and to find optimal design parameters, respectively. Experiments on several state-of-the-art GANs show that our methods achieve an average performance of 639.2 GOPS on the Xilinx ZCU102 and 162.5 GOPS on the Xilinx VC706. Compared with a conventional optimized accelerator baseline, this work demonstrates an 8.6× (up to 11.7×) increase in processing performance, whereas prior studies report below a 2.2× improvement. |
topic |
generative adversarial networks (gans); transposed convolution; winograd; fpga; acceleration architecture; processing units |
url |
https://www.mdpi.com/2079-9292/9/2/286 |