Recurrent Conditional Generative Adversarial Network for Image Deblurring


Bibliographic Details
Main Authors: Jing Liu, Wanning Sun, Mengjie Li
Format: Article
Language: English
Published: IEEE, 2019-01-01
Series: IEEE Access
Subjects: Image deblurring; conditional generative adversarial network; receptive field recurrent; coarse-to-fine
Online Access: https://ieeexplore.ieee.org/document/8585006/
Description: Nowadays, there is an increasing demand for images with high definition and fine textures, but images captured in natural scenes usually suffer from complicated blur artifacts, caused mostly by object motion or camera shake. Since these artifacts greatly degrade visual quality, deblurring algorithms have been proposed from various perspectives. However, most energy-optimization-based algorithms rely heavily on blur-kernel priors, and some learning-based methods either adopt a pixel-wise loss function or ignore global structural information. We therefore propose an image deblurring algorithm based on a recurrent conditional generative adversarial network (RCGAN), in which a scale-recurrent generator extracts sequential spatio-temporal features and reconstructs sharp images in a coarse-to-fine scheme. To thoroughly evaluate both the global and the local performance of the generator, we further propose a receptive-field recurrent discriminator. In addition, the discriminator takes blurry images as conditions, which helps it differentiate reconstructed images from real sharp ones. Finally, because gradients vanish when training the generator with the discriminator's output, a progressive loss function is proposed to strengthen the gradients in back-propagation and to take full advantage of discriminative features. Extensive experiments demonstrate the superiority of RCGAN over state-of-the-art algorithms, both qualitatively and quantitatively.

Additional Details:
DOAJ record: doaj-6982d1b2c1a446969138e33b115d1f5c
DOI: 10.1109/ACCESS.2018.2888885
IEEE document number: 8585006
ISSN: 2169-3536
Volume 7 (2019), pages 6186-6193
ORCID (Jing Liu): https://orcid.org/0000-0003-4690-1886
Affiliations: School of Electrical and Information Engineering, Tianjin University, Tianjin, China (all three authors)
Keywords: Image deblurring; conditional generative adversarial network; receptive field recurrent; coarse-to-fine
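The coarse-to-fine scheme the abstract describes can be sketched as a simple image-pyramid recurrence: the blurry input is restored first at the coarsest scale, and each finer scale reuses the upsampled estimate from the scale below. The sketch below is a toy NumPy illustration of that control flow only, not the paper's method: `restore_at_scale` is a hypothetical placeholder (a plain average) standing in for the learned scale-recurrent generator, and the box/nearest resampling is an assumption, not what the authors use.

```python
import numpy as np

def downsample(img, factor):
    # Naive box downsampling by an integer factor (toy stand-in for bicubic).
    h, w = img.shape
    return img[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(img, factor):
    # Nearest-neighbour upsampling.
    return img.repeat(factor, axis=0).repeat(factor, axis=1)

def restore_at_scale(blurry, prev_estimate):
    # Hypothetical placeholder for the learned generator at one scale:
    # here we just average the blurry input at this scale with the
    # upsampled estimate carried up from the coarser scale.
    return 0.5 * (blurry + prev_estimate)

def coarse_to_fine_deblur(blurry, num_scales=3):
    # Build a pyramid of the blurry image, coarsest scale first.
    pyramid = [downsample(blurry, 2 ** s) for s in reversed(range(num_scales))]
    estimate = pyramid[0]  # initialise from the coarsest blurry image
    for s, level in enumerate(pyramid):
        if s > 0:
            estimate = upsample(estimate, 2)  # pass coarse result upward
        estimate = restore_at_scale(level, estimate)
    return estimate

# Example: a 64x64 input yields a full-resolution 64x64 estimate.
output = coarse_to_fine_deblur(np.random.rand(64, 64))
```

Each scale thus sees both its own blurry input and the coarser scale's result, which is the recurrence that lets fine scales refine, rather than redo, the coarse restoration.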