Joint Image Dehazing and Super-Resolution: Closed Shared Source Residual Attention Fusion Network

Bibliographic Details
Main Authors: Zhuoyuan Yang, Da Pan, Ping Shi
Format: Article
Language: English
Published: IEEE 2021-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/9497083/
Description
Summary: In the real world, floating particles such as smoke and dust in the atmosphere make images taken by cameras susceptible to varying degrees of blurring, low contrast, color distortion, and visual degradation, all of which are amplified when the image resolution is enlarged. Jointly addressing the image dehazing and image super-resolution tasks has therefore become a new trend. To generate sharp high-resolution images from low-resolution images with severe haze, a common approach is to connect a dehazing network and a super-resolution network in series. However, this two-stage joint approach easily introduces blurring artifacts and is time-consuming. In addition, although a few one-stage methods exist, their training procedures are relatively complicated and some restored texture details remain blurry. In this paper, we focus on exploring a one-stage joint model and propose a back-projection network based on shared source attention fusion (BPSAF), which forms a closed framework through a back-projection mechanism. BPSAF can remove non-uniform haze and increase the resolution simultaneously. Specifically, a shared source attention fusion (SAF) module is presented to fuse the high-frequency information of different-level features more effectively through shared source skip connections, filtering abundant thin haze and low-frequency information out of the merged images. To enhance the definition of restored images, a feedback error correction module based on an error attention mechanism (FEC-EA) is designed to further correct distorted texture details by eliminating the feedback error between the initial super-resolution result and the input hazy image. Experimental results demonstrate that our back-projection framework is superior to existing methods in terms of quantitative metrics and visual quality.
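
The following is a minimal, hypothetical PyTorch-style sketch of the pipeline described in the summary, assuming illustrative module names, channel sizes, and interfaces (it is not the authors' released BPSAF code): shallow and deep features are fused through an attention-gated shared source skip connection, an initial super-resolution estimate is produced, and a back-projection-style feedback error between that estimate and the hazy low-resolution input is used to correct the final output.

import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedSourceAttentionFusion(nn.Module):
    # Illustrative stand-in for the SAF idea: a shared-source skip connection
    # from the shallow features is gated by channel attention computed from the
    # deep features, then merged with those deep features.
    def __init__(self, channels: int = 64):
        super().__init__()
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.merge = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, shallow, deep):
        gated = shallow * self.attn(deep)  # shared-source skip, reweighted by attention
        return self.merge(torch.cat([gated, deep], dim=1))


class JointDehazeSR(nn.Module):
    # One-stage joint model: extract features from the hazy LR image, fuse them,
    # upsample to an initial SR estimate, then apply a feedback error correction
    # in the spirit of back-projection.
    def __init__(self, channels: int = 64, scale: int = 2):
        super().__init__()
        self.scale = scale
        self.head = nn.Conv2d(3, channels, kernel_size=3, padding=1)
        self.body = nn.Sequential(*[
            nn.Sequential(nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                          nn.ReLU(inplace=True))
            for _ in range(4)
        ])
        self.fuse = SharedSourceAttentionFusion(channels)
        self.up = nn.Sequential(
            nn.Conv2d(channels, 3 * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),
        )
        # Simple convolution as a stand-in for the error-attention (FEC-EA-like) branch.
        self.correct = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, hazy_lr):
        shallow = self.head(hazy_lr)
        deep = self.body(shallow)
        sr_init = self.up(self.fuse(shallow, deep))
        # Feedback error: project the initial SR result back to the LR grid,
        # compare it with the hazy input, and use the upsampled error to correct the output.
        lr_back = F.interpolate(sr_init, scale_factor=1 / self.scale,
                                mode="bilinear", align_corners=False)
        error_up = F.interpolate(hazy_lr - lr_back, scale_factor=self.scale,
                                 mode="bilinear", align_corners=False)
        return sr_init + self.correct(error_up)


if __name__ == "__main__":
    model = JointDehazeSR(scale=2)
    x = torch.rand(1, 3, 64, 64)      # hazy low-resolution input
    print(model(x).shape)             # expected: torch.Size([1, 3, 128, 128])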
ISSN:2169-3536