Model-Based Deep Network for Single Image Deraining

Bibliographic Details
Main Authors: Pengyue Li, Jiandong Tian, Yandong Tang, Guolin Wang, Chengdong Wu
Format: Article
Language: English
Published: IEEE 2020-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/8955865/
Description
Summary: Current learning-based single image deraining methods usually design their deraining networks on a simplified linear additive rain model, which may not only produce unrealistic synthetic rainy images for both training and testing datasets, but also adversely affect the applicability and generality of the corresponding networks. In this paper, we use the screen blend model of Photoshop as the nonlinear rainy image decomposition model. Based on this model, we design a novel channel attention U-DenseNet for rain detection and a residual dense block for rain removal. The detection sub-network not only adjusts channel-wise feature responses with our novel channel attention block to focus on learning the rain map, but also combines context information with precise localization through the U-DenseNet to improve pixel-wise estimation accuracy. After rain detection, we use the nonlinear model to obtain a coarse rain-free image, and then introduce a deraining refinement sub-network consisting of residual dense blocks to obtain a fine rain-free image. To train our network, we apply the nonlinear rain model to synthesize a benchmark dataset called RITD, which contains 3200 triplets of rainy images, rain maps, and clean background images. Extensive quantitative and qualitative experimental results show that our method outperforms several state-of-the-art methods on both synthetic and real images.
ISSN: 2169-3536
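
The key model-based component described in the summary, the Photoshop screen blend composition, can be written as O = 1 - (1 - B)(1 - R) for a clean background B, rain map R, and rainy image O, all with values in [0, 1]. The sketch below shows this composition and its inversion, which yields a coarse rain-free image from an estimated rain map before the refinement sub-network. It is a minimal NumPy illustration; the function names and the eps guard are assumptions for this sketch, not the authors' implementation.

import numpy as np

def screen_blend(background, rain_map):
    # Photoshop screen blend: O = 1 - (1 - B) * (1 - R), inputs in [0, 1].
    return 1.0 - (1.0 - background) * (1.0 - rain_map)

def coarse_derain(rainy, rain_map, eps=1e-6):
    # Invert the screen blend given an estimated rain map:
    # B = 1 - (1 - O) / (1 - R); eps (assumed here) guards against division by zero.
    background = 1.0 - (1.0 - rainy) / np.clip(1.0 - rain_map, eps, None)
    return np.clip(background, 0.0, 1.0)

Because the blend is nonlinear in B and R, synthetic data generated this way differs from the common linear additive model O = B + R, which is the motivation the abstract gives for the RITD dataset.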