Building Extraction in Very High Resolution Imagery by Dense-Attention Networks


Bibliographic Details
Main Authors: Hui Yang, Penghai Wu, Xuedong Yao, Yanlan Wu, Biao Wang, Yongyang Xu
Format: Article
Language: English
Published: MDPI AG 2018-11-01
Series: Remote Sensing
Subjects:
Online Access: https://www.mdpi.com/2072-4292/10/11/1768
Description
Summary: Building extraction from very high resolution (VHR) imagery plays an important role in urban planning, disaster management, navigation, updating geographic databases, and several other geospatial applications. Compared with traditional building extraction approaches, deep learning networks have recently shown outstanding performance in this task by using both high-level and low-level feature maps. However, it is difficult for present deep learning networks to combine features from different levels rationally. To tackle this problem, a novel network based on DenseNets and the attention mechanism, called the dense-attention network (DAN), was proposed. The DAN consists of an encoder part and a decoder part, composed of lightweight DenseNets and a spatial attention fusion module, respectively. The proposed encoder–decoder architecture strengthens feature propagation and effectively uses higher-level feature information to suppress low-level features and noise. Experimental results on the public International Society for Photogrammetry and Remote Sensing (ISPRS) datasets, using only red–green–blue (RGB) images, demonstrated that the proposed DAN achieved higher scores (96.16% overall accuracy (<i>OA</i>), 92.56% <i>F</i>1 score, 90.56% mean intersection over union (<i>MIOU</i>)), less training and response time, and a higher quality value than other deep learning methods.
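The spatial attention fusion described in the summary can be illustrated with a minimal sketch: a spatial attention map derived from the higher-level (decoder) features gates the lower-level (encoder) features before fusion, so that low-level responses and noise are suppressed where high-level evidence is weak. This is an assumed, simplified NumPy rendering of the general idea, not the authors' exact module; the function name and shapes are illustrative only.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention_fusion(low_feat, high_feat):
    """Illustrative sketch (not the paper's exact module).

    low_feat, high_feat: arrays of shape (C, H, W).
    A single-channel spatial attention map is derived from the
    high-level features and used to gate the low-level features,
    which are then fused with the high-level features by addition.
    """
    # Collapse high-level channels to one spatial map in (0, 1)
    attn = sigmoid(high_feat.mean(axis=0, keepdims=True))  # (1, H, W)
    # Suppress low-level responses where high-level evidence is weak
    gated_low = low_feat * attn                            # broadcast over C
    return gated_low + high_feat

low = np.random.rand(8, 4, 4)   # low-level encoder features
high = np.random.rand(8, 4, 4)  # high-level decoder features
fused = spatial_attention_fusion(low, high)
print(fused.shape)  # (8, 4, 4)
```

In a trained network the attention map would come from learned convolutions rather than a channel mean, but the gating-then-fusion pattern is the same.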
ISSN:2072-4292