An Adaptive Multiscale Fusion Network Based on Regional Attention for Remote Sensing Images

Bibliographic Details
Main Authors: Wanzhen Lu, Longxue Liang, Xiaosuo Wu, Xiaoyu Wang, Jiali Cai
Format: Article
Language: English
Published: IEEE 2020-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/9109552/
Description
Summary: As semantic segmentation is increasingly applied to high-resolution remote sensing images, improving segmentation accuracy has become a central research goal in the remote sensing field. An innovative Fully Convolutional Network (FCN) based on regional attention is proposed to improve the performance of semantic segmentation frameworks for remote sensing images. The proposed network follows the encoder-decoder architecture of semantic segmentation and includes three strategies to improve segmentation accuracy. An enhanced GCN module is applied to capture the semantic features of remote sensing images. MGFM is proposed to capture different contexts by sampling at different densities. Furthermore, RAM is offered to assign large weights to high-value information in different regions of the feature map. Our method is assessed on two datasets: the ISPRS Potsdam dataset and the CCF dataset. The results indicate that our model with these strategies outperforms the baseline model (DCED50) in F1, mean IoU, and PA by 10.81%, 19.11%, and 11.36% on the Potsdam dataset and by 29.26%, 27.64%, and 13.57% on the CCF dataset.
ISSN: 2169-3536
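The core idea behind the regional attention described in the summary (assigning larger weights to high-value regions of a feature map) can be sketched as follows. The record does not give the internals of the paper's RAM, so the grid size and the mean-activation scoring used here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def regional_attention(feat, grid=2):
    """Illustrative region-wise attention: split an H x W x C feature map
    into a grid of regions, score each region, and softmax-normalize the
    scores so high-value regions are up-weighted. A sketch only; the
    paper's RAM may score and weight regions differently."""
    H, W, C = feat.shape
    rh, rw = H // grid, W // grid
    scores = np.empty((grid, grid))
    for i in range(grid):
        for j in range(grid):
            region = feat[i * rh:(i + 1) * rh, j * rw:(j + 1) * rw, :]
            scores[i, j] = region.mean()  # stand-in for a learned region score
    # Softmax over the grid of region scores (numerically stabilized).
    w = np.exp(scores - scores.max())
    w /= w.sum()
    # Re-weight each region; the grid*grid factor keeps the overall
    # magnitude comparable to the input.
    out = feat.copy()
    for i in range(grid):
        for j in range(grid):
            out[i * rh:(i + 1) * rh, j * rw:(j + 1) * rw, :] *= w[i, j] * grid * grid
    return out, w
```

In a full encoder-decoder network this weighting would be applied to intermediate feature maps before fusion, so that informative regions contribute more to the final segmentation.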