Rotation-Invariant Feature Learning for Object Detection in VHR Optical Remote Sensing Images by Double-Net


Bibliographic Details
Main Authors: Zhi Zhang, Ruoqiao Jiang, Shaohui Mei, Shun Zhang, Yifan Zhang
Format: Article
Language: English
Published: IEEE 2020-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/8936929/
Description
Summary: Rotation-invariant feature extraction is crucial for object detection in Very High Resolution (VHR) optical remote sensing images. Although convolutional neural networks (CNNs) are good at extracting translation-invariant features and have been widely applied in computer vision, extracting rotation-invariant features in VHR optical remote sensing images remains a challenging problem for CNNs. In this paper, we present a novel Double-Net that takes sample pairs from the same class as inputs to improve the performance of object detection and classification in VHR optical remote sensing images. Specifically, the proposed Double-Net contains multiple channels of CNNs, in which each channel corresponds to a specific rotation direction and all CNNs share identical weights. Based on the output features of all channels, a multiple instance learning algorithm is employed to extract the final rotation-invariant features. Experimental results on two publicly available benchmark datasets, namely Mnist-rot-12K and NWPU VHR-10, demonstrate that the presented Double-Net outperforms existing approaches in rotation-invariant feature extraction, and that it is especially effective when only a small number of training samples is available.
ISSN:2169-3536
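The abstract describes a multi-channel architecture: each channel applies the same shared-weight CNN to a differently rotated copy of the input, and a multiple-instance-learning step pools the channel outputs into a single rotation-invariant feature. The sketch below is a minimal NumPy illustration of that idea, not the authors' implementation: `shared_cnn_features` is a hypothetical stand-in for one shared-weight CNN channel (a single convolution, ReLU, and global average pooling), the rotations are restricted to multiples of 90° so they are exact on a grid, and max pooling across channels stands in for the MIL aggregation.

```python
import numpy as np

def conv2d_valid(x, w):
    """Plain valid-mode 2-D cross-correlation (educational, not fast)."""
    H, W = x.shape
    kh, kw = w.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def shared_cnn_features(x, w):
    """Stand-in for one shared-weight CNN channel:
    convolution -> ReLU -> global average pooling -> scalar feature."""
    return np.maximum(conv2d_valid(x, w), 0.0).mean()

def double_net_features(img, w, rotations=4):
    """Run the same weights over several rotated copies of the input
    and max-pool across channels (a simple MIL-style aggregation)."""
    channel_feats = [shared_cnn_features(np.rot90(img, k), w)
                     for k in range(rotations)]
    return np.max(np.stack(channel_feats), axis=0)
```

Because the four channels cover all multiples of 90°, rotating the input by 90° only permutes which channel sees which view, so the max-pooled feature is unchanged; that permutation argument is exactly what makes the pooled output rotation-invariant in this toy setting.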