Hyperspectral Remote Sensing Images Deep Feature Extraction Based on Mixed Feature and Convolutional Neural Networks


Bibliographic Details
Main Authors: Jing Liu, Zhe Yang, Yi Liu, Caihong Mu
Format: Article
Language: English
Published: MDPI AG, 2021-07-01
Series: Remote Sensing
Online Access: https://www.mdpi.com/2072-4292/13/13/2599
Description
Summary: To achieve effective deep fusion features for improving the classification accuracy of hyperspectral remote sensing images (HRSIs), a pixel frequency spectrum feature is presented and introduced to convolutional neural networks (CNNs). Firstly, the fast Fourier transform is performed on each spectral pixel to obtain the amplitude spectrum, i.e., the pixel frequency spectrum feature. Then, the obtained pixel frequency spectrum is combined with the spectral pixel to form a mixed feature, i.e., the spectral and frequency spectrum mixed feature (SFMF). Several multi-branch CNNs fed with the pixel frequency spectrum, SFMF, spectral pixel, and spatial features are designed for extracting deep fusion features. A pre-learning strategy, in which basic single-branch CNNs are used to pre-learn the weights of a multi-branch CNN, is also presented for improving the network convergence speed and, to a certain extent, preventing the network from falling into a locally optimal solution. After reducing the dimensionality of the SFMF by principal component analysis (PCA), a 3-dimensional (3-D) CNN is also designed to further extract the joint spatial-SFMF feature. The experimental results on three real HRSIs show that adding the presented frequency spectrum feature into CNNs achieves better recognition results, which in turn shows that the presented multi-branch CNNs can obtain deep fusion features with more discriminant information.
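The feature construction described in the summary (FFT amplitude spectrum per spectral pixel, concatenation into the SFMF, then PCA before the 3-D CNN stage) can be sketched in a few lines of numpy. This is a minimal illustration under assumed array sizes, not the authors' implementation; the cube dimensions and number of retained components are arbitrary, and PCA is done here via SVD rather than a library routine.

```python
import numpy as np

# Hypothetical toy HRSI cube: 8x8 spatial pixels, 32 spectral bands.
rng = np.random.default_rng(0)
cube = rng.random((8, 8, 32))
pixels = cube.reshape(-1, 32)          # one spectral vector per pixel: (64, 32)

# Pixel frequency spectrum feature: amplitude spectrum of the FFT
# of each spectral pixel.
freq = np.abs(np.fft.fft(pixels, axis=1))          # (64, 32)

# SFMF: concatenate each spectral pixel with its frequency spectrum.
sfmf = np.concatenate([pixels, freq], axis=1)      # (64, 64)

# PCA (via SVD on the mean-centered SFMF) to reduce dimensionality
# before extracting spatial patches for a 3-D CNN.
centered = sfmf - sfmf.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
k = 10                                             # retained components (illustrative)
sfmf_pca = centered @ vt[:k].T                     # (64, 10)
```

The reduced SFMF would then be reshaped back to the spatial grid (here 8x8x10) so that local spatial-SFMF patches can be fed to the 3-D CNN.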
ISSN: 2072-4292