Semi-Supervised Deep Transfer Learning-Based on Adversarial Feature Learning for Label Limited SAR Target Recognition


Bibliographic Details
Main Authors: Wei Zhang, Yongfeng Zhu, Qiang Fu
Format: Article
Language: English
Published: IEEE 2019-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/8877822/
Description
Summary: Data-driven convolutional neural networks (CNNs) have achieved great progress in synthetic aperture radar automatic target recognition (SAR-ATR) when trained on large sets of labeled samples. However, insufficient labeled SAR data often leads to over-fitting, causing significant performance degradation. To address this problem, a semi-supervised transfer learning method based on generative adversarial networks (GANs) is presented in this paper. The discriminator of the GAN is redesigned with an encoder and a discriminative layer so that it can learn feature representations of the input data in an unsupervised setting. Instead of training a deep neural network directly on the insufficient labeled data set, we first train a GAN on a large variety of unlabeled samples to learn generic features of SAR images. The learned parameters are then reused to initialize the target network, transferring the generic knowledge to the specific SAR target recognition task. Finally, the target network is fine-tuned on both the labeled and unlabeled training samples with a semi-supervised loss function. We evaluate the proposed method on the MSTAR and OpenSARShip data sets with 80%, 60%, 40%, and 20% of the training set labeled, respectively. The results show that the proposed method achieves up to 23.58% accuracy improvement over a randomly initialized model.
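
The following is a minimal sketch of the pipeline described in the abstract, assuming a PyTorch-style implementation. The encoder architecture, the 64x64 input size, the 10-class head (MSTAR has ten target classes), and the entropy-based unsupervised term are illustrative assumptions, not the authors' exact design or loss function.

```python
# Sketch: GAN discriminator = encoder + discriminative layer, encoder
# transferred to a target recognition network, then semi-supervised fine-tuning.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    """Convolutional feature extractor shared by the GAN discriminator
    (unsupervised pre-training) and the target network (fine-tuning)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, x):
        return self.features(x).flatten(1)  # (batch, 128)


class Discriminator(nn.Module):
    """Encoder plus a discriminative (real/fake) layer, trained adversarially
    on unlabeled SAR chips to learn generic SAR features."""
    def __init__(self):
        super().__init__()
        self.encoder = Encoder()
        self.disc_head = nn.Linear(128, 1)

    def forward(self, x):
        return self.disc_head(self.encoder(x))


class TargetNet(nn.Module):
    """Target recognition network initialized from the pre-trained encoder."""
    def __init__(self, num_classes, pretrained_encoder):
        super().__init__()
        self.encoder = pretrained_encoder      # transferred generic features
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        return self.classifier(self.encoder(x))


def semi_supervised_loss(model, x_labeled, y, x_unlabeled, lam=0.5):
    """Cross-entropy on labeled samples plus an assumed entropy-minimization
    term on unlabeled samples (one common semi-supervised choice)."""
    sup = F.cross_entropy(model(x_labeled), y)
    probs = F.softmax(model(x_unlabeled), dim=1)
    unsup = -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()
    return sup + lam * unsup


if __name__ == "__main__":
    disc = Discriminator()                     # stage 1: GAN pre-training (omitted here)
    model = TargetNet(num_classes=10, pretrained_encoder=disc.encoder)
    x_l, y = torch.randn(8, 1, 64, 64), torch.randint(0, 10, (8,))
    x_u = torch.randn(16, 1, 64, 64)
    print(float(semi_supervised_loss(model, x_l, y, x_u)))
```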
ISSN:2169-3536