Improving the Robustness of Neural Networks Using K-Support Norm Based Adversarial Training


Bibliographic Details
Main Authors: Sheikh Waqas Akhtar, Saad Rehman, Mahmood Akhtar, Muazzam A. Khan, Farhan Riaz, Qaiser Chaudry, Rupert Young
Format: Article
Language: English
Published: IEEE 2016-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/7795200/
Description
Summary: It is important for any classification or recognition system that claims near-human or better-than-human performance to be immune to small perturbations in its input data. Researchers have found that neural networks are not robust to such perturbations and can be fooled into persistent misclassification by the addition of a particular class of noise to the test data. This so-called adversarial noise severely degrades the performance of neural networks that otherwise perform well on unperturbed data. It has recently been proposed that neural networks can be made robust against adversarial noise by training them on data corrupted with adversarial noise itself. Following this approach, in this paper we propose a new mechanism for generating a powerful adversarial noise model, based on the K-support norm, for training neural networks. We tested our approach on two benchmark datasets, MNIST and STL-10, using multi-layer perceptrons and convolutional neural networks. Experimental results demonstrate that neural networks trained with the proposed technique show a significant improvement in robustness compared with state-of-the-art techniques.
ISSN: 2169-3536
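
Note: The summary above describes adversarial training, i.e., corrupting training inputs with adversarial noise and training on the result. Below is a minimal PyTorch sketch of that general idea, not a reproduction of the paper's method. The paper's exact K-support-norm noise generator is not available here; as a rough stand-in, the perturbation keeps only the k largest-magnitude gradient components and is rescaled to an L2 budget. The function names and the values of k and eps are illustrative assumptions.

import torch
import torch.nn.functional as F

def sparse_l2_perturbation(model, x, y, k=50, eps=1.5):
    """Hypothetical helper: build a perturbation supported on the k
    largest-magnitude gradient entries per example, rescaled to L2
    norm eps. A crude stand-in for k-support-norm-bounded noise."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    flat = grad.view(grad.size(0), -1)
    # Keep only the k largest-magnitude components (the sparsity side).
    topk = flat.abs().topk(k, dim=1).indices
    mask = torch.zeros_like(flat).scatter_(1, topk, 1.0)
    delta = flat * mask
    # Rescale each example's perturbation to the L2 budget (the L2 side).
    delta = eps * delta / (delta.norm(dim=1, keepdim=True) + 1e-12)
    return delta.view_as(x).detach()

def adversarial_training_step(model, optimizer, x, y, k=50, eps=1.5):
    """One step of adversarial training on a 50/50 mix of clean and
    perturbed examples (the mixing ratio is an assumption)."""
    delta = sparse_l2_perturbation(model, x, y, k=k, eps=eps)
    x_adv = (x + delta).clamp(0.0, 1.0)  # keep pixels in a valid range
    optimizer.zero_grad()
    loss = 0.5 * F.cross_entropy(model(x), y) \
         + 0.5 * F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

The two stages mirror the two sides of the k-support norm: the top-k masking supplies the sparsity (L0) side and the rescaling supplies the Euclidean (L2) side. The k-support-norm unit ball is the convex hull of all vectors with at most k nonzero entries and L2 norm at most one, so this sketch should be read as a crude approximation of projecting onto that ball.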