Towards deep learning models resistant to adversarial attacks

© International Conference on Learning Representations (ICLR) 2018 - Conference Track Proceedings. All rights reserved. Recent work has demonstrated that neural networks are vulnerable to adversarial examples, i.e., inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. To address...
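For context on the abstract above: the paper's central adversary is projected gradient descent (PGD), which repeatedly takes signed-gradient ascent steps on the loss and projects the perturbed input back into an ℓ∞-ball around the original. The sketch below is a minimal illustration of that attack on a toy logistic-regression model; the model, parameter values, and the name `pgd_attack` are illustrative assumptions, not details from the record or the paper.

```python
# Minimal sketch of an l_inf PGD attack (assumed toy setup, not the paper's code).
import numpy as np

def pgd_attack(x, y, w, b, eps=0.1, alpha=0.02, steps=10):
    """Return an adversarial example within an l_inf ball of radius eps around x."""
    # Random start inside the eps-ball, as in multi-restart PGD.
    x_adv = x + np.random.uniform(-eps, eps, size=x.shape)
    for _ in range(steps):
        # Gradient of the logistic loss log(1 + exp(-y * (w.x + b))) w.r.t. the
        # input x, for a label y in {-1, +1}.
        margin = y * (np.dot(w, x_adv) + b)
        grad = -y * w / (1.0 + np.exp(margin))
        # Signed-gradient ascent step on the loss, then projection back onto
        # the eps-ball around the original input x.
        x_adv = x_adv + alpha * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w, b = rng.normal(size=5), 0.0
    x, y = rng.normal(size=5), 1
    x_adv = pgd_attack(x, y, w, b)
    print("max perturbation:", np.abs(x_adv - x).max())  # stays within eps
```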


Bibliographic Details
Main Authors: Madry, A (Author), Makelov, A (Author), Schmidt, L (Author), Tsipras, D (Author), Vladu, A (Author)
Format: Article
Language: English
Published: 2018.
Subjects:
Online Access: Get fulltext