Towards Deep Learning Models Resistant to Adversarial Attacks
© International Conference on Learning Representations, ICLR 2018 - Conference Track Proceedings. All rights reserved. Recent work has demonstrated that neural networks are vulnerable to adversarial examples, i.e., inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. To address...
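The adversarial examples the abstract describes can be sketched with the fast gradient sign method on a toy model. Everything below (the linear classifier, the hinge-style loss, the value of epsilon) is an illustrative assumption, not taken from the paper itself:

```python
import numpy as np

# Toy linear classifier: score = w . x; predict class +1 if score > 0.
# All values here are illustrative, not from the paper.
rng = np.random.default_rng(0)
w = rng.normal(size=8)
x = rng.normal(size=8)
y = 1.0 if w @ x > 0 else -1.0  # label the point by the model's own prediction

# Hinge-style loss L = max(0, 1 - y * (w . x)); its gradient with
# respect to the input x is -y * w when the margin is not yet met.
grad_x = -y * w

# FGSM-style step: nudge each coordinate by eps in the direction that
# increases the loss, keeping the perturbation small in L-infinity norm,
# so x_adv remains close to the natural input x.
eps = 0.5
x_adv = x + eps * np.sign(grad_x)

print("clean margin:", float(y * (w @ x)))
print("adv margin:  ", float(y * (w @ x_adv)))
```

The perturbation is bounded coordinate-wise by eps, yet it always shrinks the classification margin, which is the mechanism behind "almost indistinguishable inputs that are classified incorrectly."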
Format: Article
Language: English
Published: 2021-11-05T14:56:05Z