Adversarially Robust Generalization Requires More Data
© 2018 Curran Associates Inc. All rights reserved. Machine learning models are often susceptible to adversarial perturbations of their inputs. Even small perturbations can cause state-of-the-art classifiers with high "standard" accuracy to produce an incorrect prediction with high confidence…
| Main Authors: | Schmidt, Ludwig (Author); Santurkar, Shibani (Author); Tsipras, Dimitris (Author); Talwar, Kunal (Author); Madry, Aleksander (Author) |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | 2021-11-08T18:36:03Z |
| Online Access: | Get fulltext |
Similar Items

- Robustness may be at odds with accuracy
  by: Tsipras, Dimitris, et al.
  Published: (2021)
- How does batch normalization help optimization?
  by: Madry, Aleksander, et al.
  Published: (2021)
- Image synthesis with a single (robust) classifier
  by: Santurkar, Shibani, et al.
  Published: (2021)
- A classification-based study of covariate shift in GAN distributions
  by: Madry, Aleksander, et al.
  Published: (2021)
- A classification-based study of covariate shift in GAN distributions
  by: Madry, Aleksander, et al.
  Published: (2022)