CEB Improves Model Robustness

Intuitively, one way to make classifiers more robust to their input is to have them depend less sensitively on their input. The Information Bottleneck (IB) tries to learn compressed representations of input that are still predictive. Scaling up IB approaches to large scale image classification tasks has proved difficult. We demonstrate that the Conditional Entropy Bottleneck (CEB) can not only scale up to large scale image classification tasks, but can additionally improve model robustness. CEB is an easy strategy to implement and works in tandem with data augmentation procedures. We report results of a large scale adversarial robustness study on CIFAR-10, as well as the ImageNet-C Common Corruptions Benchmark, ImageNet-A, and PGD attacks.
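
For orientation, the objective behind CEB is compact enough to state alongside the record. A minimal sketch in the notation standard for bottleneck methods, assuming the common parameterization from the CEB literature (the paper itself sweeps a compression hyperparameter): X is the input, Y the label, Z the learned representation, and β ≥ 0 sets the compression strength.

\[
  \mathrm{CEB} \;\equiv\; \min_{Z}\; \beta\, I(X; Z \mid Y) \;-\; I(Y; Z)
\]

Under the usual Markov assumption Z ← X ↔ Y, the residual information factors as I(X;Z|Y) = I(X;Z) − I(Y;Z), so CEB penalizes only the information Z carries about X that is irrelevant to predicting Y, whereas IB penalizes all of I(X;Z).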

Bibliographic Details
Main Authors: Ian Fischer, Alexander A. Alemi (Google Research, Mountain View, CA 94043, USA)
Format: Article
Language: English
Published: MDPI AG, 2020-09-01
Series: Entropy, Vol. 22, Iss. 10, Art. 1081
ISSN: 1099-4300
DOI: 10.3390/e22101081
Subjects: information theory; information bottleneck; machine learning
Online Access: https://www.mdpi.com/1099-4300/22/10/1081