CEB Improves Model Robustness
Intuitively, one way to make classifiers more robust to their input is to have them depend less sensitively on their input. The Information Bottleneck (IB) tries to learn compressed representations of input that are still predictive. Scaling up IB approaches to large scale image classification tasks...
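The abstract describes the IB tradeoff in words: compress the input while staying predictive. As a sketch (standard Tishby-style notation, not quoted from this article; the placement of the tradeoff weight $\beta$ varies by convention), the IB objective and the conditional variant that gives CEB its name can be written as:

```latex
% Information Bottleneck: learn a representation Z of input X that is
% compressed (small I(X;Z)) yet predictive of the label Y (large I(Z;Y)).
\min_{p(z \mid x)} \; I(X;Z) \;-\; \beta\, I(Z;Y)

% Conditional Entropy Bottleneck (CEB): penalize only the information in Z
% about X that is not already captured by Y.
\min_{p(z \mid x)} \; I(X;Z \mid Y) \;-\; \beta\, I(Z;Y)
```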
| Main Authors: | Ian Fischer, Alexander A. Alemi |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2020-09-01 |
| Series: | Entropy |
| Online Access: | https://www.mdpi.com/1099-4300/22/10/1081 |
Similar Items
- The Conditional Entropy Bottleneck
  by: Ian Fischer
  Published: (2020-09-01)
- Genetic Algorithms to Maximize the Relevant Mutual Information in Communication Receivers
  by: Jan Lewandowsky, et al.
  Published: (2021-06-01)
- A Comparison of Variational Bounds for the Information Bottleneck Functional
  by: Bernhard C. Geiger, et al.
  Published: (2020-10-01)
- Aggregated Learning: An Information Theoretic Framework to Learning with Neural Networks
  by: Soflaei Shahrbabak, Masoumeh
  Published: (2020)
- Embo: a Python package for empirical data analysis using the Information Bottleneck
  by: Eugenio Piasini, et al.
  Published: (2021-05-01)