ResNet with one-neuron hidden layers is a Universal Approximator
We demonstrate that a very deep ResNet with stacked modules that have one neuron per hidden layer and ReLU activation can uniformly approximate any Lebesgue-integrable function in d dimensions, i.e. any function in ℓ1(Rd). Due to the identity mapping inherent to ResNets, our network has alternating layers...
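The architecture described in the abstract stacks residual modules whose hidden layer contains a single ReLU neuron: each block projects the d-dimensional input to one scalar, applies ReLU, projects back to d dimensions, and adds the identity skip connection. A minimal numpy sketch of this structure is below; the parameter names (u, b, v) and the implementation details are illustrative, not the authors' code.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def residual_block(x, u, b, v):
    # One-neuron hidden layer: project the d-dim input to a scalar,
    # apply ReLU, project back to d dims via v, add the identity skip.
    h = relu(u @ x + b)   # scalar hidden activation
    return x + v * h      # identity mapping plus residual update

def one_neuron_resnet(x, params):
    # Stack T such blocks; dimension alternates between d and 1,
    # matching the "alternating layers" structure in the abstract.
    for u, b, v in params:
        x = residual_block(x, u, b, v)
    return x
```

Because the skip connection carries the full d-dimensional signal, each block only needs to contribute a rank-one correction, which is what makes the one-neuron bottleneck viable for approximation.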
Main Authors: Lin, Hongzhou (Author); Jegelka, Stefanie Sabrina (Author)
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory (Contributor); Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (Contributor)
Format: Article
Language: English
Published: Morgan Kaufmann Publishers, 2021-01-07T14:35:57Z
Online Access: Get fulltext
Similar Items
- Product Matching EfficientNet vs. ResNet: A Comparison
  by: Malmgren, Emil, et al.
  Published: (2021)
- Protein Contact Map Prediction Based on ResNet and DenseNet
  by: Zhong Li, et al.
  Published: (2020-01-01)
- Tire Bubble Defects Detection Using ResNet
  by: WANG, FU-CHING, et al.
  Published: (2019)
- An Improved ResNet Based on the Adjustable Shortcut Connections
  by: Baoqi Li, et al.
  Published: (2018-01-01)
- Are deep ResNets provably better than linear predictors?
  Published: (2021)