ResNet with one-neuron hidden layers is a Universal Approximator

We demonstrate that a very deep ResNet with stacked modules that have one neuron per hidden layer and ReLU activation functions can uniformly approximate any Lebesgue integrable function in d dimensions, i.e. ℓ1(Rd). Due to the identity mapping inherent to ResNets, our network has alternating layers of dimension one and d. This stands in sharp contrast to fully connected networks, which are not universal approximators if their width is the input dimension d [21, 11]. Hence, our result implies an increase in representational power for narrow deep networks by the ResNet architecture.
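The architecture described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' code: each residual module has a single ReLU hidden neuron, and the identity (skip) connection restores the output to dimension d, so widths alternate between 1 and d as the abstract states. The names `one_neuron_residual_block`, `u`, `b`, and `v` are assumptions for this sketch.

```python
import numpy as np

def one_neuron_residual_block(x, u, b, v):
    """One residual module with a width-1 hidden layer.

    x: input in R^d; u: input weights in R^d; b: scalar bias;
    v: output weights in R^d. Computes x + v * ReLU(u . x + b).
    """
    hidden = max(0.0, float(u @ x) + b)  # single hidden neuron (width 1)
    return x + v * hidden                # identity mapping back to width d

# Stacking many such modules gives the very deep network from the paper.
rng = np.random.default_rng(0)
d = 3
x = rng.standard_normal(d)
for _ in range(5):
    u, v = rng.standard_normal(d), rng.standard_normal(d)
    b = float(rng.standard_normal())
    x = one_neuron_residual_block(x, u, b, v)
print(x.shape)  # dimension d is preserved by the skip connection
```

Note how each module only ever computes one scalar activation, yet the skip connection keeps the representation d-dimensional, which is what allows the narrow network to remain a universal approximator.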


Bibliographic Details
Main Authors: Lin, Hongzhou (Author), Jegelka, Stefanie Sabrina (Author)
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory (Contributor), Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (Contributor)
Format: Article
Language: English
Published: Morgan Kaufmann Publishers, 2021-01-07T14:35:57Z.
Online Access: https://hdl.handle.net/1721.1/129326
LEADER 01444 am a22001933u 4500
001 129326
042 |a dc 
100 1 0 |a Lin, Hongzhou  |e author 
100 1 0 |a Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory  |e contributor 
100 1 0 |a Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science  |e contributor 
700 1 0 |a Jegelka, Stefanie Sabrina  |e author 
245 0 0 |a ResNet with one-neuron hidden layers is a Universal Approximator 
260 |b Morgan Kaufmann Publishers,   |c 2021-01-07T14:35:57Z. 
856 |z Get fulltext  |u https://hdl.handle.net/1721.1/129326 
520 |a We demonstrate that a very deep ResNet with stacked modules that have one neuron per hidden layer and ReLU activation functions can uniformly approximate any Lebesgue integrable function in d dimensions, i.e. ℓ1(Rd). Due to the identity mapping inherent to ResNets, our network has alternating layers of dimension one and d. This stands in sharp contrast to fully connected networks, which are not universal approximators if their width is the input dimension d [21, 11]. Hence, our result implies an increase in representational power for narrow deep networks by the ResNet architecture. 
520 |a United States. Defense Advanced Research Projects Agency (Grant number YFA17N66001-17-1-4039) 
546 |a en 
655 7 |a Article 
773 |t Advances in Neural Information Processing Systems