Deep vs. shallow networks: An approximation theory perspective

© 2016 World Scientific Publishing Company. The paper briefly reviews several recent results on hierarchical architectures for learning from examples that may formally explain the conditions under which Deep Convolutional Neural Networks perform much better in function approximation problems than shallow, one-hidden-layer architectures. The paper announces new results for a non-smooth activation function, the ReLU function, used in present-day neural networks, as well as for Gaussian networks. We propose a new definition of relative dimension to encapsulate different notions of sparsity of a function class that can possibly be exploited by deep networks, but not by shallow ones, to drastically reduce the complexity required for approximation and learning.

Bibliographic Details
Main Authors: Mhaskar, HN (Author), Poggio, T (Author)
Format: Article
Language: English
Published: World Scientific Pub Co Pte Lt, 2021-10-27T20:06:08Z.
Subjects:
Online Access: Get fulltext
LEADER 01267 am a22001693u 4500
001 134674
042 |a dc 
100 1 0 |a Mhaskar, HN  |e author 
700 1 0 |a Poggio, T  |e author 
245 0 0 |a Deep vs. shallow networks: An approximation theory perspective 
260 |b World Scientific Pub Co Pte Lt,   |c 2021-10-27T20:06:08Z. 
856 |z Get fulltext  |u https://hdl.handle.net/1721.1/134674 
520 |a © 2016 World Scientific Publishing Company. The paper briefly reviews several recent results on hierarchical architectures for learning from examples that may formally explain the conditions under which Deep Convolutional Neural Networks perform much better in function approximation problems than shallow, one-hidden-layer architectures. The paper announces new results for a non-smooth activation function, the ReLU function, used in present-day neural networks, as well as for Gaussian networks. We propose a new definition of relative dimension to encapsulate different notions of sparsity of a function class that can possibly be exploited by deep networks, but not by shallow ones, to drastically reduce the complexity required for approximation and learning. 
546 |a en 
655 7 |a Article 
773 |t 10.1142/S0219530516400042 
773 |t Analysis and Applications
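
Illustrative note (not part of the record, and not code from the paper): the abstract's claim about sparsity exploited by deep but not shallow networks concerns compositional target functions. The Python sketch below is a minimal, hypothetical illustration under assumed choices: a particular bivariate constituent h, a depth-3 binary tree over 8 variables, and the rough complexity heuristic that a shallow one-hidden-layer network needs on the order of eps**(-d/m) units while a deep network matching the composition graph needs on the order of (d-1)*eps**(-2/m), where m is the smoothness of the constituents. The constants and exact rates are in the paper; everything else here is an assumption chosen for illustration.

    import numpy as np

    # One smooth bivariate constituent function (illustrative choice).
    def h(a, b):
        return np.tanh(a + b) * np.cos(a - b)

    # A compositional function of 8 variables arranged as a binary tree:
    # f(x) = h(h(h(x1,x2), h(x3,x4)), h(h(x5,x6), h(x7,x8)))
    def f_compositional(x):          # x has shape (..., 8)
        l1 = [h(x[..., 2*i], x[..., 2*i + 1]) for i in range(4)]
        l2 = [h(l1[0], l1[1]), h(l1[2], l1[3])]
        return h(l2[0], l2[1])

    x = np.random.rand(5, 8)         # five sample points in [0,1]^8
    print(f_compositional(x))        # the target a network would approximate

    # Rough unit counts to reach accuracy eps (orders of magnitude only):
    #   shallow net, generic d-variate target:  ~ eps**(-d/m)
    #   deep net mirroring the binary tree:     ~ (d-1) * eps**(-2/m)
    d, m, eps = 8, 2, 1e-2           # dimension, smoothness, target accuracy
    nodes = d - 1                    # bivariate constituents in the tree
    shallow_units = eps ** (-d / m)
    deep_units = nodes * eps ** (-2 / m)
    print(f"shallow ~ {shallow_units:.0f} units vs deep ~ {deep_units:.0f} units")

With these assumed numbers the shallow estimate is about 10**8 units while the deep, structure-matching estimate is about 700, which is the qualitative gap the abstract refers to as complexity reduction through compositional sparsity.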