Universal Function Approximation by Deep Neural Nets with Bounded Width and ReLU Activations
This article concerns the expressive power of depth in neural nets with ReLU activations and bounded width. We are particularly interested in the following questions: What is the minimal width w_min...
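As a rough illustration of the regime the abstract describes (a ReLU net whose hidden width stays fixed while depth is allowed to grow), the following is a minimal sketch, not code from the article: it fits a narrow, deep ReLU multilayer perceptron to a continuous target on the unit interval. The width of 3, depth of 8, sine target, and use of PyTorch are all illustrative assumptions, not values or methods taken from the paper.

```python
# Minimal sketch (illustrative only, not the paper's construction):
# fit a deep but narrow ReLU net to a continuous function on [0, 1].
import torch
import torch.nn as nn

torch.manual_seed(0)

def narrow_relu_net(width: int = 3, depth: int = 8) -> nn.Sequential:
    """Deep ReLU MLP with 1-d input/output and a fixed hidden width."""
    layers = [nn.Linear(1, width), nn.ReLU()]
    for _ in range(depth - 1):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, 1))
    return nn.Sequential(*layers)

# Continuous target to approximate on the unit interval (arbitrary choice).
target = lambda x: torch.sin(2 * torch.pi * x)

x = torch.rand(2048, 1)          # training inputs in [0, 1)
y = target(x)

net = narrow_relu_net()
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x), y)
    loss.backward()
    opt.step()

# Report the worst-case (sup-norm) error on a dense grid, the notion of
# approximation used in universal approximation statements.
grid = torch.linspace(0, 1, 1001).unsqueeze(1)
with torch.no_grad():
    sup_err = (net(grid) - target(grid)).abs().max().item()
print(f"sup-norm error on [0,1]: {sup_err:.3f}")
```

The printed error depends on the random seed and optimizer settings; the point is only to show the bounded-width, variable-depth setup, not to reproduce any bound from the article.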
| Main Author: | Boris Hanin |
| --- | --- |
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2019-10-01 |
| Series: | Mathematics |
| Subjects: | |
| Online Access: | https://www.mdpi.com/2227-7390/7/10/992 |
Similar Items

- ReLU Network with Bounded Width Is a Universal Approximator in View of an Approximate Identity
  by: Sunghwan Moon
  Published: (2021-01-01)
- A new scheme for training ReLU-based multi-layer feedforward neural networks
  by: Wang, Hao
  Published: (2017)
- Quantum ReLU activation for Convolutional Neural Networks to improve diagnosis of Parkinson’s disease and COVID-19
  by: Parisi, Luca, et al.
  Published: (2021)
- PARAMETRIC FLATTEN-T SWISH: AN ADAPTIVE NONLINEAR ACTIVATION FUNCTION FOR DEEP LEARNING
  by: Hock Hung Chieng, et al.
  Published: (2020-11-01)
- Flatten-T Swish: a thresholded ReLU-Swish-like activation function for deep learning
  by: Hock Hung Chieng, et al.
  Published: (2018-07-01)