Convolutional Rank Filters in Deep Learning


Bibliographic Details
Main Author: Blanchette, Jonathan
Other Authors: Laganière, Robert
Format: Others
Language: en
Published: 2020
Online Access: http://hdl.handle.net/10393/41120
http://dx.doi.org/10.20381/ruor-25344
Description
Summary: Deep neural nets mainly rely on convolutions to generate feature maps and on transposed convolutions to create images. Rank filters are already critical components of neural nets in the guise of max-pooling, rank-pooling, and max-unpooling layers. We propose a framework that generalizes them, and we successfully apply the novel layers in convolution and deconvolution while combining them with linear convolutional feature maps. We call this class of layers rank filters. We explore their robustness and their training and testing performance under different types of noise. We provide an analysis for their proper weight initialization, and we explore different architectures to discover where and when rank filters could be advantageous. We also design transposed versions of the non-linear filter that do not generate artifacts. We propose the use of stochastic algorithms to sample sparse random real weights using the Gumbel max-trick. We compare the novel architectures with the baseline.
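The abstract's two key ideas can be illustrated with a minimal sketch: a rank filter generalizes max-pooling by selecting an arbitrary order statistic from each window (rank 0 recovers ordinary max-pooling), and the Gumbel max-trick draws a categorical sample by perturbing logits with Gumbel noise and taking an argmax. The functions below are hypothetical illustrations of these two standard techniques, not the thesis implementation.

```python
import numpy as np

def rank_pool2d(x, window=2, rank=0):
    """Rank filter over non-overlapping windows of a 2-D feature map.

    rank=0 selects the maximum (ordinary max-pooling); higher ranks
    select the corresponding order statistic (rank=1 -> second largest).
    """
    h, w = x.shape
    h2, w2 = h // window, w // window
    # Gather each window's elements along the last axis, then sort
    # and pick the (rank+1)-th largest value per window.
    patches = (x[:h2 * window, :w2 * window]
               .reshape(h2, window, w2, window)
               .transpose(0, 2, 1, 3)
               .reshape(h2, w2, window * window))
    return np.sort(patches, axis=-1)[..., -(rank + 1)]

def gumbel_max_sample(logits, rng):
    """Draw an index distributed as softmax(logits) via the Gumbel max-trick."""
    g = rng.gumbel(size=logits.shape)  # i.i.d. Gumbel(0, 1) noise
    return int(np.argmax(logits + g))

x = np.arange(16.0).reshape(4, 4)
print(rank_pool2d(x, window=2, rank=0))  # max-pooling as a special case
print(rank_pool2d(x, window=2, rank=1))  # second-largest per window
idx = gumbel_max_sample(np.array([0.0, 1.0, 2.0]), np.random.default_rng(0))
print(idx)
```

Selecting ranks other than the maximum gives a pooling family that is less sensitive to isolated outlier activations, which connects to the abstract's interest in robustness under noise.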