Convolutional Rank Filters in Deep Learning
Deep neural nets rely mainly on convolutions to generate feature maps and on transposed convolutions to create images. Rank filters are already critical components of neural nets in the guise of max-pooling, rank-pooling, and max-unpooling layers. We propose a framework that generalizes them, and we apply the novel layers successfully in convolution and deconvolution while combining them with linear convolutional feature maps. We call this class of layers rank filters. We explore their robustness and their training and testing performance under different types of noise. We provide an analysis of their proper weight initialization, and we explore different architectures to discover where and when rank filters can be advantageous. We also design transposed versions of the non-linear filters that do not generate artifacts. We propose stochastic algorithms that sample sparse random real weights using the Gumbel max-trick. We compare the novel architectures with the baseline.
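The two core ideas named in the abstract can be sketched roughly as follows: a rank filter keeps the value at a chosen sorted position in each sliding window (so max-pooling is the special case of the top rank), and the Gumbel max-trick turns categorical sampling into an argmax over noisy logits. This is a minimal NumPy illustration under assumed conventions (stride 1, no padding, hypothetical function names), not the thesis's implementation:

```python
import numpy as np

def rank_filter_2d(x, k, rank):
    """Slide a k x k window over x (stride 1, no padding) and keep the
    value at sorted position `rank` in each window. rank = k*k - 1
    recovers a max filter, rank = 0 a min filter, the middle a median."""
    H, W = x.shape
    out = np.empty((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sort(x[i:i + k, j:j + k], axis=None)[rank]
    return out

def gumbel_max_sample(logits, rng):
    """Gumbel max-trick: argmax(logits + Gumbel noise) is an exact sample
    from Categorical(softmax(logits)). One way such a trick can drive the
    sampling of sparse random weight positions, as the abstract describes."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))
    return int(np.argmax(logits + g))
```

For example, `rank_filter_2d(x, 2, 3)` on a 4x4 input reproduces 2x2 stride-1 max-pooling, while `rank_filter_2d(x, 2, 0)` gives the corresponding min filter.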
Main Author: | Blanchette, Jonathan |
---|---|
Other Authors: | Laganière, Robert |
Format: | Others |
Language: | en |
Published: | 2020 |
Online Access: | http://hdl.handle.net/10393/41120 http://dx.doi.org/10.20381/ruor-25344 |
id | ndltd-uottawa.ca-oai-ruor.uottawa.ca-10393-41120 |
record_format | oai_dc |
collection | NDLTD |
language | en |
format | Others |
sources | NDLTD |
author | Blanchette, Jonathan |
author2 | Laganière, Robert |
title | Convolutional Rank Filters in Deep Learning |
publishDate | 2020 |
url | http://hdl.handle.net/10393/41120 http://dx.doi.org/10.20381/ruor-25344 |