Unsupervised Feature Learning With Winner-Takes-All Based STDP

We present a novel strategy for unsupervised feature learning in image applications, inspired by the Spike-Timing-Dependent Plasticity (STDP) biological learning rule. We show an equivalence between rank-order-coding Leaky Integrate-and-Fire neurons and ReLU artificial neurons when applied to non-temporal data. We apply this to images using rank-order coding, which allows us to perform a full network simulation with a single feed-forward pass on GPU hardware. Next, we introduce a binary STDP learning rule compatible with training on batches of images. Two mechanisms to stabilize the training are also presented: a Winner-Takes-All (WTA) framework which selects the most relevant patches to learn from along the spatial dimensions, and a simple feature-wise normalization as a homeostatic process. This learning process allows us to train multi-layer architectures of convolutional sparse features. We apply our method to extract features from the MNIST, ETH80, CIFAR-10, and STL-10 datasets and show that these features are relevant for classification. Finally, we compare these results with several other state-of-the-art unsupervised learning methods.
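The two stabilization mechanisms described in the abstract (spatial Winner-Takes-All patch selection and feature-wise normalization as a homeostatic process) can be sketched around a binary STDP batch update. The following is a minimal NumPy illustration, not the authors' implementation: the shapes, the 0.5 binarization threshold, the learning rate, and the `binary_stdp_step` helper are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_features, patch_dim = 8, 25            # e.g., 8 convolutional features, 5x5 patches
W = rng.random((n_features, patch_dim))  # non-negative feature weights

def binary_stdp_step(W, patches, lr=0.05):
    """One batch update: each feature learns only from the patch it
    responds to most strongly (spatial WTA); active inputs are
    potentiated and inactive ones depressed (binary STDP)."""
    responses = patches @ W.T             # (n_patches, n_features)
    winners = responses.argmax(axis=0)    # winning patch index per feature
    for f, p in enumerate(winners):
        active = patches[p] > 0.5         # binarized pre-synaptic activity
        W[f] += lr * np.where(active, 1.0, -1.0)
    np.clip(W, 0.0, 1.0, out=W)
    # Feature-wise normalization as a simple homeostatic process:
    # each feature's weights are rescaled to a fixed total mass.
    W /= W.sum(axis=1, keepdims=True) + 1e-8
    return W

patches = rng.random((100, patch_dim))    # a batch of flattened image patches
W = binary_stdp_step(W, patches)
```

Because every feature updates from exactly one winning patch per batch and is then renormalized, no single feature can grow without bound or dominate the others, which is the stabilizing role the abstract assigns to the WTA and homeostasis mechanisms.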

Bibliographic Details
Main Authors: Paul Ferré, Franck Mamalet, Simon J. Thorpe
Format: Article
Language: English
Published: Frontiers Media S.A. 2018-04-01
Series: Frontiers in Computational Neuroscience
Subjects: Spike-Timing-Dependent Plasticity, neural network, unsupervised learning, winner-takes-all, vision
Online Access: http://journal.frontiersin.org/article/10.3389/fncom.2018.00024/full
id doaj-b1281bb920994a5ba48be02a81eee5b7
record_format Article
issn 1662-5188
doi 10.3389/fncom.2018.00024
affiliations Paul Ferré: Centre National de la Recherche Scientifique, UMR-5549, Toulouse, France; Brainchip SAS, Balma, France
Franck Mamalet: Brainchip SAS, Balma, France
Simon J. Thorpe: Centre National de la Recherche Scientifique, UMR-5549, Toulouse, France