Spike-Train Level Direct Feedback Alignment: Sidestepping Backpropagation for On-Chip Training of Spiking Neural Nets


Bibliographic Details
Main Authors: Jeongjun Lee, Renqian Zhang, Wenrui Zhang, Yu Liu, Peng Li
Format: Article
Language: English
Published: Frontiers Media S.A., 2020-03-01
Series: Frontiers in Neuroscience
Subjects: spiking neural networks, backpropagation, on-chip training, hardware neural processor, FPGA
Online Access: https://www.frontiersin.org/article/10.3389/fnins.2020.00143/full
id doaj-988e8c9c7cd241ce81cadd86b7b5fcb5
record_format Article
Volume 14. DOI: 10.3389/fnins.2020.00143
Author affiliations:
Jeongjun Lee: Department of Electrical and Computer Engineering, University of California, Santa Barbara, Santa Barbara, CA, United States
Renqian Zhang: Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX, United States
Wenrui Zhang: Department of Electrical and Computer Engineering, University of California, Santa Barbara, Santa Barbara, CA, United States
Yu Liu: Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX, United States
Peng Li: Department of Electrical and Computer Engineering, University of California, Santa Barbara, Santa Barbara, CA, United States
collection DOAJ
language English
format Article
sources DOAJ
author Jeongjun Lee
Renqian Zhang
Wenrui Zhang
Yu Liu
Peng Li
title Spike-Train Level Direct Feedback Alignment: Sidestepping Backpropagation for On-Chip Training of Spiking Neural Nets
publisher Frontiers Media S.A.
series Frontiers in Neuroscience
issn 1662-453X
publishDate 2020-03-01
description Spiking neural networks (SNNs) are a promising computing model, enabling biologically plausible information processing and event-driven, ultra-low-power neuromorphic hardware. However, training SNNs to match the performance of conventional deep artificial neural networks (ANNs), particularly with error backpropagation (BP) algorithms, remains a significant challenge because of the complex dynamics and non-differentiable spike activity of spiking neurons. In this paper, we present the first study on realizing competitive, BP-like algorithms at the spike-train level to enable on-chip training of SNNs. We propose a novel spike-train level direct feedback alignment (ST-DFA) algorithm, which is far more biologically plausible and hardware-friendly than BP. We explore algorithm and hardware co-optimization and efficient online computation of neural signals for the on-chip implementation of ST-DFA. On the Xilinx ZC706 FPGA board, the proposed hardware-efficient ST-DFA shows excellent performance-versus-overhead tradeoffs for real-world speech and image classification. SNN neural processors with on-chip ST-DFA training achieve competitive classification accuracies of 96.27% on MNIST with a 4× reduction in input resolution and 84.88% on the challenging 16-speaker TI46 speech corpus. Compared with a hardware implementation of the state-of-the-art BP algorithm HM2-BP, the proposed ST-DFA design reduces functional resources by 76.7% and backward training latency by 31.6% while gracefully trading off classification performance.
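To make the core mechanism concrete, below is a minimal sketch of direct feedback alignment in plain NumPy, with spike trains summarized as firing rates. Everything in it (the layer sizes, the clipping nonlinearity standing in for a spiking layer, the surrogate derivative gate, and the update details) is an illustrative assumption, not the paper's exact ST-DFA formulation, which operates on spike trains and is implemented in FPGA hardware.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes: input -> hidden1 -> hidden2 -> output (illustrative only).
n_in, n_h1, n_h2, n_out = 64, 100, 100, 10

# Forward weights (trained) and fixed random feedback matrices (never trained).
W1 = rng.normal(0.0, 0.1, (n_h1, n_in))
W2 = rng.normal(0.0, 0.1, (n_h2, n_h1))
W3 = rng.normal(0.0, 0.1, (n_out, n_h2))
B1 = rng.normal(0.0, 0.1, (n_h1, n_out))  # routes output error straight to hidden layer 1
B2 = rng.normal(0.0, 0.1, (n_h2, n_out))  # routes output error straight to hidden layer 2

def rate(drive):
    """Crude stand-in for a spiking layer: clip the synaptic drive to a
    firing rate in [0, 1] (average spikes per time step over a window)."""
    return np.clip(drive, 0.0, 1.0)

lr = 0.01
x = rng.random(n_in)                       # input firing rates for one example
target = np.eye(n_out)[3]                  # one-hot label

for step in range(100):
    # Forward pass on spike-train level activity (summarized here as rates).
    a1 = rate(W1 @ x)
    a2 = rate(W2 @ a1)
    y = W3 @ a2                            # linear readout of output activity
    e = y - target                         # output-layer error

    # Direct feedback alignment: the output error reaches every hidden layer
    # through its own fixed random matrix, so nothing is backpropagated
    # through the forward weights W2 or W3.
    d2 = (B2 @ e) * ((a2 > 0) & (a2 < 1))  # gate by a surrogate "derivative"
    d1 = (B1 @ e) * ((a1 > 0) & (a1 < 1))

    # Local weight updates from pre/post activity; layers update in parallel.
    W3 -= lr * np.outer(e, a2)
    W2 -= lr * np.outer(d2, a1)
    W1 -= lr * np.outer(d1, x)

print("final squared error:",
      float(np.sum((W3 @ rate(W2 @ rate(W1 @ x)) - target) ** 2)))
```

Even in this simplified form, the hardware appeal is visible: each hidden layer's update needs only the output error, a fixed random matrix, and local pre/post activity, so no transposed forward weights have to be stored or traversed, and the per-layer updates can proceed in parallel.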
topic spiking neural networks
backpropagation
on-chip training
hardware neural processor
FPGA
url https://www.frontiersin.org/article/10.3389/fnins.2020.00143/full