Bayesian Multi-objective Hyperparameter Optimization for Accurate, Fast, and Efficient Neural Network Accelerator Design

In resource-constrained environments, such as low-power edge devices and smart sensors, deploying a fast, compact, and accurate intelligent system with minimal energy is indispensable. Embedding intelligence can be achieved using neural networks on neuromorphic hardware. Designing such networks requires determining several inherent hyperparameters. A key challenge is to find the optimum set of hyperparameters, which may belong to the input/output encoding modules, the neural network itself, the application, or the underlying hardware. In this work, we present a hierarchical pseudo agent-based multi-objective Bayesian hyperparameter optimization framework (spanning both software and hardware) that not only maximizes the performance of the network but also minimizes the energy and area requirements of the corresponding neuromorphic hardware. We validate the performance of our approach (in terms of accuracy and computation speed) on several control and classification applications on digital and mixed-signal (memristor-based) neural accelerators. We show that the optimum set of hyperparameters can drastically improve the performance of one application (e.g., 52–71% for Pole-Balance) while having minimal effect on another (e.g., 50–53% for RoboNav). In addition, we demonstrate the resiliency of the different modules of a neuromorphic system (input/output encoding, the training neural network, and the underlying accelerator) to changes in the hyperparameters.

Bibliographic Details
Main Authors: Maryam Parsa, John P. Mitchell, Catherine D. Schuman, Robert M. Patton, Thomas E. Potok, Kaushik Roy
Format: Article
Language: English
Published: Frontiers Media S.A. 2020-07-01
Series: Frontiers in Neuroscience
Subjects: multi-objective hyperparameter optimization; Bayesian optimization; neuromorphic computing; spiking neural networks; accurate and energy-efficient machine learning
Online Access: https://www.frontiersin.org/article/10.3389/fnins.2020.00667/full
id doaj-120e6ea5d9524e6a9b83226f0f3057af
record_format Article
spelling doaj-120e6ea5d9524e6a9b83226f0f3057af
updated 2020-11-25T03:11:11Z
language eng
publisher Frontiers Media S.A.
series Frontiers in Neuroscience
issn 1662-453X
publishDate 2020-07-01
volume 14
doi 10.3389/fnins.2020.00667
article id 527249
title Bayesian Multi-objective Hyperparameter Optimization for Accurate, Fast, and Efficient Neural Network Accelerator Design
authors and affiliations:
Maryam Parsa, Department of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, United States
Maryam Parsa, Computational Data Analytics, Oak Ridge National Laboratory, Oak Ridge, TN, United States
John P. Mitchell, Computational Data Analytics, Oak Ridge National Laboratory, Oak Ridge, TN, United States
Catherine D. Schuman, Computational Data Analytics, Oak Ridge National Laboratory, Oak Ridge, TN, United States
Robert M. Patton, Computational Data Analytics, Oak Ridge National Laboratory, Oak Ridge, TN, United States
Thomas E. Potok, Computational Data Analytics, Oak Ridge National Laboratory, Oak Ridge, TN, United States
Kaushik Roy, Department of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, United States
description In resource-constrained environments, such as low-power edge devices and smart sensors, deploying a fast, compact, and accurate intelligent system with minimal energy is indispensable. Embedding intelligence can be achieved using neural networks on neuromorphic hardware. Designing such networks requires determining several inherent hyperparameters. A key challenge is to find the optimum set of hyperparameters, which may belong to the input/output encoding modules, the neural network itself, the application, or the underlying hardware. In this work, we present a hierarchical pseudo agent-based multi-objective Bayesian hyperparameter optimization framework (spanning both software and hardware) that not only maximizes the performance of the network but also minimizes the energy and area requirements of the corresponding neuromorphic hardware. We validate the performance of our approach (in terms of accuracy and computation speed) on several control and classification applications on digital and mixed-signal (memristor-based) neural accelerators. We show that the optimum set of hyperparameters can drastically improve the performance of one application (e.g., 52–71% for Pole-Balance) while having minimal effect on another (e.g., 50–53% for RoboNav). In addition, we demonstrate the resiliency of the different modules of a neuromorphic system (input/output encoding, the training neural network, and the underlying accelerator) to changes in the hyperparameters.
url https://www.frontiersin.org/article/10.3389/fnins.2020.00667/full
keywords multi-objective hyperparameter optimization; Bayesian optimization; neuromorphic computing; spiking neural networks; accurate and energy-efficient machine learning
collection DOAJ
language English
format Article
sources DOAJ
author Maryam Parsa
Maryam Parsa
John P. Mitchell
Catherine D. Schuman
Robert M. Patton
Thomas E. Potok
Kaushik Roy
spellingShingle Maryam Parsa
Maryam Parsa
John P. Mitchell
Catherine D. Schuman
Robert M. Patton
Thomas E. Potok
Kaushik Roy
Bayesian Multi-objective Hyperparameter Optimization for Accurate, Fast, and Efficient Neural Network Accelerator Design
Frontiers in Neuroscience
multi-objective hyperparameter optimization
Bayesian optimization
neuromorphic computing
spiking neural networks
accurate and energy-efficient machine learning
author_facet Maryam Parsa
Maryam Parsa
John P. Mitchell
Catherine D. Schuman
Robert M. Patton
Thomas E. Potok
Kaushik Roy
author_sort Maryam Parsa
title Bayesian Multi-objective Hyperparameter Optimization for Accurate, Fast, and Efficient Neural Network Accelerator Design
title_short Bayesian Multi-objective Hyperparameter Optimization for Accurate, Fast, and Efficient Neural Network Accelerator Design
title_full Bayesian Multi-objective Hyperparameter Optimization for Accurate, Fast, and Efficient Neural Network Accelerator Design
title_fullStr Bayesian Multi-objective Hyperparameter Optimization for Accurate, Fast, and Efficient Neural Network Accelerator Design
title_full_unstemmed Bayesian Multi-objective Hyperparameter Optimization for Accurate, Fast, and Efficient Neural Network Accelerator Design
title_sort bayesian multi-objective hyperparameter optimization for accurate, fast, and efficient neural network accelerator design
publisher Frontiers Media S.A.
series Frontiers in Neuroscience
issn 1662-453X
publishDate 2020-07-01
description In resource-constrained environments, such as low-power edge devices and smart sensors, deploying a fast, compact, and accurate intelligent system with minimal energy is indispensable. Embedding intelligence can be achieved using neural networks on neuromorphic hardware. Designing such networks requires determining several inherent hyperparameters. A key challenge is to find the optimum set of hyperparameters, which may belong to the input/output encoding modules, the neural network itself, the application, or the underlying hardware. In this work, we present a hierarchical pseudo agent-based multi-objective Bayesian hyperparameter optimization framework (spanning both software and hardware) that not only maximizes the performance of the network but also minimizes the energy and area requirements of the corresponding neuromorphic hardware. We validate the performance of our approach (in terms of accuracy and computation speed) on several control and classification applications on digital and mixed-signal (memristor-based) neural accelerators. We show that the optimum set of hyperparameters can drastically improve the performance of one application (e.g., 52–71% for Pole-Balance) while having minimal effect on another (e.g., 50–53% for RoboNav). In addition, we demonstrate the resiliency of the different modules of a neuromorphic system (input/output encoding, the training neural network, and the underlying accelerator) to changes in the hyperparameters.
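note The paper's hierarchical pseudo agent-based framework is not reproduced in this record. As a rough, hypothetical illustration of the general technique the abstract names (multi-objective Bayesian hyperparameter optimization trading off accuracy against hardware energy/area cost), the Python sketch below uses a Gaussian-process surrogate with randomly weighted Chebyshev scalarization (ParEGO-style); the objective functions, search space, and all identifiers are illustrative stand-ins, not the authors' code.

    # Hypothetical sketch: multi-objective Bayesian hyperparameter optimization
    # via random Chebyshev scalarization (ParEGO-style). The objectives below are
    # placeholders for "network error" and "hardware energy/area cost"; this is
    # NOT the paper's hierarchical pseudo agent-based framework.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    rng = np.random.default_rng(0)

    def objectives(x):
        # Stand-in objectives over a 2-D hyperparameter space in [0, 1]^2:
        # f1 ~ classification error (minimize), f2 ~ energy/area cost (minimize).
        f1 = (x[0] - 0.7) ** 2 + 0.1 * np.sin(5.0 * x[1])
        f2 = x[0] + 0.5 * x[1]
        return np.array([f1, f2])

    # Initial random design.
    X = rng.random((5, 2))
    Y = np.array([objectives(x) for x in X])

    for _ in range(25):
        # A random weight vector collapses the two objectives into one scalar
        # (augmented Chebyshev), so a single GP surrogate is fit per iteration.
        w = rng.dirichlet(np.ones(2))
        y = np.max(w * Y, axis=1) + 0.05 * np.sum(w * Y, axis=1)

        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(X, y)

        # Lower-confidence-bound acquisition over random candidate points.
        cand = rng.random((256, 2))
        mu, sd = gp.predict(cand, return_std=True)
        x_next = cand[np.argmin(mu - 2.0 * sd)]

        X = np.vstack([X, x_next])
        Y = np.vstack([Y, objectives(x_next)])

    # Keep only non-dominated (Pareto-optimal) hyperparameter settings.
    def dominated(a, b):
        return np.all(b <= a) and np.any(b < a)

    pareto = [i for i in range(len(Y))
              if not any(dominated(Y[i], Y[j]) for j in range(len(Y)) if j != i)]
    for i in pareto:
        print("hyperparameters", X[i], "-> objectives", Y[i])

In the actual framework, the placeholder objectives would be replaced by measured network accuracy and simulated or measured accelerator energy and area, with the search structured hierarchically across encoding, network, and hardware hyperparameters.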
topic multi-objective hyperparameter optimization
Bayesian optimization
neuromorphic computing
spiking neural networks
accurate and energy-efficient machine learning
url https://www.frontiersin.org/article/10.3389/fnins.2020.00667/full
work_keys_str_mv AT maryamparsa bayesianmultiobjectivehyperparameteroptimizationforaccuratefastandefficientneuralnetworkacceleratordesign
AT johnpmitchell bayesianmultiobjectivehyperparameteroptimizationforaccuratefastandefficientneuralnetworkacceleratordesign
AT catherinedschuman bayesianmultiobjectivehyperparameteroptimizationforaccuratefastandefficientneuralnetworkacceleratordesign
AT robertmpatton bayesianmultiobjectivehyperparameteroptimizationforaccuratefastandefficientneuralnetworkacceleratordesign
AT thomasepotok bayesianmultiobjectivehyperparameteroptimizationforaccuratefastandefficientneuralnetworkacceleratordesign
AT kaushikroy bayesianmultiobjectivehyperparameteroptimizationforaccuratefastandefficientneuralnetworkacceleratordesign
_version_ 1724655664459939840