AERO: A 1.28 MOP/s/LUT Reconfigurable Inference Processor for Recurrent Neural Networks in a Resource-Limited FPGA
This study presents AERO, a resource-efficient reconfigurable inference processor for recurrent neural networks (RNNs). AERO is programmable to perform inference on RNN models of various types. It is designed based on an instruction-set architecture specializing in processing primitive vect...
Main Authors: Jinwon Kim, Jiho Kim, Tae-Hwan Kim
Format: Article
Language: English
Published: MDPI AG (2021-05-01)
Series: Electronics
Online Access: https://www.mdpi.com/2079-9292/10/11/1249
Similar Items
- FPGA-Based Implementation of Stochastic Configuration Networks for Regression Prediction
  by: Yunqi Gao, et al.
  Published: (2020-07-01)
- An Energy-Efficient Implementation of Group Pruned CNNs on FPGA
  by: Wei Pang, et al.
  Published: (2020-01-01)
- SoC Design Based on a FPGA for a Configurable Neural Network Trained by Means of an EKF
  by: Juan Renteria-Cedano, et al.
  Published: (2019-07-01)
- Automatic Tool for Fast Generation of Custom Convolutional Neural Networks Accelerators for FPGA
  by: Miguel Rivera-Acosta, et al.
  Published: (2019-06-01)
- Design of FPGA Accelerator Architecture for Convolutional Neural Network
  by: LI Bingjian, QIN Guoxuan, ZHU Shaojie, PEI Zhihui
  Published: (2020-03-01)