PermLSTM: A High Energy-Efficiency LSTM Accelerator Architecture

Pruning and quantization are two commonly used approaches to accelerating the LSTM (Long Short-Term Memory) model. However, traditional linear quantization usually suffers from the vanishing-gradient problem, and existing pruning methods all have the problem of producing undesired irregular...
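As background to the abstract, a minimal sketch of the two techniques it names: element-wise magnitude pruning (which yields the irregular sparsity the abstract criticizes) and uniform linear quantization of an LSTM weight matrix. This is an illustrative NumPy example, not the PermLSTM method itself; the matrix shape and the 50% sparsity / 8-bit settings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8)).astype(np.float32)  # toy LSTM gate weight matrix (assumed shape)

# Unstructured magnitude pruning: zero out the 50% smallest-magnitude weights.
# Each element is kept or dropped independently, so the surviving nonzeros
# form an irregular sparsity pattern that is hard to exploit in hardware.
threshold = np.quantile(np.abs(W), 0.5)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0).astype(np.float32)

# Uniform (linear) 8-bit quantization of the surviving weights:
# one scale factor maps the full range onto signed 8-bit integers.
scale = np.abs(W_pruned).max() / 127.0
W_q = np.round(W_pruned / scale).astype(np.int8)
W_deq = W_q.astype(np.float32) * scale  # dequantized approximation

print("sparsity:", float(np.mean(W_pruned == 0.0)))
print("max quantization error:", float(np.abs(W_pruned - W_deq).max()))
```

The rounding error of linear quantization is bounded by half the scale step, which is why coarse scales (few bits) degrade accuracy; the abstract's point is that naive versions of both techniques create problems an accelerator architecture must work around.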


Bibliographic Details
Main Authors: Yong Zheng, Haigang Yang, Yiping Jia, Zhihong Huang
Format: Article
Language: English
Published: MDPI AG 2021-04-01
Series: Electronics
Subjects:
Online Access: https://www.mdpi.com/2079-9292/10/8/882