AxCEM: Designing Approximate Comparator-Enabled Multipliers
Floating-point multipliers have been a key component of nearly all forms of modern computing systems. Most data-intensive applications, such as deep neural networks (DNNs), expend the majority of their resources and energy budget on floating-point multiplication. The error-resilient nature of these applications often suggests employing approximate computing to improve the energy efficiency, performance, and area of floating-point multipliers. Prior work has shown that employing hardware-oriented approximation for computing the mantissa product may result in significant system energy reduction at the cost of an acceptable computational error. This article examines the design of an approximate comparator used for performing mantissa products in floating-point multipliers. First, we illustrate the use of exact comparators for enhancing the power, area, and delay of floating-point multipliers. Then, we explore the design space of approximate comparators for designing efficient approximate comparator-enabled multipliers (AxCEM). Our simulation results indicate that the proposed architecture can achieve a 66% reduction in power dissipation, a 66% reduction in die area, and a 71% decrease in delay. Compared with state-of-the-art approximate floating-point multipliers, the accuracy loss in DNN applications due to the proposed AxCEM is less than 0.06%.
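The general idea of a comparator-enabled approximate multiplier can be illustrated with a small sketch. This is a generic scheme in the spirit of prior approximate floating-point multipliers, not the actual AxCEM circuit (whose design is given in the paper); the function name and the threshold value are assumptions for illustration. A cheap comparator checks whether one operand's mantissa is close to 1.0; if so, the mantissa product is approximated by the other operand's mantissa, skipping the expensive mantissa multiplier entirely.

```python
import math

def approx_fp_mul(a: float, b: float, frac_threshold: float = 0.05) -> float:
    """Illustrative comparator-based approximate multiply (not the AxCEM
    circuit): when a comparator finds one mantissa close to 1.0, the
    mantissa product is replaced by the other mantissa."""
    ma, ea = math.frexp(a)          # a = ma * 2**ea, with ma in [0.5, 1)
    mb, eb = math.frexp(b)
    ma, ea = ma * 2, ea - 1         # renormalize mantissas to [1, 2),
    mb, eb = mb * 2, eb - 1         # as in IEEE-754

    if abs(mb - 1.0) < frac_threshold:    # comparator: mb is near 1.0
        mprod = ma                        # skip the mantissa multiplier
    elif abs(ma - 1.0) < frac_threshold:  # comparator: ma is near 1.0
        mprod = mb
    else:
        mprod = ma * mb                   # fall back to the exact product

    return math.ldexp(mprod, ea + eb)     # reassemble mantissa * 2**exp
```

For example, `approx_fp_mul(3.0, 1.01)` takes the comparator path and returns 3.0 instead of the exact 3.03, a relative error of about 1%, while `approx_fp_mul(1.5, 1.5)` falls back to the exact product 2.25. The hardware payoff is that a narrow comparator replaces a wide mantissa multiplier on inputs where the error stays small.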
| Main Authors: | Samar Ghabraei, Morteza Rezaalipour, Masoud Dehyadegari, Mahdi Nazm Bojnordi |
|---|---|
| Affiliations: | Faculty of Computer Engineering, K. N. Toosi University of Technology, Tehran, Iran; School of Computing, University of Utah, Salt Lake City, UT 84112, USA |
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2020-03-01 |
| Series: | Journal of Low Power Electronics and Applications |
| ISSN: | 2079-9268 |
| DOI: | 10.3390/jlpea10010009 |
| Subjects: | floating-point multiplication; deep neural networks; approximate computing |
| Online Access: | https://www.mdpi.com/2079-9268/10/1/9 |