Performance data of multiple-precision scalar and vector BLAS operations on CPU and GPU

Many optimized linear algebra packages support the single- and double-precision floating-point data types. However, a number of important applications require a higher level of precision, up to hundreds or even thousands of digits. This article presents performance data for four dense basic linear algebra subprograms – ASUM, DOT, SCAL, and AXPY – implemented using existing extended-/multiple-precision software for conventional central processing units and CUDA-compatible graphics processing units. The following open-source packages are considered: MPFR, MPDECIMAL, ARPREC, MPACK, XBLAS, GARPREC, CAMPARY, CUMP, and MPRES-BLAS. The execution time of the CPU and GPU implementations is measured at a fixed problem size and various levels of numeric precision. The data in this article are related to the research article entitled “Design and implementation of multiple-precision BLAS Level 1 functions for graphics processing units” [1].
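As context for the routines named in the abstract, the following is a minimal, hypothetical sketch of two of them (DOT and AXPY) in multiple-precision arithmetic. It uses Python's standard-library decimal module, which is built on mpdecimal (the library behind the MPDECIMAL package considered in the article); this is an illustration of the operations only, not the benchmarked code, and the vector data is made up for the example.

```python
from decimal import Decimal, getcontext

def dot(x, y):
    # DOT: sum of elementwise products, rounded to the current context precision.
    s = Decimal(0)
    for xi, yi in zip(x, y):
        s += xi * yi
    return s

def axpy(alpha, x, y):
    # AXPY: returns alpha*x + y elementwise (the BLAS routine updates y in place).
    return [alpha * xi + yi for xi, yi in zip(x, y)]

getcontext().prec = 100          # work with 100 significant decimal digits
x = [Decimal(1) / Decimal(3)] * 4
y = [Decimal(1) / Decimal(7)] * 4

print(dot(x, y))                 # ~ 4/21, to 100 significant digits
print(axpy(Decimal(2), x, y)[0]) # ~ 17/21 (= 2*(1/3) + 1/7), to 100 digits
```

Raising `getcontext().prec` is how the precision level would be varied in such an experiment; the multiple-precision libraries benchmarked in the article expose analogous precision controls.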


Bibliographic Details
Main Author: Konstantin Isupov (Department of Electronic Computing Machines, Vyatka State University, Russian Federation)
Format: Article
Language: English
Published: Elsevier, 2020-06-01
Series: Data in Brief
ISSN: 2352-3409
Subjects: Multiple-precision arithmetic; Floating-point computations; Graphics processing units; CUDA; BLAS
Online Access:http://www.sciencedirect.com/science/article/pii/S2352340920304005