GPU coprocessors as a service for deep learning inference in high energy physics

Abstract: In the next decade, the demands for computing in large scientific experiments are expected to grow tremendously. During the same time period, CPU performance increases will be limited. At the CERN Large Hadron Collider (LHC), these two issues will confront one another as the collider is upgraded for high luminosity running. Alternative processors such as graphics processing units (GPUs) can resolve this confrontation provided that algorithms can be sufficiently accelerated. In many cases, algorithmic speedups are found to be largest through the adoption of deep learning algorithms. We present a comprehensive exploration of the use of GPU-based hardware acceleration for deep learning inference within the data reconstruction workflow of high energy physics. We present several realistic examples and discuss a strategy for the seamless integration of coprocessors so that the LHC can maintain, if not exceed, its current performance throughout its running.


Bibliographic Details
Main Authors: Krupa, Jeffrey (Author), Lin, Kelvin (Author), Acosta Flechas, Maria (Author), Dinsmore, Jack (Author), Duarte, Javier (Author), Harris, Philip (Author), Hauck, Scott (Author), Holzman, Burt (Author), Hsu, Shih-Chieh (Author), Klijnsma, Thomas (Author), Liu, Mia (Author), Pedro, Kevin (Author), Rankin, Dylan (Author), Suaysom, Natchanon (Author), Trahms, Matt (Author), Tran, Nhan (Author)
Format: Article
Language:English
Published: IOP Publishing, 2022-04-26T18:25:15Z.
Subjects:
Online Access: Get fulltext
LEADER 02074 am a22003373u 4500
001 142112
042 |a dc 
100 1 0 |a Krupa, Jeffrey  |e author 
700 1 0 |a Lin, Kelvin  |e author 
700 1 0 |a Acosta Flechas, Maria  |e author 
700 1 0 |a Dinsmore, Jack  |e author 
700 1 0 |a Duarte, Javier  |e author 
700 1 0 |a Harris, Philip  |e author 
700 1 0 |a Hauck, Scott  |e author 
700 1 0 |a Holzman, Burt  |e author 
700 1 0 |a Hsu, Shih-Chieh  |e author 
700 1 0 |a Klijnsma, Thomas  |e author 
700 1 0 |a Liu, Mia  |e author 
700 1 0 |a Pedro, Kevin  |e author 
700 1 0 |a Rankin, Dylan  |e author 
700 1 0 |a Suaysom, Natchanon  |e author 
700 1 0 |a Trahms, Matt  |e author 
700 1 0 |a Tran, Nhan  |e author 
245 0 0 |a GPU coprocessors as a service for deep learning inference in high energy physics 
260 |b IOP Publishing,   |c 2022-04-26T18:25:15Z. 
856 |z Get fulltext  |u https://hdl.handle.net/1721.1/142112 
520 |a Abstract: In the next decade, the demands for computing in large scientific experiments are expected to grow tremendously. During the same time period, CPU performance increases will be limited. At the CERN Large Hadron Collider (LHC), these two issues will confront one another as the collider is upgraded for high luminosity running. Alternative processors such as graphics processing units (GPUs) can resolve this confrontation provided that algorithms can be sufficiently accelerated. In many cases, algorithmic speedups are found to be largest through the adoption of deep learning algorithms. We present a comprehensive exploration of the use of GPU-based hardware acceleration for deep learning inference within the data reconstruction workflow of high energy physics. We present several realistic examples and discuss a strategy for the seamless integration of coprocessors so that the LHC can maintain, if not exceed, its current performance throughout its running. 
546 |a en 
655 7 |a Article 
773 |t 10.1088/2632-2153/ABEC21 
773 |t Machine Learning: Science and Technology