Scalable monitoring data processing for the LHCb software trigger

Full description

The LHCb High Level Trigger (HLT) is split into two stages. HLT1 is synchronous with collisions delivered by the LHC and writes its output to a local disk buffer, which is asynchronously processed by HLT2. Efficient monitoring of the data being processed by the application is crucial to promptly diagnose detector or software problems. HLT2 consists of approximately 50 000 processes, each of which produces about 4000 histograms. This results in 200 million histograms that need to be aggregated for each of up to a hundred data-taking intervals being processed simultaneously. This paper presents the multi-level hierarchical architecture of the monitoring infrastructure put in place to achieve this. Network bandwidth is minimised by sending histogram increments and exchanging metadata only when necessary, using a custom lightweight protocol based on boost::serialization. The transport layer is implemented with ZeroMQ, which supports IPC and TCP communication, queue handling, asynchronous request/response and multipart messages. The persistent storage to ROOT is parallelised in order to cope with data arriving from a hundred data-taking intervals being processed simultaneously by HLT2. The performance and scalability of the current system are presented. We demonstrate the feasibility of such an approach for the HLT1 use case, where real-time feedback and reliability of the infrastructure are crucial. In addition, a prototype of a high-level transport layer based on the stream-processing platform Apache Kafka is shown, which has several advantages over the lower-level ZeroMQ solution.
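As an illustration of the increment-based monitoring idea described in the abstract, the sketch below shows one worker process publishing per-histogram bin deltas to an aggregation node over a ZeroMQ multipart message. This is not the LHCb implementation: the endpoint name, the HistIncrement-style wire layout (a raw id frame followed by packed doubles) and the single aggregation level are assumptions made for this example, whereas the real system serialises with boost::serialization, exchanges histogram metadata separately, and fans in through several hierarchy levels.

```cpp
// Minimal sketch (not the LHCb code) of increment-based histogram monitoring:
// a worker sends only the change to each histogram since its last publication,
// and an aggregator sums the increments in memory.
#include <zmq.h>

#include <cstdint>
#include <cstdio>
#include <map>
#include <thread>
#include <vector>

static const char* kEndpoint = "inproc://monitoring";  // placeholder endpoint

// Worker side: one multipart message, frame 1 = histogram id,
// frame 2 = packed bin deltas accumulated since the previous publication.
void worker(void* ctx) {
    void* push = zmq_socket(ctx, ZMQ_PUSH);
    zmq_connect(push, kEndpoint);

    uint32_t hist_id = 42;                               // id agreed in a prior metadata exchange
    std::vector<double> deltas = {0.0, 3.0, 1.0, 0.0};   // entries added per bin since last send
    zmq_send(push, &hist_id, sizeof(hist_id), ZMQ_SNDMORE);
    zmq_send(push, deltas.data(), deltas.size() * sizeof(double), 0);

    zmq_close(push);
}

int main() {
    void* ctx = zmq_ctx_new();

    // Aggregator side: bind before the worker connects, then sum increments
    // into an in-memory accumulator keyed by histogram id.
    void* pull = zmq_socket(ctx, ZMQ_PULL);
    zmq_bind(pull, kEndpoint);

    std::thread sender(worker, ctx);

    std::map<uint32_t, std::vector<double>> sums;        // hist_id -> accumulated bins

    uint32_t id = 0;
    zmq_recv(pull, &id, sizeof(id), 0);                  // frame 1: histogram id

    zmq_msg_t frame;                                     // frame 2: bin deltas
    zmq_msg_init(&frame);
    zmq_msg_recv(&frame, pull, 0);
    const double* deltas = static_cast<const double*>(zmq_msg_data(&frame));
    size_t nbins = zmq_msg_size(&frame) / sizeof(double);

    auto& bins = sums[id];
    if (bins.size() < nbins) bins.resize(nbins, 0.0);
    for (size_t b = 0; b < nbins; ++b) bins[b] += deltas[b];
    zmq_msg_close(&frame);

    std::printf("histogram %u: bin 1 accumulated %.1f entries\n", id, bins[1]);

    sender.join();
    zmq_close(pull);
    zmq_ctx_term(ctx);
    return 0;
}
```

Sending increments rather than full histogram contents plus metadata keeps the routine messages small, and the PUSH/PULL pattern maps naturally onto a multi-level fan-in tree; the production system additionally relies on ZeroMQ's asynchronous request/response for the occasional metadata exchange mentioned in the abstract.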

Bibliographic Details
Main Authors: Petrucci, Stefano (University of Edinburgh); Matev, Rosen (CERN); Aaij, Roel (Nikhef)
Format: Article
Language: English
Published: EDP Sciences, 2020
Series: EPJ Web of Conferences, Vol. 245, Article 01039
ISSN: 2100-014X
DOI: 10.1051/epjconf/202024501039
Online Access: https://www.epj-conferences.org/articles/epjconf/pdf/2020/21/epjconf_chep2020_01039.pdf