Managing the ATLAS Grid through Harvester

ATLAS Computing Management has identified the migration of all computing resources to Harvester, PanDA's new workload submission engine, as a critical milestone for LHC Runs 3 and 4. This contribution focuses on the Grid migration to Harvester. We have built a redundant architecture based on CERN IT's common offerings (e.g. OpenStack virtual machines and Database on Demand) to run the necessary Harvester and HTCondor services, capable of sustaining the load of O(1M) workers on the Grid per day. We have reviewed the ATLAS Grid region by region and moved as far as possible away from blind worker submission, where multiple queues (e.g. single-core, multi-core, high-memory) compete for resources on a site. Instead, we have migrated towards more intelligent models that use information and priorities from the central PanDA workload management system and stream the right number of workers of each category to a unified queue, while keeping late binding of workers to jobs. We also describe our enhanced monitoring and analytics framework: worker and job information is synchronized with minimal delay to a CERN IT-provided Elasticsearch repository, where we can interact with dashboards to follow submission progress, discover site issues (e.g. broken Compute Elements) or spot empty workers. The result is a much more efficient usage of Grid resources, with smart, built-in monitoring.
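
The worker-streaming model in the description above can be illustrated with a small sketch: given the number of queued jobs per resource category reported by the workload management system, share a site's free worker slots across the categories in proportion to demand, while keeping late binding (workers pick up jobs only after they start). Everything here (the `QueueStats` record, the category labels, the per-site cap) is hypothetical and not taken from the Harvester codebase, whose actual plugin interfaces differ.

```python
# Minimal sketch of demand-driven worker streaming to a unified queue.
# All names and numbers are illustrative, not from the Harvester codebase.
from dataclasses import dataclass

@dataclass
class QueueStats:
    queued_jobs: int      # jobs of this category waiting in PanDA
    running_workers: int  # workers already submitted for this category

def workers_to_submit(stats: dict[str, QueueStats],
                      site_cap: int = 1000) -> dict[str, int]:
    """Share the site's remaining worker slots across resource
    categories in proportion to their queued-job demand."""
    total_demand = sum(s.queued_jobs for s in stats.values())
    if total_demand == 0:
        return {cat: 0 for cat in stats}
    free_slots = max(0, site_cap - sum(s.running_workers for s in stats.values()))
    return {
        cat: min(s.queued_jobs,
                 free_slots * s.queued_jobs // total_demand)
        for cat, s in stats.items()
    }

# Example: demand skewed towards multi-core payloads.
demand = {
    "SCORE": QueueStats(queued_jobs=200, running_workers=50),
    "MCORE": QueueStats(queued_jobs=800, running_workers=300),
    "HIMEM": QueueStats(queued_jobs=50,  running_workers=10),
}
print(workers_to_submit(demand))  # e.g. {'SCORE': 121, 'MCORE': 487, 'HIMEM': 30}
```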

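The workers themselves end up as HTCondor jobs routed to a site's Compute Element. Below is a hedged sketch using the `htcondor` Python bindings; the CE endpoint, pilot wrapper path, and resource requests are placeholders, and this is not the submission code Harvester actually ships.

```python
# Sketch: submitting a batch of grid-universe pilot workers through
# HTCondor to an HTCondor-CE. Requires the htcondor Python bindings;
# the CE hostname and wrapper path below are hypothetical.
import htcondor

submit = htcondor.Submit({
    "universe": "grid",
    # "<remote schedd> <remote pool>" of an HTCondor-CE (placeholder host)
    "grid_resource": "condor ce.example.org ce.example.org:9619",
    "executable": "/opt/pilot/runpilot_wrapper.sh",   # hypothetical wrapper
    "arguments": "--queue UNIFIED --resource-type MCORE",
    "request_cpus": "8",
    "request_memory": "16000",  # MB
    "output": "worker_$(Cluster).$(Process).out",
    "error": "worker_$(Cluster).$(Process).err",
    "log": "worker_$(Cluster).log",
})

schedd = htcondor.Schedd()                 # local schedd on the submission node
result = schedd.submit(submit, count=50)   # 50 late-binding workers
print("Submitted cluster", result.cluster())
```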

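For the monitoring side, the sketch below shows one way worker documents could be shipped to Elasticsearch with the official `elasticsearch` Python client and queried for "empty" workers (workers that finished without picking up a job). The index name and document fields are invented for illustration and do not reflect the schema behind the ATLAS dashboards.

```python
# Sketch: bulk-shipping worker status documents to Elasticsearch so
# dashboards can track submission progress and spot empty workers.
# Index name and fields are illustrative only.
from datetime import datetime, timezone
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")  # placeholder endpoint

workers = [
    {"worker_id": 1, "site": "EXAMPLE_SITE", "resource_type": "MCORE",
     "status": "running", "n_jobs": 1},
    {"worker_id": 2, "site": "EXAMPLE_SITE", "resource_type": "SCORE",
     "status": "finished", "n_jobs": 0},    # empty worker: got no job
]

actions = (
    {
        "_index": "harvester-workers",      # hypothetical index name
        "_id": w["worker_id"],
        "_source": {**w, "timestamp": datetime.now(timezone.utc).isoformat()},
    }
    for w in workers
)
helpers.bulk(es, actions)

# An "empty worker" query a dashboard might run: finished with no jobs.
hits = es.search(index="harvester-workers", query={
    "bool": {"must": [
        {"term": {"status": "finished"}},
        {"term": {"n_jobs": 0}},
    ]}
})
print(hits["hits"]["total"])
```
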
Bibliographic Details
Main Authors: Barreiro Megino Fernando Harald (University of Texas at Arlington), Alekseev Aleksandr (Tomsk Polytechnic University), Berghaus Frank (University of Victoria), Cameron David (University of Oslo), De Kaushik (University of Texas at Arlington), Filipcic Andrej (Jozef Stefan Institute), Glushkov Ivan (University of Texas at Arlington), Lin FaHui (University of Texas at Arlington), Maeno Tadashi (Brookhaven National Laboratory), Magini Nicolò (Iowa State University)
Format: Article
Language: English
Published: EDP Sciences, 2020-01-01
Series: EPJ Web of Conferences, vol. 245, article 03010
ISSN: 2100-014X
DOI: 10.1051/epjconf/202024503010
Online Access: https://www.epj-conferences.org/articles/epjconf/pdf/2020/21/epjconf_chep2020_03010.pdf