Opportunistic usage of the CMS online cluster using a cloud overlay

Authors: Attila Racz, Samim Erhan, Christoph Paus, Nicolas Doualot, André Holzner, Sergio Cittolin, Olivier Chaze, Dominique Gigi, Remigius K. Mommsen, Philipp Brummer, Jonathan Richard Fulcher, Christian Deldicque, Petr Zejdl, Hannes Sakulin, James G Branson, Jean-Marc Andre, Marco Pieri, Anastasios Andronidis, Lucia Masetti, Zeynep Demiragli, Cristian Contescu, Frans Meijers, Luciano Orsini, Emilio Meschi, Christoph Schwick, Frank Glege, Ulf Behrens, Dainius Simelevicius, Benjamin Gordon Craigs, Vivian O'Dell, Srecko Morovic, R. Jimenez-Estupiñán, Jeroen Hegeman, Marc Dobson, Guillelmo Gomez-Ceballos, Thomas Reis, Georgiana-Lavinia Darlea
Year of publication: 2017
Source: Proceedings of International Symposium on Grids and Clouds (ISGC) 2016 — PoS(ISGC 2016).
Description: After two years of maintenance and upgrades, the Large Hadron Collider (LHC), the largest and most powerful particle accelerator in the world, has started its second three-year run. Around 1500 computers make up the CMS (Compact Muon Solenoid) Online cluster. This cluster is used for data acquisition by the CMS experiment at CERN, selecting and sending to storage around 20 TBytes of data per day, which are then analysed by the Worldwide LHC Computing Grid (WLCG) infrastructure linking hundreds of data centres worldwide. Some 3000 CMS physicists can access and process these data, and are always seeking more computing power. The backbone of the CMS Online cluster comprises 16000 cores, which provide as much computing power as all CMS WLCG Tier1 sites combined (a 352K HEP-SPEC-06 score in the CMS cluster versus 300K across the CMS Tier1 sites). This computing power can significantly speed up the processing of data, so an effort has been made to allocate the resources of the CMS Online cluster to the grid when they are not fully used for data acquisition. This is the case during maintenance periods when the LHC is non-operational, which amounted to 117 days in 2015. During 2016, the aim is to increase the availability of the CMS Online cluster for data processing by making it accessible also in the intervals between two periods of physics collisions, while the LHC and its beams are being prepared. Such intervals occur for a few hours almost every day, which would vastly increase the computing power available for data processing. Work has already been undertaken to provide this functionality: an OpenStack cloud layer has been deployed as a minimal overlay that leaves the primary role of the cluster untouched, and that abstracts the heterogeneous hardware and networks of which the cluster is composed.
Operating the cloud (starting and stopping the virtual machines) is another challenge that has been overcome, as the cluster has only a few spare hours during the aforementioned beam preparation. By improving virtual-machine image deployment and integrating the OpenStack services with the core services of the Data Acquisition system on the CMS Online cluster, it is now possible to start a thousand virtual machines within 10 minutes and to shut them down within seconds. This document explains the architectural choices made to reach a fully redundant and scalable cloud, with minimal impact on the running cluster configuration and maximal segregation between the services. It also presents how to cold-start 1000 virtual machines 25 times faster than before, using tools commonly found in data centres.
Database: OpenAIRE