Performance of the new DAQ system of the CMS experiment for run-2
Author: Marc Dobson, Cristian Contescu, Dainius Simelevicius, Emilio Meschi, Jonathan Richard Fulcher, Samim Erhan, Hannes Sakulin, Jean-Marc Andre, Marco Pieri, Christian Deldicque, Remigius K. Mommsen, Petr Zejdl, Christoph Paus, Dominique Gigi, James G Branson, André Holzner, Luciano Orsini, Georgiana-Lavinia Darlea, Ulf Behrens, Anastasios Andronidis, Srecko Morovic, Olivier Chaze, Lucia Masetti, Benjamin Gordon Craigs, Jeroen Hegeman, Raul Jimenez-Estupianan, Zeynep Demiragli, Philipp Brummer, Attila Racz, Guillelmo Gomez-Ceballos, Thomas Reis, Frank Glege, Frans Meijers, Vivian O'Dell, Sergio Cittolin, Christoph Schwick
Year of publication: 2016
Subject: Ethernet, Large Hadron Collider, InfiniBand, Data acquisition, Upgrade, Operating system, Network File System, Global file system, Throughput, Engineering
Source: 2016 IEEE-NPSS Real Time Conference (RT).
Description: The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of more than 100 GB/s to the High-Level Trigger (HLT) farm. The HLT farm selects and classifies interesting events for storage and offline analysis at an output rate of around 1 kHz. The DAQ system has been redesigned during the accelerator shutdown in 2013–2014. The motivation for this upgrade was twofold. Firstly, the compute nodes, networking and storage infrastructure were reaching the end of their lifetimes. Secondly, in order to maintain physics performance with higher LHC luminosities and increasing event pileup, a number of sub-detectors are being upgraded, increasing the number of readout channels as well as the required throughput, and replacing the off-detector readout electronics with a MicroTCA-based DAQ interface. The new DAQ architecture takes advantage of the latest developments in the computing industry. For data concentration, 10/40 Gbit/s Ethernet technologies are used, and a 56 Gbit/s InfiniBand FDR Clos network (total throughput of approximately 4 Tbit/s) has been chosen for the event builder. The upgraded DAQ-HLT interface is entirely file-based, essentially decoupling the DAQ and HLT systems. The fully built events are transported to the HLT over 10/40 Gbit/s Ethernet via a network file system. The collection of events accepted by the HLT and the corresponding metadata are buffered on a global file system before being transferred off-site. The monitoring of the HLT farm and of the data-taking performance is based on the Elasticsearch analytics tool. This paper presents the requirements, implementation, and performance of the system. Experience from the first year of operation with LHC proton-proton runs, as well as with the heavy-ion lead-lead runs in 2015, is reported.
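The file-based DAQ-HLT interface described in the abstract decouples the two systems: the DAQ writes fully built events and accompanying metadata as files on a network file system, and HLT processes discover and claim them independently. The Python sketch below illustrates that general hand-off pattern only; the directory layout, file names, and JSON metadata format shown here are hypothetical assumptions for illustration and are not taken from the CMS implementation.

```python
import json
import os
import time
from pathlib import Path

# Hypothetical staging area; the real CMS filter farm uses its own conventions.
RAMDISK = Path("/ramdisk/run000001")

def write_lumisection(ls_index: int, raw_payload: bytes, n_events: int) -> None:
    """DAQ side: write the raw event data, then publish a small JSON metadata
    file. The metadata file is written to a temporary name and renamed into
    place, so a reader never observes a half-written pair."""
    raw_file = RAMDISK / f"run000001_ls{ls_index:04d}.raw"
    meta_file = RAMDISK / f"run000001_ls{ls_index:04d}.jsn"
    raw_file.write_bytes(raw_payload)
    tmp = meta_file.with_suffix(".tmp")
    tmp.write_text(json.dumps({"events": n_events, "file": raw_file.name}))
    os.rename(tmp, meta_file)  # atomic publish of the metadata

def hlt_poll_loop() -> None:
    """HLT side: poll for new metadata files, claim one by renaming it, and
    process the raw file it points to. The rename makes the claim exclusive
    even when many HLT processes poll the same directory."""
    while True:
        for meta_file in sorted(RAMDISK.glob("*.jsn")):
            claimed = meta_file.with_suffix(".claimed")
            try:
                os.rename(meta_file, claimed)  # only one process wins
            except OSError:
                continue  # another process claimed this lumisection
            meta = json.loads(claimed.read_text())
            raw = (RAMDISK / meta["file"]).read_bytes()
            print(f"processing {meta['events']} events "
                  f"from {meta['file']} ({len(raw)} bytes)")
        time.sleep(1)
```

Because the hand-off happens entirely through files, the DAQ and HLT sides in this sketch share no code or network protocol beyond the file system itself, which is the decoupling property the abstract attributes to the file-based interface.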
Database: OpenAIRE
External link: