Online data handling and storage at the CMS experiment
Author: | Samim Erhan, Petr Zejdl, Benjamin Stieger, Marc Dobson, Olivier Chaze, Anastasios Andronidis, Andrea Petrucci, Marco Pieri, A Dupont, Sergio Cittolin, L. Masetti, Emilio Meschi, C. Paus, C. Nunez-Barranco-Fernandez, Christian Deldicque, S. Zaza, Guillelmo Gomez-Ceballos, R. Jimenez-Estupiñán, Jeroen Hegeman, Luciano Orsini, Vivian O'Dell, Dominique Gigi, Jan Veverka, Zeynep Demiragli, André Holzner, Christoph Schwick, Frans Meijers, Ulf Behrens, Srecko Morovic, G. L. Darlea, J. M. Andre, Frank Glege, Attila Racz, Konstanty Sumorok, P. Roberts, J. G. Branson, Remigius K. Mommsen, Hannes Sakulin |
Contributors: | Massachusetts Institute of Technology. Laboratory for Nuclear Science, Darlea, G.-L., Demiragli, Zeynep, Gomez-Ceballos, Guillelmo, Paus, Christoph M. E., Sumorok, Konstanty C, Veverka, Jan |
Year of publication: | 2015 |
Subject: | History; Engineering; Group method of data handling; JSON; Networking hardware; Computer Science Applications; Education; Metadata; Software; Data acquisition; Detectors and Experimental Techniques; Distributed file system; Computer hardware |
Source: | IOP Publishing |
ISSN: | 1742-6596, 1742-6588 |
Description: | During the LHC Long Shutdown 1, the CMS Data Acquisition (DAQ) system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and support new detector back-end electronics. The software and hardware infrastructure that provides input, executes the High Level Trigger (HLT) algorithms, and handles output data transport and storage has also been redesigned to be completely file-based. All the metadata needed for bookkeeping are likewise stored in files, in the form of small documents using the JSON encoding. The Storage and Transfer System (STS) is responsible for aggregating these files produced by the HLT, storing them temporarily, and transferring them to the T0 facility at CERN for subsequent offline processing. The STS merger service aggregates the output files produced by the HLT from ~62 sources at an aggregate rate of ~2 GB/s. An estimated bandwidth of 7 GB/s in concurrent read/write mode is needed. Furthermore, the STS has to be able to store several days of continuous running, so an estimated 250 TB of total usable disk space is required. In this article we present the technological and implementation choices for the three components of the STS: the distributed file system, the merger service, and the transfer system. Funding: United States Department of Energy; National Science Foundation (U.S.) |
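The description notes that all bookkeeping metadata are stored as small JSON documents alongside the data files, and that a merger service aggregates the per-source outputs. The sketch below illustrates that pattern in Python; the field names (`events`, `bytes`, `file`) and the summing logic are illustrative assumptions, not the actual CMS metadata schema.

```python
import json

def merge_metadata(docs):
    """Aggregate per-source bookkeeping documents into one summary record.

    Each input dict stands in for one small JSON document written by an
    HLT output source; the schema here is hypothetical.
    """
    merged = {"events": 0, "bytes": 0, "files": []}
    for doc in docs:
        merged["events"] += doc["events"]   # total events across sources
        merged["bytes"] += doc["bytes"]     # total payload size
        merged["files"].append(doc["file"]) # file names to be merged
    return merged

# Example: three (of ~62) sources reporting their output files
sources = [
    {"events": 1200, "bytes": 3_000_000, "file": "run1_src01.dat"},
    {"events": 1150, "bytes": 2_900_000, "file": "run1_src02.dat"},
    {"events": 1300, "bytes": 3_200_000, "file": "run1_src03.dat"},
]
summary = merge_metadata(sources)
print(json.dumps(summary))
```

Keeping the bookkeeping in self-describing JSON documents means the merger can verify completeness (e.g. compare summed event counts against expectations) without any coupling to the binary data format.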
Database: | OpenAIRE |
External link: |