File-based data flow in the CMS Filter Farm
Authors: | T. Bawej, Guillelmo Gomez-Ceballos, Dominique Gigi, S. Zaza, Attila Racz, Marc Dobson, Jan Veverka, Frans Meijers, Olivier Chaze, G. L. Darlea, Sergio Cittolin, J. G. Branson, Konstanty Sumorok, Petr Zejdl, Christoph Schwick, Ulf Behrens, Christian Deldicque, Hannes Sakulin, L. Masetti, Srecko Morovic, R. Jimenez-Estupiñán, Anastasios Andronidis, Jeroen Hegeman, J. M. Andre, Samim Erhan, Andrea Petrucci, C. Nunez-Barranco-Fernandez, André Holzner, A. Dupont, Vivian O'Dell, Marco Pieri, Remigius K. Mommsen, Emilio Meschi, C. Paus, Benjamin Stieger, P. Roberts, Luciano Orsini, Frank Glege |
---|---|
Contributors: | Massachusetts Institute of Technology. Department of Physics, Massachusetts Institute of Technology. Laboratory for Nuclear Science, Darlea, G.-L., Gomez-Ceballos, Guillelmo, Paus, Christoph M. E., Veverka, Jan |
Year of publication: | 2015 |
Subjects: | Large Hadron Collider, Data acquisition, Data flow diagram, JSON, Metadata, Networking hardware, Robustness (computer science), Operating system, Software, Engineering, Computing and Computers, Computer Science Applications, Education, History |
Source: | IOP Publishing |
ISSN: | 1742-6596, 1742-6588 |
Description: | During the LHC Long Shutdown 1, the CMS Data Acquisition system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and prepare the ground for future upgrades of the detector front-ends. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms, and deal with output data transport and storage has also been redesigned to be completely file-based. This approach provides additional decoupling between the HLT algorithms and the input and output data flow. All the metadata needed for bookkeeping of the data flow and the HLT process lifetimes are also generated in the form of small "documents" using the JSON encoding, by either services in the flow of the HLT execution (for rates etc.) or watchdog processes. These "files" can remain memory-resident or be written to disk if they are to be used in another part of the system (e.g. for aggregation of output data). We discuss how this redesign improves the robustness and flexibility of the CMS DAQ and the performance of the system currently being commissioned for the LHC Run 2. Funding: National Science Foundation (U.S.); United States Department of Energy. |
Database: | OpenAIRE |
External link: |
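The described scheme of small JSON bookkeeping "documents", produced per process and later aggregated, can be illustrated with a minimal sketch. This is not the CMS DAQ code; the field names (`events`, `errors`) and helper functions are hypothetical, chosen only to show the pattern of atomically publishing a document to disk and summing documents in an aggregation step.

```python
import json
import os
import tempfile

def write_json_document(path, doc):
    # Publish a bookkeeping document atomically: write to a temp file
    # in the same directory, then rename it into place, so a consumer
    # never reads a partially written document. (Illustrative sketch;
    # not the actual CMS DAQ implementation.)
    directory = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=directory, suffix=".tmp")
    with os.fdopen(fd, "w") as f:
        json.dump(doc, f)
    os.replace(tmp, path)

def aggregate(docs):
    # Sum counters from several per-process documents into one total,
    # mimicking the aggregation step mentioned in the abstract.
    # Field names here are assumptions for illustration.
    total = {"events": 0, "errors": 0}
    for doc in docs:
        total["events"] += doc["events"]
        total["errors"] += doc["errors"]
    return total
```

For example, two per-process documents `{"events": 100, "errors": 0}` and `{"events": 80, "errors": 1}` would aggregate to `{"events": 180, "errors": 1}`.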