Model of shared ATLAS Tier2 and Tier3 facilities in EGI/gLite Grid flavour

Authors: Gonzalez de la Hoz, S.; Villaplana, M.; Kemp, Y.; Gasthuber, M.; Wolters, H.; Benjamin, D.; Pardo, J.; Pacheco, A.; Sanchez, J.; Espinal, X.; Severini, H.; Bhimji, W.; Levinson, L.; Van Der Ster, D.; Campana, S.
Language: English
Year of publication: 2012
Subject:
Description: The ATLAS computing and data models have been moving away from the strict hierarchical MONARC model towards a mesh model. This evolution of the computing model also requires the network infrastructure to evolve, so that any Tier2 or Tier3 can easily connect to any Tier1 or Tier2. Several changes to the data model are therefore required: a) any site can replicate data from any other site; b) dynamic data caching: analysis sites receive datasets from any other site "on demand", based on usage patterns, possibly complemented by dynamic placement of datasets through centrally managed replication of whole datasets, with unused data being removed; c) remote data access: local jobs can access data stored at remote sites, using local caching at the file or sub-file level. In this contribution, the model of shared ATLAS Tier2 and Tier3 facilities in the EGI/gLite flavour is explained. Tier3s in the US and in Europe are rather different, because in Europe the facilities are typically Tier2s with a Tier3 component (a Tier3 co-located with a Tier2). ATLAS has now been taking data for more than a year. We present the setup of such a Tier2/Tier3 facility: how the data are obtained, how grid and local data access are enabled at the same time, how Tier2 and Tier3 activities affect the cluster differently, and how hundreds of millions of events are processed. Finally, an example of a real physics analysis running at these sites is shown. This is a good occasion to verify whether all the Grid tools needed by the ATLAS Distributed Computing community are in place and, where they are not, to fix them in order to be ready for the foreseen increase in ATLAS activity in the coming years.
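The "on demand" caching behaviour described in point b) can be illustrated with a minimal, hypothetical sketch. The dataset names, sizes, disk quota, and least-recently-used eviction policy below are illustrative assumptions only; the actual ATLAS dynamic data placement is handled centrally by the production and data-management systems and is not implemented as shown here.

from collections import OrderedDict

class DatasetCache:
    """Toy model of on-demand dataset replication to an analysis site.

    Datasets are pulled from a remote site on first access and evicted
    least-recently-used when the local disk quota is exceeded. This is an
    illustrative sketch, not the ATLAS data-management implementation.
    """

    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.used_gb = 0.0
        self._cache = OrderedDict()  # dataset name -> size in GB

    def access(self, dataset, size_gb):
        if dataset in self._cache:
            # Dataset already replicated locally: mark it as recently used.
            self._cache.move_to_end(dataset)
            return "local"
        # "On demand" replication: remove unused (least recently used)
        # datasets until the new one fits within the local quota.
        while self._cache and self.used_gb + size_gb > self.capacity_gb:
            evicted, evicted_size = self._cache.popitem(last=False)
            self.used_gb -= evicted_size
            print(f"removing unused dataset {evicted} ({evicted_size} GB)")
        self._cache[dataset] = size_gb
        self.used_gb += size_gb
        return "replicated from remote site"


# Example usage with made-up dataset names and sizes.
cache = DatasetCache(capacity_gb=500)
for name, size in [("data11_7TeV.AOD.A", 300), ("mc11_7TeV.AOD.B", 150),
                   ("data11_7TeV.AOD.C", 200), ("data11_7TeV.AOD.A", 300)]:
    print(name, "->", cache.access(name, size))

In this toy run the third access evicts the oldest dataset to stay within the 500 GB quota, and the final access re-replicates it, which is the usage-pattern-driven behaviour the description refers to.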
Database: OpenAIRE