GeantV

Authors: Amadio, G., Ananya, A., Apostolakis, J., Bandieramonte, M., Banerjee, S., Bhattacharyya, A., Bianchini, C., Bitzes, G., Canal, P., Carminati, F., Chaparro-Amaro, O., Cosmo, G., De Fine Licht, J. C., Drogan, V., Duhem, L., Elvira, D., Fuentes, J., Gheata, A., Gheata, M., Gravey, M., Goulas, I., Hariri, F., Jun, S. Y., Konstantinov, D., Kumawat, H., Lima, J. G., Maldonado-Romo, A., Martínez-Castro, J., Mato, P., Nikitina, T., Novaes, S., Novak, M., Pedro, K., Pokorski, W., Ribon, A., Schmitz, R., Seghal, R., Shadura, O., Tcherniaev, E., Vallecorsa, S., Wenzel, S., Zhang, Y.
Source: Computing and Software for Big Science; December 2021, Vol. 5, Issue 1
Abstract: Full detector simulation was among the largest CPU consumers in all CERN experiment software stacks for the first two runs of the Large Hadron Collider. In the early 2010s, it was projected that simulation demands would scale linearly with increasing luminosity, with only partial compensation from increasing computing resources. The extension of fast simulation approaches to cover more use cases representing a larger fraction of the simulation budget is only part of the solution, because of intrinsic precision limitations. The remainder corresponds to speeding up the simulation software by several factors, which is not achievable by applying simple optimizations to the current code base. In this context, the GeantV R&D project was launched, aiming to redesign the legacy particle transport code in order to benefit from fine-grained parallelism, including vectorization and increased locality of both instruction and data. This paper provides an extensive presentation of the results and achievements of this R&D project, as well as the conclusions and lessons learned from the beta-version prototype.
Database: Supplemental Index
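
The vectorization and data-locality redesign mentioned in the abstract refers to GeantV's approach of transporting groups ("baskets") of tracks together rather than one particle at a time. The following is a minimal, hypothetical C++ sketch, not GeantV's actual API: the names TrackBasket and Propagate are illustrative, but the structure-of-arrays layout it shows is the standard way to make such a transport loop amenable to SIMD auto-vectorization.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical structure-of-arrays "basket" of tracks: each coordinate of
// many tracks is stored contiguously, so one SIMD instruction can advance
// several tracks at once (contrast with an array of per-track structs,
// where the fields of one track are interleaved in memory).
struct TrackBasket {
  std::vector<double> x, y, z;    // positions
  std::vector<double> dx, dy, dz; // direction cosines
  std::vector<double> step;       // proposed step length per track

  std::size_t size() const { return x.size(); }
};

// Advance every track in the basket by its proposed step. The loop body is
// branch-free and reads/writes contiguous arrays, so an auto-vectorizing
// compiler can emit SIMD instructions for it.
void Propagate(TrackBasket& b) {
  const std::size_t n = b.size();
  for (std::size_t i = 0; i < n; ++i) {
    b.x[i] += b.dx[i] * b.step[i];
    b.y[i] += b.dy[i] * b.step[i];
    b.z[i] += b.dz[i] * b.step[i];
  }
}
```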