Showing 1 - 10 of 26,247
for search: '"A, Dimitrakopoulos"'
Author:
Dimitrakopoulos, Nikolaos
We discuss results for four-top-quark production at the LHC at NLO accuracy in perturbative QCD in the $3\ell$ decay channel. The QCD corrections are applied in both the production and the decay stages of the four top quarks by employing …
External link:
http://arxiv.org/abs/2412.03984
Author:
Tsintari, P., Dimitrakopoulos, N., Garg, R., Hermansen, K., Marshall, C., Montes, F., Perdikakis, G., Schatz, H., Setoodehnia, K., Arora, H., Berg, G. P. A., Bhandari, R., Blackmon, J. C., Brune, C. R., Chipps, K. A., Couder, M., Deibel, C., Hood, A., Gamage, M. Horana, Jain, R., Maher, C., Miskovitch, S., Pereira, J., Ruland, T., Smith, M. S., Smith, M., Sultana, I., Tinson, C., Tsantiri, A., Villari, A., Wagner, L., Zegers, R. G. T.
The synthesis of heavy elements in supernovae is affected by low-energy (n,p) and (p,n) reactions on unstable nuclei, yet experimental data on such reaction rates are scarce. The SECAR (SEparator for CApture Reactions) recoil separator at FRIB (Facility for Rare Isotope Beams) … A sketch of the standard reaction-rate definition follows the link below.
External link:
http://arxiv.org/abs/2411.03338
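As context for the reaction rates mentioned in this abstract, the standard Maxwell-Boltzmann-averaged rate per particle pair is (a textbook definition, not a formula taken from this paper):
$$N_A \langle \sigma v \rangle = N_A \left(\frac{8}{\pi \mu}\right)^{1/2} (k_B T)^{-3/2} \int_0^{\infty} \sigma(E)\, E\, \exp\!\left(-\frac{E}{k_B T}\right) \mathrm{d}E,$$
where $\mu$ is the reduced mass of the reacting pair, $T$ the stellar temperature, and $\sigma(E)$ the energy-dependent cross section that measurements such as those with SECAR aim to constrain.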
Multi-term floating-point addition appears in vector dot-product computations, matrix multiplications, and other forms of floating-point data aggregation. A critical step in multi-term floating-point addition is the alignment of the fractions of the floating-point … An illustrative sketch of this alignment step follows the link below.
External link:
http://arxiv.org/abs/2410.21959
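A minimal Python sketch of the fraction-alignment step described in this abstract, assuming a toy significand width; the function name and bit widths are illustrative, not taken from the paper's hardware design.

def multi_term_fp_add(terms, frac_bits=10):
    """Toy multi-term floating-point addition.

    Each term is (sign, exponent, fraction) with an implicit leading 1,
    i.e. value = sign * 1.fraction * 2**exponent. All fractions are
    aligned (right-shifted) to the largest exponent and summed in one pass.
    """
    max_exp = max(e for _, e, _ in terms)
    acc = 0
    for sign, exp, frac in terms:
        significand = (1 << frac_bits) | frac               # prepend the hidden bit
        acc += sign * (significand >> (max_exp - exp))      # align, then accumulate
    # Normalization and rounding of the result are omitted in this sketch.
    return acc * 2.0 ** (max_exp - frac_bits)

# 1.5*2^0 + 1.25*2^1 - 1.0*2^-2 = 3.75
print(multi_term_fp_add([(+1, 0, 512), (+1, 1, 256), (-1, -2, 0)]))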
We examine the effect of higher-order QCD corrections on the four-top-quark production cross section in the $3\ell$ decay channel. Top-quark and $W$ gauge-boson decays are included at next-to-leading order in perturbative QCD. The narrow-width approximation … A schematic statement of this approximation follows the link below.
External link:
http://arxiv.org/abs/2410.05960
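For reference, the narrow-width approximation mentioned above amounts to the standard replacement of the squared top-quark propagator (a schematic, textbook statement rather than the authors' full implementation, which also retains spin correlations and NLO-corrected decays):
$$\frac{1}{\left(p^2 - m_t^2\right)^2 + m_t^2 \Gamma_t^2} \;\xrightarrow{\;\Gamma_t/m_t \to 0\;}\; \frac{\pi}{m_t \Gamma_t}\, \delta\!\left(p^2 - m_t^2\right),$$
so that, schematically, the full cross section factorizes into on-shell $t\bar{t}t\bar{t}$ production multiplied by the relevant top-quark and $W$-boson branching fractions.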
The widespread adoption of machine learning algorithms necessitates hardware acceleration to ensure efficient performance. This acceleration relies on custom matrix engines that operate on full or reduced-precision floating-point arithmetic. However, … A sketch of the usual reduced-precision pattern follows the link below.
External link:
http://arxiv.org/abs/2408.11997
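A minimal NumPy sketch of the common reduced-precision pattern referenced above (narrow operands, wider accumulation); float16 is used here merely as a stand-in format, and this is a software emulation, not the custom matrix engine the paper designs.

import numpy as np

def mixed_precision_matmul(a, b):
    """Quantize operands to float16, then multiply-accumulate in float32.

    This mimics the usual matrix-engine arrangement: inputs are stored in a
    reduced-precision format, while dot-product accumulation runs in a wider
    format to limit the growth of rounding error.
    """
    a16 = a.astype(np.float16).astype(np.float32)
    b16 = b.astype(np.float16).astype(np.float32)
    return a16 @ b16

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 8)).astype(np.float32)
b = rng.standard_normal((8, 4)).astype(np.float32)
print(np.max(np.abs(mixed_precision_matmul(a, b) - a @ b)))  # small quantization error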
Structured sparsity is an efficient way to prune the complexity of modern Machine Learning (ML) applications and to simplify the handling of sparse data in hardware. In such cases, the acceleration of structured-sparse ML models is handled by sparse … An illustrative sketch of structured (N:M) sparsity follows the link below.
External link:
http://arxiv.org/abs/2402.10850
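An illustrative NumPy sketch of structured (N:M) sparsity, where every block of m weights keeps its n largest-magnitude entries; the 2:4 pattern is one widely used example, and the paper's specific scheme may differ.

import numpy as np

def prune_n_of_m(weights, n=2, m=4):
    """Keep the n largest-magnitude weights in every group of m.

    The sparsity is 'structured' because each length-m block holds exactly
    n nonzeros, so an accelerator only needs fixed-size metadata per block.
    """
    w = weights.reshape(-1, m).copy()
    drop = np.argsort(np.abs(w), axis=1)[:, : m - n]   # smallest entries per block
    np.put_along_axis(w, drop, 0.0, axis=1)
    return w.reshape(weights.shape)

w = np.array([[0.1, -2.0, 0.3, 4.0],
              [5.0, -0.2, 0.7, -8.0]], dtype=np.float32)
print(prune_n_of_m(w))  # each row keeps its 2 largest-magnitude values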
Transformers have drastically improved the performance of natural language processing (NLP) and computer vision applications. The computation of transformers involves matrix multiplications and non-linear activation functions such as softmax and GELU … Reference sketches of these two non-linearities follow the link below.
External link:
http://arxiv.org/abs/2402.10118
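For reference, the two non-linear functions named in this abstract, in a plain NumPy sketch (the tanh form of GELU is the widely used approximation; the paper's hardware realization is not reproduced here).

import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax: subtract the max before exponentiating."""
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / np.sum(e, axis=axis, keepdims=True)

def gelu(x):
    """GELU, tanh approximation: 0.5*x*(1 + tanh(sqrt(2/pi)*(x + 0.044715*x^3)))."""
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

print(softmax(np.array([[2.0, 1.0, 0.1]])))   # rows sum to 1
print(gelu(np.array([-1.0, 0.0, 1.0])))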
Published in:
JHEP06 (2024) 129
Triggered by the observation of four-top-quark production at the LHC by the ATLAS and CMS collaborations, we report on the calculation of the next-to-leading-order QCD corrections to the Standard Model process $pp \to t\bar{t}t\bar{t}$ in the $4\ell$ …
External link:
http://arxiv.org/abs/2401.10678
Published in:
Journal of Enterprising Communities: People and Places in the Global Economy, 2024, Vol. 18, Issue 5, pp. 1023-1044.
External link:
http://www.emeraldinsight.com/doi/10.1108/JEC-08-2023-0144
Structured sparsity has been proposed as an efficient way to prune the complexity of modern Machine Learning (ML) applications and to simplify the handling of sparse data in hardware. The acceleration of ML models, for both training and inference, …
External link:
http://arxiv.org/abs/2311.07241