Showing 1 - 10 of 4,653 for search: '"Dimitrakopoulos A"'
Author:
Dimitrakopoulos, Nikolaos
We briefly summarise the results for the four top-quark production process in the $4\ell$ decay channel at NLO accuracy in perturbative QCD. We employ the narrow-width approximation for the treatment of the unstable particles, preserving spin correlations.
External link:
http://arxiv.org/abs/2410.12629
We examine the effect of higher-order QCD corrections on the four top-quark production cross section in the $3$-lepton decay channel. Top-quark and $W$ gauge-boson decays are included at next-to-leading order in perturbative QCD. The narrow-width approximation …
External link:
http://arxiv.org/abs/2410.05960
The widespread adoption of machine learning algorithms necessitates hardware acceleration to ensure efficient performance. This acceleration relies on custom matrix engines that operate on full or reduced-precision floating-point arithmetic. However, …
External link:
http://arxiv.org/abs/2408.11997
Structured sparsity is an efficient way to prune the complexity of modern Machine Learning (ML) applications and to simplify the handling of sparse data in hardware. In such cases, the acceleration of structured-sparse ML models is handled by sparse …
External link:
http://arxiv.org/abs/2402.10850
Transformers have drastically improved the performance of natural language processing (NLP) and computer vision applications. The computation of transformers involves matrix multiplications and non-linear activation functions such as softmax and GELU.
External link:
http://arxiv.org/abs/2402.10118
Published in:
JHEP06 (2024) 129
Triggered by the observation of four top-quark production at the LHC by the ATLAS and CMS collaborations, we report on the calculation of the next-to-leading-order QCD corrections to the Standard Model process $pp \to t\bar{t}t\bar{t}$ in the $4\ell$ top-quark decay channel.
External link:
http://arxiv.org/abs/2401.10678
Structured sparsity has been proposed as an efficient way to prune the complexity of modern Machine Learning (ML) applications and to simplify the handling of sparse data in hardware. The acceleration of ML models - for both training and inference - …
External link:
http://arxiv.org/abs/2311.07241
Author:
Marshall, C., Meisel, Z., Montes, F., Wagner, L., Hermansen, K., Garg, R., Chipps, K. A., Tsintari, P., Dimitrakopoulos, N., Berg, G. P. A., Brune, C., Couder, M., Greife, U., Schatz, H., Smith, M. S.
Absolute cross sections measured using electromagnetic devices to separate and detect heavy recoiling ions need to be corrected for charge state fractions. Accurate prediction of charge state distributions using theoretical models is not always a possibility.
External link:
http://arxiv.org/abs/2309.02991
The widespread proliferation of deep learning applications has triggered the need to accelerate them directly in hardware. General Matrix Multiplication (GEMM) kernels are elemental deep-learning constructs, and they inherently map onto Systolic Array …
External link:
http://arxiv.org/abs/2309.02969
Systolic Array (SA) architectures are well suited for accelerating matrix multiplications through the use of a pipelined array of Processing Elements (PEs) communicating with local connections and pre-orchestrated data movements. Even though most of …
External link:
http://arxiv.org/abs/2304.12691