Showing 1 - 10 of 874 results for the search: '"ZHANG Yu-jie"'
We explore toponium, the smallest known quantum bound state of a top quark and its antiparticle, bound by the strong force. With a Bohr radius of $8 \times 10^{-18}$~m and a lifetime of $2.5 \times 10^{-25}$~s, toponium uniquely probes microphysics.
External link:
http://arxiv.org/abs/2412.11254
Inspired by the newly reported $B\to D^*(\to D\pi)\ell\bar{\nu}_\ell$ differential decay rates from the Belle and Belle II Collaborations, we revisit the $V_{cb}$ puzzle in semi-leptonic $B\to D^*$ decays, considering the latest lattice QCD simulations …
External link:
http://arxiv.org/abs/2412.05989
We study a new class of MDPs that employs multinomial logit (MNL) function approximation to ensure valid probability distributions over the state space. Despite its benefits, introducing the non-linear function raises significant challenges …
External link:
http://arxiv.org/abs/2405.17061
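The MNL approximation mentioned in the abstract above can be illustrated with a minimal sketch: a softmax over linear scores always yields a valid probability distribution over candidate next states. The feature matrix and parameter vector below are hypothetical, purely for illustration; this is not the paper's actual model.

```python
import numpy as np

def mnl_probs(features, theta):
    """Multinomial logit: softmax over linear scores. The output is
    non-negative and sums to 1, so it is a valid distribution over
    the candidate next states regardless of theta."""
    scores = features @ theta      # one score per candidate state
    scores -= scores.max()         # shift for numerical stability
    exp_s = np.exp(scores)
    return exp_s / exp_s.sum()

# Hypothetical example: 3 candidate next states, 2-dim features.
phi = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
theta = np.array([0.5, -0.2])
p = mnl_probs(phi, theta)
```

The stability shift by the maximum score leaves the softmax output unchanged while avoiding overflow for large scores.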
Complementary-label learning is a weakly supervised learning problem in which each training example is associated with one or multiple complementary labels indicating the classes to which it does not belong. Existing consistent approaches have relied …
External link:
http://arxiv.org/abs/2311.15502
Published in:
Science Bulletin 2024; 69(10): 1386-1391
The bound state of a $\tau^+\tau^-$ pair by the electromagnetic force is the heaviest and smallest QED atom. Since the discovery of the two lightest QED atoms more than 60 years ago, no evidence for the third one has been found. We demonstrate that …
External link:
http://arxiv.org/abs/2305.00171
The Stochastically Extended Adversarial (SEA) model was introduced by Sachs et al. [2022] as an interpolation between stochastic and adversarial online convex optimization. Under the smoothness condition, they demonstrate that the expected regret of …
External link:
http://arxiv.org/abs/2302.04552
Dealing with distribution shifts is one of the central challenges for modern machine learning. One fundamental situation is covariate shift, where the input distributions of data change from the training to the testing stage while the input-conditional …
External link:
http://arxiv.org/abs/2302.02552
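The covariate-shift setting described above is commonly handled by importance weighting: reweighting each training loss by the density ratio $w(x) = p_\text{test}(x)/p_\text{train}(x)$, so that the reweighted training risk equals the test risk. The discrete toy densities and losses below are invented for illustration and are not taken from the paper.

```python
import numpy as np

def importance_weights(p_test, p_train):
    """Density-ratio weights w(x) = p_test(x) / p_train(x).
    Reweighting training losses by w makes the expectation under
    p_train equal the expectation under p_test."""
    return p_test / p_train

# Toy discrete example with hypothetical densities over 3 inputs.
p_train = np.array([0.5, 0.3, 0.2])
p_test  = np.array([0.2, 0.3, 0.5])
loss    = np.array([1.0, 0.5, 0.2])   # hypothetical per-input losses

w = importance_weights(p_test, p_train)
weighted_train_risk = np.sum(p_train * w * loss)   # equals test risk
test_risk = np.sum(p_test * loss)
```

Since `p_train * w` reduces exactly to `p_test`, the identity holds by construction; in practice the density ratio must itself be estimated, which is where the difficulty lies.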
The standard supervised learning paradigm works effectively when the training data share the same distribution as the upcoming testing samples. However, this stationarity assumption is often violated in real-world applications, especially when testing data …
External link:
http://arxiv.org/abs/2207.02121
Published in:
Colloids and Surfaces A: Physicochemical and Engineering Aspects, Vol. 702, Part 1, 5 December 2024
Published in:
Journal of Catalysis, Vol. 440, December 2024