Showing 1 - 10 of 530,194 results for search: '"An, Kun"'
Author:
Dias, Jorgivan Morais, Ji, Teng, Dong, Xiang-Kun, Guo, Feng-Kun, Hanhart, Christoph, Meißner, Ulf-G., Zhang, Yu, Zhang, Zhen-Hua
We analyze the latest LHCb data on the $\pi^+\pi^-$ spectrum in the isospin-violating $X(3872)~\to~J/\psi \pi^+\pi^-$ decay, employing a model-independent approach based on dispersion theory to deal with the $\pi\pi$ final state interactions. […]
External link:
http://arxiv.org/abs/2409.13245
This work proposes FireRedTTS, a foundation text-to-speech framework, to meet the growing demands for personalized and diverse generative speech applications. The framework comprises three parts: data processing, foundation system, and downstream applications. […]
External link:
http://arxiv.org/abs/2409.03283
This paper aims to generate physically-layered 3D humans from text prompts. Existing methods either generate 3D clothed humans as a whole or support only tight and simple clothing generation, which limits their applications to virtual try-on and […]
External link:
http://arxiv.org/abs/2408.11357
In causal inference, encouragement designs (EDs) are widely used to analyze causal effects when randomized controlled trials (RCTs) are impractical or compliance with treatment cannot be perfectly enforced. Unlike RCTs, which directly allocate treatment, […]
External link:
http://arxiv.org/abs/2408.05428
Author:
Fang, Kun, Liu, Zi-Wen
Quantum resource distillation is a fundamental task in quantum information science. Minimizing the distillation overhead, i.e., the amount of noisy source states required to produce some desired output state within some target error, is crucial for […]
External link:
http://arxiv.org/abs/2410.14547
Author:
Fang, Rongyao, Duan, Chengqi, Wang, Kun, Li, Hao, Tian, Hao, Zeng, Xingyu, Zhao, Rui, Dai, Jifeng, Li, Hongsheng, Liu, Xihui
Recent advancements in multimodal foundation models have yielded significant progress in vision-language understanding. Initial attempts have also explored the potential of multimodal large language models (MLLMs) for visual content generation. However, […]
External link:
http://arxiv.org/abs/2410.13861
Author:
Zhang, Guibin, Dong, Haonan, Zhang, Yuchen, Li, Zhixun, Chen, Dingshuo, Wang, Kai, Chen, Tianlong, Liang, Yuxuan, Cheng, Dawei, Wang, Kun
Training high-quality deep models necessitates vast amounts of data, resulting in overwhelming computational and memory demands. Recently, data pruning, distillation, and coreset selection have been developed to streamline data volume by retaining, […]
External link:
http://arxiv.org/abs/2410.13761
Author:
Zhou, Zhenhong, Yu, Haiyang, Zhang, Xinghua, Xu, Rongwu, Huang, Fei, Wang, Kun, Liu, Yang, Fang, Junfeng, Li, Yongbin
Large language models (LLMs) achieve state-of-the-art performance on multiple language tasks, yet their safety guardrails can be circumvented, leading to harmful generations. In light of this, recent research on safety mechanisms has emerged, revealing […]
External link:
http://arxiv.org/abs/2410.13708
Author:
Du, Yifan, Huo, Yuqi, Zhou, Kun, Zhao, Zijia, Lu, Haoyu, Huang, Han, Zhao, Wayne Xin, Wang, Bingning, Chen, Weipeng, Wen, Ji-Rong
Video Multimodal Large Language Models (MLLMs) have shown a remarkable capability for understanding video semantics on various downstream tasks. Despite these advancements, there is still a lack of systematic research on visual context representation, […]
External link:
http://arxiv.org/abs/2410.13694
In this study, we investigate the Type-I Two-Higgs-Doublet Model (2HDM-I) as a potential explanation for the 95 GeV diphoton excess reported at the LHC and assess the feasibility of discovering a 95 GeV Higgs boson at future hadron colliders. […]
External link:
http://arxiv.org/abs/2410.13636