Showing 1 - 10 of 26,598 results for search: '"wang, kun"'
Many important phenomena in scientific fields such as climate, neuroscience, and epidemiology are naturally represented as spatiotemporal gridded data with complex interactions. For example, in climate science, researchers aim to uncover how large-scale…
External link: http://arxiv.org/abs/2411.05331
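
As general background (not from the paper): spatiotemporal gridded data of this kind are usually stored as a dense array indexed by time and grid coordinates. A minimal sketch, with hypothetical shapes and a toy lagged-correlation statistic standing in for the "complex interactions":

    import numpy as np

    # Hypothetical climate grid: 120 monthly steps on a 1-degree lat/lon grid.
    T, H, W = 120, 180, 360
    temperature = np.random.default_rng(0).normal(size=(T, H, W))  # placeholder data

    def lagged_correlation(field, lat, lon, lag=1):
        """Correlation between one grid cell and every cell one time step later."""
        steps = field.shape[0] - lag
        source = field[:-lag, lat, lon]            # series of one cell at time t
        rest = field[lag:].reshape(steps, -1)      # all cells at time t + lag
        source = (source - source.mean()) / source.std()
        rest = (rest - rest.mean(axis=0)) / rest.std(axis=0)
        return (source[:, None] * rest).mean(axis=0).reshape(field.shape[1:])

    corr_map = lagged_correlation(temperature, lat=90, lon=180)
    print(corr_map.shape)  # (180, 360): one correlation value per grid cell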
Authors: Fu, Cong, Wang, Kun, Wu, Jiahua, Chen, Yizhou, Huzhang, Guangda, Ni, Yabo, Zeng, Anxiang, Zhou, Zhiming
Modern e-commerce platforms rely heavily on modeling diverse user feedback to provide personalized services. Consequently, multi-task learning has become an integral part of their ranking systems. However, existing multi-task learning methods encounter…
External link: http://arxiv.org/abs/2411.09705
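
For orientation only: the baseline the multi-task ranking literature builds on is a shared encoder with one head per feedback signal. The sketch below is that generic shared-bottom pattern, not the paper's method; the layer sizes and the click/purchase task names are assumptions.

    import torch
    import torch.nn as nn

    class SharedBottomRanker(nn.Module):
        """Shared encoder with one head per feedback signal (e.g., click, purchase)."""
        def __init__(self, in_dim=64, hidden=128):
            super().__init__()
            self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
            self.click_head = nn.Linear(hidden, 1)
            self.purchase_head = nn.Linear(hidden, 1)

        def forward(self, x):
            h = self.shared(x)  # all tasks share these parameters and gradients
            return torch.sigmoid(self.click_head(h)), torch.sigmoid(self.purchase_head(h))

    model = SharedBottomRanker()
    click_p, purchase_p = model(torch.randn(8, 64))
    # A joint loss backpropagates through the shared encoder, which is exactly
    # where the task conflicts that multi-task methods try to resolve arise.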
Authors: Yu, Miao, Wang, Shilong, Zhang, Guibin, Mao, Junyuan, Yin, Chenlong, Liu, Qijiong, Wen, Qingsong, Wang, Kun, Wang, Yang
Large language models (LLMs) have empowered nodes within multi-agent networks with intelligence, showing growing applications in both academia and industry. However, how to prevent these networks from generating malicious information remains unexplored…
External link: http://arxiv.org/abs/2410.15686
In this paper, we introduce DCDepth, a novel framework for the long-standing monocular depth estimation task. Moving beyond conventional pixel-wise depth estimation in the spatial domain, our approach estimates the frequency coefficients of depth patches…
External link: http://arxiv.org/abs/2410.14980
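
DCDepth's abstract describes predicting frequency coefficients of depth patches; the round trip below only illustrates the underlying transform (the discrete cosine transform) on a toy patch. The 8x8 patch size and the coarse/fine split are my assumptions, not the paper's exact pipeline.

    import numpy as np
    from scipy.fft import dctn, idctn

    patch = np.random.rand(8, 8)           # hypothetical 8x8 depth patch (meters)
    coeffs = dctn(patch, norm="ortho")     # frequency coefficients of the patch

    # Low-frequency coefficients capture coarse depth structure; zeroing the
    # high frequencies and inverting yields a smooth approximation of the patch.
    coeffs[4:, :] = 0.0
    coeffs[:, 4:] = 0.0
    approx = idctn(coeffs, norm="ortho")
    print(np.abs(approx - patch).mean())   # reconstruction error of coarse terms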
Authors: Fang, Rongyao, Duan, Chengqi, Wang, Kun, Li, Hao, Tian, Hao, Zeng, Xingyu, Zhao, Rui, Dai, Jifeng, Li, Hongsheng, Liu, Xihui
Recent advancements in multimodal foundation models have yielded significant progress in vision-language understanding. Initial attempts have also explored the potential of multimodal large language models (MLLMs) for visual content generation. However…
External link: http://arxiv.org/abs/2410.13861
Authors: Zhang, Guibin, Dong, Haonan, Zhang, Yuchen, Li, Zhixun, Chen, Dingshuo, Wang, Kai, Chen, Tianlong, Liang, Yuxuan, Cheng, Dawei, Wang, Kun
Training high-quality deep models necessitates vast amounts of data, resulting in overwhelming computational and memory demands. Recently, data pruning, distillation, and coreset selection have been developed to streamline data volume by retaining, synthesizing, or selecting…
External link: http://arxiv.org/abs/2410.13761
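
As a generic instance of the techniques this entry surveys, score-based data pruning keeps only the examples a proxy model scores as informative. A minimal sketch; the hardest-example heuristic and the keep ratio are illustrative assumptions, not the paper's procedure.

    import numpy as np

    def prune_by_score(losses, keep_ratio=0.5):
        """Keep the highest-loss (hardest) examples; a common pruning heuristic."""
        k = int(len(losses) * keep_ratio)
        return np.argsort(losses)[-k:]       # indices of retained examples

    proxy_losses = np.random.rand(10_000)    # per-example losses from a proxy model
    kept = prune_by_score(proxy_losses)
    print(len(kept))                         # 5000 examples survive pruning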
Authors: Zhou, Zhenhong, Yu, Haiyang, Zhang, Xinghua, Xu, Rongwu, Huang, Fei, Wang, Kun, Liu, Yang, Fang, Junfeng, Li, Yongbin
Large language models (LLMs) achieve state-of-the-art performance on multiple language tasks, yet their safety guardrails can be circumvented, leading to harmful generations. In light of this, recent research on safety mechanisms has emerged, revealing…
External link: http://arxiv.org/abs/2410.13708
In this study, we investigate the Type-I Two-Higgs-Doublet Model (2HDM-I) as a potential explanation for the 95 GeV diphoton excess reported at the LHC and assess the feasibility of discovering a 95 GeV Higgs boson at future hadron colliders. With the…
External link: http://arxiv.org/abs/2410.13636
Authors: Chen, Nelson, Wang, Kun, Johnson III, William R., Kramer-Bottiglio, Rebecca, Bekris, Kostas, Aanjaneya, Mridul
Tensegrity robots are composed of rigid struts and flexible cables. They constitute an emerging class of hybrid rigid-soft robotic systems and are promising systems for a wide array of applications, ranging from locomotion to assembly. They are difficult…
External link: http://arxiv.org/abs/2410.12216
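
To make the strut/cable distinction concrete, here is a bare-bones state representation one might use for a tensegrity structure; the class layout, the unilateral cable-force rule, and the toy geometry are all hypothetical, not taken from the paper.

    from dataclasses import dataclass, field
    import numpy as np

    @dataclass
    class Tensegrity:
        nodes: np.ndarray                               # (N, 3) node positions
        struts: list = field(default_factory=list)      # rigid: fixed-length (i, j) pairs
        cables: list = field(default_factory=list)      # flexible: (i, j, rest_len, stiffness)

        def cable_force(self, k):
            i, j, rest, stiff = self.cables[k]
            d = self.nodes[j] - self.nodes[i]
            length = np.linalg.norm(d)
            # Cables pull only when stretched; they cannot push (slack otherwise).
            return stiff * max(length - rest, 0.0) * d / length

    robot = Tensegrity(
        nodes=np.array([[0, 0, 0], [1, 0, 0], [0.5, 1, 0], [0.5, 0.5, 1]], float),
        struts=[(0, 1)],
        cables=[(0, 2, 0.8, 50.0), (1, 3, 0.9, 50.0)],
    )
    print(robot.cable_force(0))    # tension vector acting on node 0's cable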
Authors: Zhang, Guibin, Yue, Yanwei, Sun, Xiangguo, Wan, Guancheng, Yu, Miao, Fang, Junfeng, Wang, Kun, Cheng, Dawei
Recent advancements in large language model (LLM)-based agents have demonstrated that collective intelligence can significantly surpass the capabilities of individual agents, primarily due to well-crafted inter-agent communication topologies. Despite…
External link: http://arxiv.org/abs/2410.11782
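
The last abstract centers on inter-agent communication topologies; the sketch below shows only the bare mechanism of routing one synchronous round of messages over an explicit directed topology. The agent roles and ring wiring are assumptions for illustration, not the paper's framework.

    # One synchronous communication round over a directed agent topology.
    topology = {                   # hypothetical ring of three agents
        "planner": ["coder"],
        "coder": ["critic"],
        "critic": ["planner"],
    }
    inbox = {name: [] for name in topology}
    outgoing = {name: f"update from {name}" for name in topology}

    for sender, receivers in topology.items():
        for receiver in receivers:
            inbox[receiver].append(outgoing[sender])

    print(inbox)   # each agent now holds the messages its in-neighbors sent

Sparsifying or rewiring the `topology` dict is the knob such work turns: which edges exist determines what each agent sees before its next step.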