Showing 1 - 10
of 32
for search: '"Fu, Zhihang"'
Author:
Lin, Zhengkai, Fu, Zhihang, Liu, Kai, Xie, Liang, Lin, Binbin, Wang, Wenxiao, Cai, Deng, Wu, Yue, Ye, Jieping
While large language models (LLMs) showcase unprecedented capabilities, they also exhibit certain inherent limitations when facing seemingly trivial tasks. A prime example is the recently debated "reversal curse", which surfaces when models, having b…
External link:
http://arxiv.org/abs/2410.18808
Author:
Liu, Kai, Fu, Zhihang, Chen, Chao, Jin, Sheng, Chen, Ze, Tao, Mingyuan, Jiang, Rongxin, Ye, Jieping
The key to OOD detection lies in two aspects: generalized feature representation and precise category description. Recently, vision-language models such as CLIP have provided significant advances on both issues, but constructing precise category descriptio…
External link:
http://arxiv.org/abs/2407.16725
Author:
Liu, Kai, Chen, Ze, Fu, Zhihang, Jiang, Rongxin, Zhou, Fan, Chen, Yaowu, Wu, Yue, Ye, Jieping
This paper introduces a pioneering methodology, termed StructTuning, to efficiently transform foundation Large Language Models (LLMs) into domain specialists. It significantly reduces the training corpus requirement to a mere 0.3%, while achieving an…
External link:
http://arxiv.org/abs/2407.16724
Author:
Liu, Kai, Fu, Zhihang, Chen, Chao, Zhang, Wei, Jiang, Rongxin, Zhou, Fan, Chen, Yaowu, Wu, Yue, Ye, Jieping
When reading long-form text, human cognition is complex and structured. While large language models (LLMs) process input contexts from a causal, sequential perspective, this approach can potentially limit their ability to handle intricate and…
External link:
http://arxiv.org/abs/2407.16434
Author:
Liu, Kai, Fu, Zhihang, Jin, Sheng, Chen, Chao, Chen, Ze, Jiang, Rongxin, Zhou, Fan, Chen, Yaowu, Ye, Jieping
Detecting and rejecting unknown out-of-distribution (OOD) samples is critical for deployed neural networks to avoid unreliable predictions. In real-world scenarios, however, the efficacy of existing OOD detection methods is often impeded by the inhere…
External link:
http://arxiv.org/abs/2407.16430
Author:
Liu, Kai, Fu, Zhihang, Jin, Sheng, Chen, Ze, Zhou, Fan, Jiang, Rongxin, Chen, Yaowu, Ye, Jieping
Enlarging input images is a straightforward and effective approach to promoting small object detection. However, simple image enlargement is significantly expensive in both computation and GPU memory. In fact, small objects are usually sparsely distri…
External link:
http://arxiv.org/abs/2407.16424
Author:
Liang, Xize, Chen, Chao, Qiu, Shuang, Wang, Jie, Wu, Yue, Fu, Zhihang, Shi, Zhihao, Wu, Feng, Ye, Jieping
Preference alignment is pivotal for empowering large language models (LLMs) to generate helpful and harmless responses. However, the performance of preference alignment is highly sensitive to the prevalent noise in the preference data. Recent efforts…
External link:
http://arxiv.org/abs/2404.04102
Knowledge hallucination has raised widespread concerns about the security and reliability of deployed LLMs. Previous efforts at detecting hallucinations have employed logit-level uncertainty estimation or language-level self-consistency evalua…
External link:
http://arxiv.org/abs/2402.03744
Published in:
NeurIPS 2023
For a machine learning model deployed in real-world scenarios, the ability to detect out-of-distribution (OOD) samples is indispensable and challenging. Most existing OOD detection methods have focused on exploring advanced training skills or training-…
External link:
http://arxiv.org/abs/2402.10062
Without manually annotated identities, unsupervised multi-object trackers struggle to learn reliable feature embeddings. This makes the similarity-based inter-frame association stage error-prone as well, where an uncertainty problem arises. The…
External link:
http://arxiv.org/abs/2307.15409