Showing 1 - 10
of 10
for search: '"Sun, Jiashuo"'
Large Vision-Language Models (LVLMs) have become pivotal at the intersection of computer vision and natural language processing. However, the full potential of LVLMs' Retrieval-Augmented Generation (RAG) capabilities remains underutilized. Existing wo…
External link:
http://arxiv.org/abs/2409.14083
Recently, Large Vision-Language Models (LVLMs) have demonstrated impressive capabilities in multi-modal context comprehension. However, they still suffer from hallucination problems, i.e., generating outputs inconsistent with the image content.
External link:
http://arxiv.org/abs/2408.17150
Author:
Su, Zhaochen, Zhang, Jun, Qu, Xiaoye, Zhu, Tong, Li, Yanshu, Sun, Jiashuo, Li, Juntao, Zhang, Min, Cheng, Yu
Large language models (LLMs) have achieved impressive advancements across numerous disciplines, yet the critical issue of knowledge conflicts, a major source of hallucinations, has rarely been studied. Only a few studies have explored the conflicts betwe…
External link:
http://arxiv.org/abs/2408.12076
Author:
Luo, Yi, Lin, Zhenghao, Zhang, Yuhao, Sun, Jiashuo, Lin, Chen, Xu, Chengjin, Su, Xiangdong, Shen, Yelong, Guo, Jian, Gong, Yeyun
Large Language Models (LLMs) exhibit impressive capabilities but also present risks such as biased content generation and privacy issues. One of the current alignment techniques includes principle-driven integration, but it faces challenges arising f…
External link:
http://arxiv.org/abs/2403.11838
Author:
Sun, Jiashuo, Xu, Chengjin, Tang, Lumingyuan, Wang, Saizhuo, Lin, Chen, Gong, Yeyun, Ni, Lionel M., Shum, Heung-Yeung, Guo, Jian
Although large language models (LLMs) have achieved significant success in various tasks, they often struggle with hallucination problems, especially in scenarios requiring deep and responsible reasoning. These issues could be partially addressed by…
External link:
http://arxiv.org/abs/2307.07697
Large language models (LLMs) can achieve highly effective performance on various reasoning tasks by incorporating step-by-step chain-of-thought (CoT) prompting as demonstrations. However, the reasoning chains of demonstrations generated by LLMs are p…
External link:
http://arxiv.org/abs/2304.11657
Long-form numerical reasoning in financial analysis aims to generate a reasoning program to calculate the correct answer for a given question. Previous work followed a retriever-generator framework, where the retriever selects key facts from a long-f…
External link:
http://arxiv.org/abs/2212.07249
Author:
Sun, Jiashuo, Xiang, Linying
Published in:
In Neurocomputing, 21 January 2024, 566
Author:
Sun, Jiashuo, Xu, Chengjin, Tang, Lumingyuan, Wang, Saizhuo, Lin, Chen, Gong, Yeyun, Shum, Heung-Yeung, Guo, Jian
Large language models (LLMs) have made significant strides in various tasks, yet they often struggle with complex reasoning and exhibit poor performance in scenarios where knowledge traceability, timeliness, and accuracy are crucial. To address these…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::73dfe1b56a148686d23b76ecdf41dced
http://arxiv.org/abs/2307.07697
Academic article