Showing 1 - 10 of 410 results for the search: '"WANG Jiaan"'
Recently, O1-like models have emerged as representative examples, illustrating the effectiveness of long chain-of-thought (CoT) in reasoning tasks such as math and coding. In this paper, we introduce DRT-o1, an attempt to bring the success of … A minimal long-CoT prompting sketch follows the link below.
External link:
http://arxiv.org/abs/2412.17498
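This is a toy sketch of prompting an LLM to translate with an explicit long chain-of-thought, not the DRT-o1 method itself; the prompt wording and the call_llm placeholder are illustrative assumptions.

def build_long_cot_translation_prompt(source: str, src_lang: str = "English",
                                      tgt_lang: str = "Chinese") -> str:
    """Ask the model to reason step by step before committing to a translation."""
    return (
        f"Translate the following {src_lang} sentence into {tgt_lang}.\n"
        "First, analyse idioms, metaphors, and ambiguous phrases step by step.\n"
        "Then draft a translation, critique it, and refine it.\n"
        "Finally, output only the refined translation after the tag <final>.\n\n"
        f"Source: {source}"
    )

def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion backend; replace with a real client."""
    raise NotImplementedError("plug in your own LLM client here")

if __name__ == "__main__":
    # Prints the constructed prompt; feed it to call_llm once a client is wired in.
    print(build_long_cot_translation_prompt("He kicked the bucket last winter."))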
Retrieval-augmented generation (RAG) introduces additional information to enhance large language models (LLMs). In machine translation (MT), previous work typically retrieves in-context examples from paired MT corpora, or domain-specific knowledge from … A rough retrieval sketch follows the link below.
External link:
http://arxiv.org/abs/2412.04342
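A rough sketch of the retrieval step described above, not the paper's exact method: pick the most similar source sentences from a paired MT corpus by token overlap and prepend them as in-context examples. The toy corpus and the scoring function are illustrative assumptions.

from typing import List, Tuple

CORPUS: List[Tuple[str, str]] = [
    ("Good morning, everyone.", "Guten Morgen, alle zusammen."),
    ("The weather is nice today.", "Das Wetter ist heute schön."),
    ("Please close the door.", "Bitte schließ die Tür."),
]

def overlap(a: str, b: str) -> float:
    """Jaccard overlap of lowercased token sets, a crude similarity proxy."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

def build_rag_mt_prompt(query: str, k: int = 2) -> str:
    """Retrieve the k most similar corpus pairs and format them as few-shot examples."""
    examples = sorted(CORPUS, key=lambda p: overlap(query, p[0]), reverse=True)[:k]
    shots = "\n".join(f"English: {s}\nGerman: {t}" for s, t in examples)
    return f"{shots}\nEnglish: {query}\nGerman:"

if __name__ == "__main__":
    print(build_rag_mt_prompt("The weather is very nice."))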
Emojis have gained immense popularity on social platforms, serving as a common means to supplement or replace text. However, existing data mining approaches generally either completely ignore or simply treat emojis as ordinary Unicode characters, which … A small tokenisation sketch follows the link below.
External link:
http://arxiv.org/abs/2409.14552
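A small sketch of the point made above: treating emojis as first-class tokens rather than opaque Unicode characters. The code-point ranges cover only the most common emoji blocks and are an illustrative simplification, not the paper's pipeline.

import re

EMOJI_PATTERN = re.compile(
    "["
    "\U0001F300-\U0001F5FF"   # symbols & pictographs
    "\U0001F600-\U0001F64F"   # emoticons
    "\U0001F680-\U0001F6FF"   # transport & map symbols
    "\U00002600-\U000027BF"   # misc symbols & dingbats
    "]",
    flags=re.UNICODE,
)

def split_text_and_emojis(post: str):
    """Return the plain text and the list of emojis so both can be modelled separately."""
    emojis = EMOJI_PATTERN.findall(post)
    text = EMOJI_PATTERN.sub(" ", post).strip()
    return text, emojis

if __name__ == "__main__":
    print(split_text_and_emojis("Great game tonight 🔥🔥 see you there 😀"))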
Authors:
Zhao, Haiquan; Li, Lingyu; Chen, Shisong; Kong, Shuqi; Wang, Jiaan; Huang, Kexin; Gu, Tianle; Wang, Yixu; Jian, Wang; Liang, Dandan; Li, Zhixu; Teng, Yan; Xiao, Yanghua; Wang, Yingchun
Emotion Support Conversation (ESC) is a crucial application, which aims to reduce human stress, offer emotional guidance, and ultimately enhance human mental and physical well-being. With the advancement of Large Language Models (LLMs), many research …
External link:
http://arxiv.org/abs/2406.14952
Recently, knowledge editing has received increasing attention, since it can update specific pieces of outdated knowledge in pretrained models without re-training. However, as pointed out by recent studies, existing methods tend to merely … A toy memory-based editing sketch follows the link below.
External link:
http://arxiv.org/abs/2406.02882
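A toy illustration of the idea described above, loosely following the memory-based family of editing methods (an external edit store that overrides the base model only for in-scope queries). This is a didactic sketch under assumed names, not any specific published algorithm.

from typing import Callable, Dict

class EditedModel:
    def __init__(self, base_model: Callable[[str], str]):
        self.base_model = base_model
        self.edits: Dict[str, str] = {}          # query -> updated answer

    def edit(self, query: str, new_answer: str) -> None:
        """Register one piece of updated knowledge without touching model weights."""
        self.edits[query.lower()] = new_answer

    def __call__(self, query: str) -> str:
        key = query.lower()
        if key in self.edits:                    # query falls inside the editing scope
            return self.edits[key]
        return self.base_model(query)            # out of scope: behaviour unchanged

def stale_model(q: str) -> str:
    """Stand-in for a pretrained model holding outdated facts."""
    answers = {"who is the uk prime minister?": "Liz Truss"}
    return answers.get(q.lower(), "I don't know.")

if __name__ == "__main__":
    model = EditedModel(stale_model)
    model.edit("Who is the UK prime minister?", "Keir Starmer")
    print(model("Who is the UK prime minister?"))   # edited answer
    print(model("Who wrote Hamlet?"))                # falls back to the base model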
Text-to-Table aims to generate structured tables that convey the key information from unstructured documents. Existing text-to-table datasets are typically English-oriented, limiting research in non-English languages. Meanwhile, the emergence of large … A small table-parsing sketch follows the link below.
External link:
http://arxiv.org/abs/2405.12174
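A minimal sketch of the text-to-table setting described above: ask a model for a pipe-delimited table and parse it into rows. The prompt format and the sample model output are illustrative assumptions, not data from the paper.

def parse_markdown_table(raw: str):
    """Turn '| a | b |' style output into a list of row dictionaries."""
    lines = [l.strip() for l in raw.strip().splitlines() if l.strip().startswith("|")]
    rows = [[c.strip() for c in l.strip("|").split("|")] for l in lines]
    header = rows[0]
    # Drop the '---' separator row; keep only real data rows.
    body = [r for r in rows[1:] if not set("".join(r)) <= set("-: ")]
    return [dict(zip(header, r)) for r in body]

if __name__ == "__main__":
    fake_llm_output = """
| Player | Points | Rebounds |
|--------|--------|----------|
| Lee    | 28     | 11       |
| Novak  | 17     | 6        |
"""
    print(parse_markdown_table(fake_llm_output))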
Knowledge-grounded dialogue (KGD) learns to generate an informative response based on a given dialogue context and external knowledge (e.g., knowledge graphs; KGs). Recently, the emergence of large language models (LLMs) and pre-training techniques … A small grounding sketch follows the link below.
External link:
http://arxiv.org/abs/2401.04361
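A small sketch of the grounding step described above: linearise knowledge graph triples into the prompt that conditions a response generator. The triples, dialogue, and template are illustrative assumptions, not the paper's setup.

from typing import List, Tuple

Triple = Tuple[str, str, str]

def linearise_triples(triples: List[Triple]) -> str:
    """Flatten (head, relation, tail) triples into a plain-text knowledge string."""
    return "; ".join(f"{h} {r} {t}" for h, r, t in triples)

def build_kgd_prompt(dialogue: List[str], triples: List[Triple]) -> str:
    """Condition the generator on both the dialogue history and the KG facts."""
    history = "\n".join(dialogue)
    return (
        f"Knowledge: {linearise_triples(triples)}\n"
        f"Dialogue:\n{history}\n"
        "Assistant:"
    )

if __name__ == "__main__":
    kg = [("Inception", "directed_by", "Christopher Nolan"),
          ("Inception", "released_in", "2010")]
    chat = ["User: Who made Inception?"]
    print(build_kgd_prompt(chat, kg))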
Multimodal knowledge bases (MMKBs) provide cross-modal aligned knowledge crucial for multimodal tasks. However, the images in existing MMKBs are generally collected for entities in encyclopedia knowledge graphs. Therefore, detailed groundings of visual …
External link:
http://arxiv.org/abs/2312.10417
Knowledge editing aims to change language models' performance on several special cases (i.e., the editing scope) by infusing the corresponding expected knowledge into them. With the recent advancements in large language models (LLMs), knowledge editing has …
External link:
http://arxiv.org/abs/2309.08952
Multi-modal knowledge graphs (MMKGs) combine data from different modalities (e.g., text and images) for a comprehensive understanding of entities. Despite the recent progress of large-scale MMKGs, existing MMKGs neglect the multi-aspect nature of entities, limiting …
External link:
http://arxiv.org/abs/2308.04992