Showing 1 - 10 of 26 for search: "Tan, Zeqi"
Author: Tang, Fei; Shen, Yongliang; Zhang, Hang; Tan, Zeqi; Zhang, Wenqi; Hou, Guiyang; Song, Kaitao; Lu, Weiming; Zhuang, Yueting
Large language model-based explainable recommendation (LLM-based ER) systems show promise in generating human-like explanations for recommendations. However, they face challenges in modeling user-item collaborative preferences, personalizing explanat…
External link: http://arxiv.org/abs/2410.11841
In the social world, humans possess the capability to infer and reason about others' mental states (such as emotions, beliefs, and intentions), known as the Theory of Mind (ToM). Simultaneously, humans' own mental states evolve in response to social si…
External link: http://arxiv.org/abs/2410.06195
Author: Zhang, Wenqi; Cheng, Zhenglin; He, Yuanyu; Wang, Mengna; Shen, Yongliang; Tan, Zeqi; Hou, Guiyang; He, Mingqian; Ma, Yanna; Lu, Weiming; Zhuang, Yueting
Although most current large multimodal models (LMMs) can already understand photos of natural scenes and portraits, their understanding of abstract images (e.g., charts, maps, or layouts) and their visual reasoning capabilities remain quite rudimentary. T…
External link: http://arxiv.org/abs/2407.07053
Large Language Models (LLMs) have demonstrated remarkable potential in handling complex reasoning tasks by generating step-by-step rationales. Some methods have proven effective in boosting accuracy by introducing extra verifiers to assess these paths…
External link: http://arxiv.org/abs/2407.00390
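The entry above mentions using extra verifiers to assess step-by-step reasoning paths. As a loose illustration of that general verifier-reranking idea only (not the paper's actual method), the following minimal Python sketch samples several candidate rationales, scores each with a separate verifier, and keeps the highest-scoring one; generate_rationales and verifier_score are hypothetical placeholders, not APIs from the paper or any library.

# Minimal sketch of verifier-based reranking of reasoning paths.
# NOTE: generate_rationales() and verifier_score() are hypothetical
# placeholders standing in for an LLM sampler and a trained verifier.
from typing import Callable, List, Tuple


def best_of_n(
    question: str,
    generate_rationales: Callable[[str, int], List[str]],
    verifier_score: Callable[[str, str], float],
    n: int = 8,
) -> Tuple[str, float]:
    """Sample n step-by-step rationales and return the one the verifier rates highest."""
    candidates = generate_rationales(question, n)           # n candidate reasoning paths
    scored = [(r, verifier_score(question, r)) for r in candidates]
    return max(scored, key=lambda pair: pair[1])            # keep the top-scoring path


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    def toy_generate(q: str, n: int) -> List[str]:
        return [f"Step-by-step answer #{i} to: {q}" for i in range(n)]

    def toy_verify(q: str, rationale: str) -> float:
        return float(len(rationale) % 7)   # dummy score; a real verifier is a trained model

    best, score = best_of_n("What is 17 * 23?", toy_generate, toy_verify, n=4)
    print(best, score)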
Improving the reasoning capabilities of large language models (LLMs) has attracted considerable interest. Recent approaches primarily focus on improving the reasoning process to yield a more precise final answer. However, in scenarios involving conte…
External link: http://arxiv.org/abs/2404.13985
Author: Zhang, Wenqi; Tang, Ke; Wu, Hai; Wang, Mengna; Shen, Yongliang; Hou, Guiyang; Tan, Zeqi; Li, Peng; Zhuang, Yueting; Lu, Weiming
Large Language Models (LLMs) exhibit robust problem-solving capabilities for diverse tasks. However, most LLM-based agents are designed as specific task solvers with sophisticated prompt engineering, rather than agents capable of learning and evolving…
External link: http://arxiv.org/abs/2402.17574
Generating mathematical equations from natural language requires an accurate understanding of the relations among math expressions. Existing approaches can be broadly categorized into token-level and expression-level generation. The former treats equ…
External link: http://arxiv.org/abs/2310.09619
Distantly supervised named entity recognition (DS-NER) aims to locate entity mentions and classify their types with only knowledge bases or gazetteers and an unlabeled corpus. However, distant annotations are noisy and degrade the performance of NER models…
External link: http://arxiv.org/abs/2310.08298
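The entry above defines distantly supervised NER as labeling entity mentions using only knowledge bases or gazetteers plus unlabeled text. Purely as an illustration of that general setup (not the paper's denoising method), here is a minimal Python sketch of gazetteer-based distant labeling; the tiny gazetteer and the BIO tagging scheme are assumptions for the example, and such distant labels are exactly the noisy supervision DS-NER work tries to cope with.

# Minimal sketch of gazetteer-based distant labeling for NER.
# The gazetteer below is a toy assumption; real DS-NER uses large knowledge bases.
from typing import Dict, List

GAZETTEER: Dict[str, str] = {
    "new york": "LOC",
    "barack obama": "PER",
    "google": "ORG",
}

def distant_label(tokens: List[str], max_span: int = 4) -> List[str]:
    """Assign BIO tags by longest-match lookup of token spans in the gazetteer."""
    tags = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        matched = False
        # Try the longest span first so "new york" beats a shorter partial match.
        for length in range(min(max_span, len(tokens) - i), 0, -1):
            span = " ".join(tokens[i:i + length]).lower()
            if span in GAZETTEER:
                etype = GAZETTEER[span]
                tags[i] = f"B-{etype}"
                for j in range(i + 1, i + length):
                    tags[j] = f"I-{etype}"
                i += length
                matched = True
                break
        if not matched:
            i += 1
    return tags

if __name__ == "__main__":
    sentence = "Barack Obama visited Google in New York".split()
    print(list(zip(sentence, distant_label(sentence))))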
Author: Shen, Yongliang; Tan, Zeqi; Wu, Shuhui; Zhang, Wenqi; Zhang, Rongsheng; Xi, Yadong; Lu, Weiming; Zhuang, Yueting
Prompt learning is a new paradigm for utilizing pre-trained language models and has achieved great success in many tasks. To adopt prompt learning in the NER task, two kinds of methods have been explored from a pair of symmetric perspectives, populat…
External link: http://arxiv.org/abs/2305.17104
Published in: ACL 2023
Relation extraction (RE) models show promising performance in extracting relations between two entities mentioned in sentences, given that sufficient annotations are available during training. Such annotations would be labor-intensive to obtain in practice. Exist…
External link: http://arxiv.org/abs/2305.16663