Showing 1 - 10 of 520
for the search: '"Wang, Wenya"'
Fact-checking pipelines increasingly adopt the Decompose-Then-Verify paradigm, where texts are broken down into smaller claims for individual verification and subsequently combined for a veracity decision. While decomposition is widely adopted in suc…
External link:
http://arxiv.org/abs/2411.02400
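The Decompose-Then-Verify paradigm described in this abstract lends itself to a simple pipeline structure. Below is a minimal, illustrative sketch; the `decompose` and `verify_claim` helpers and the all-claims aggregation rule are hypothetical stand-ins, not the paper's actual method.

```python
# Minimal Decompose-Then-Verify sketch (illustrative only; the helper
# functions here are hypothetical stand-ins, not the paper's method).

def decompose(text: str) -> list[str]:
    # Naive decomposition: treat each sentence as one atomic claim.
    # Real systems use an LLM or a trained decomposer instead.
    return [s.strip() for s in text.split(".") if s.strip()]

def verify_claim(claim: str, evidence: set[str]) -> bool:
    # Placeholder verifier: a claim is "supported" if it appears in
    # the evidence set. Real verifiers retrieve and score evidence.
    return claim in evidence

def decompose_then_verify(text: str, evidence: set[str]) -> bool:
    claims = decompose(text)
    verdicts = [verify_claim(c, evidence) for c in claims]
    # Aggregate per-claim verdicts into one veracity decision;
    # here: the text is true only if every claim is supported.
    return all(verdicts)

evidence = {"Paris is the capital of France", "The Seine flows through Paris"}
print(decompose_then_verify(
    "Paris is the capital of France. The Seine flows through Paris.",
    evidence,
))  # True
```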
The exploration of language skills in language models (LMs) has always been one of the central goals in mechanistic interpretability. However, existing circuit analyses often fall short of representing the full functional scope of these models, prima…
External link:
http://arxiv.org/abs/2410.01334
In-context learning (ICL) has proven to be a significant capability with the advancement of Large Language Models (LLMs). By instructing LLMs using few-shot demonstrative examples, ICL enables them to perform a wide range of tasks without needing to…
External link:
http://arxiv.org/abs/2408.07505
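As a concrete illustration of the few-shot prompting that ICL relies on, here is a minimal sketch of how demonstration examples are concatenated into a prompt; the task, template, and examples are invented for illustration and are not from the paper.

```python
# Minimal few-shot (in-context learning) prompt construction.
# The demonstrations and template are invented for illustration;
# actual formats vary by model and task.

demonstrations = [
    ("The movie was a delight.", "positive"),
    ("I want my money back.", "negative"),
]

def build_icl_prompt(demos, query: str) -> str:
    lines = []
    for text, label in demos:
        lines.append(f"Review: {text}\nSentiment: {label}")
    lines.append(f"Review: {query}\nSentiment:")
    # The resulting string is fed to the LLM, which completes the
    # final "Sentiment:" slot without any parameter updates.
    return "\n\n".join(lines)

print(build_icl_prompt(demonstrations, "An instant classic."))
```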
Large Language Models (LLMs) have demonstrated impressive in-context learning (ICL) capabilities from few-shot demonstration exemplars. While recent learning-based demonstration selection methods have proven beneficial to ICL by choosing more useful…
External link:
http://arxiv.org/abs/2406.11890
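Demonstration selection of the kind this abstract mentions is often approximated by retrieving the training examples most similar to the test query. The sketch below shows that common similarity baseline; the `embed` function is a crude hypothetical stand-in for a real sentence encoder, and the paper's learned selectors go beyond this.

```python
# Similarity-based demonstration selection: pick the k candidate
# examples closest to the query in an embedding space. A common
# baseline, not the paper's learned method.
import numpy as np

def embed(texts: list[str]) -> np.ndarray:
    # Hypothetical stand-in for a real sentence encoder: a crude
    # character-frequency embedding, just to make the sketch run.
    vecs = np.zeros((len(texts), 256))
    for i, t in enumerate(texts):
        for ch in t.lower():
            vecs[i, ord(ch) % 256] += 1.0
    return vecs / (np.linalg.norm(vecs, axis=1, keepdims=True) + 1e-9)

def select_demonstrations(query: str, pool: list[str], k: int = 2) -> list[str]:
    q = embed([query])[0]
    p = embed(pool)
    scores = p @ q                 # cosine similarity (unit vectors)
    top = np.argsort(-scores)[:k]  # indices of the k best matches
    return [pool[i] for i in top]

pool = ["great acting, loved it", "terrible plot", "the cast was wonderful"]
print(select_demonstrations("the actors were superb", pool))
```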
Author:
Wang, Wenya, Pan, Sinno Jialin
Published in:
Computational Linguistics, Vol. 45, Iss. 4, pp. 705-736 (2020)
In fine-grained opinion mining, extracting aspect terms (a.k.a. opinion targets) and opinion terms (a.k.a. opinion expressions) from user-generated texts is the most fundamental task for generating structured opinion summaries. Existing stu…
External link:
https://doaj.org/article/ca0de7dfc21b4d0ba31d1315dbc7a084
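Aspect and opinion term extraction is commonly cast as sequence labeling over tokens. The tiny rule-based tagger below is purely illustrative of that framing, using the standard BIO scheme; the lexicons and lookup rule are toy stand-ins, whereas the paper's models are learned.

```python
# Aspect/opinion extraction framed as BIO sequence labeling.
# The lexicons and rules are toy stand-ins; real systems learn
# the tagger (e.g., with recursive or neural sequence models).

ASPECTS = {"battery", "screen", "keyboard"}
OPINIONS = {"great", "dim", "mushy"}

def bio_tag(tokens: list[str]) -> list[str]:
    tags = []
    for tok in tokens:
        if tok.lower() in ASPECTS:
            tags.append("B-ASPECT")
        elif tok.lower() in OPINIONS:
            tags.append("B-OPINION")
        else:
            tags.append("O")
    return tags

sentence = "The battery is great but the screen is dim".split()
print(list(zip(sentence, bio_tag(sentence))))
# [('The', 'O'), ('battery', 'B-ASPECT'), ..., ('dim', 'B-OPINION')]
```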
Emergence, broadly conceptualized as the "intelligent" behaviors of LLMs, has recently been studied but has proved challenging to quantify due to the lack of a measurable definition. Most commonly, it has been estimated statistically through model perf…
External link:
http://arxiv.org/abs/2405.12617
In-context learning (ICL) has emerged as a powerful capability alongside the development of scaled-up large language models (LLMs). By instructing LLMs using few-shot demonstrative examples, ICL enables them to perform a wide range of tasks without u…
External link:
http://arxiv.org/abs/2404.07546
Large Language Models (LLMs) are emerging as promising approaches to enhance session-based recommendation (SBR), where both prompt-based and fine-tuning-based methods have been widely investigated to align LLMs with SBR. However, the former methods s…
External link:
http://arxiv.org/abs/2403.16427
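For the prompt-based alignment of LLMs with SBR that this abstract contrasts with fine-tuning, a minimal sketch might serialize the session's interaction history into a ranking prompt, as below; the template, item names, and candidate set are all invented for illustration.

```python
# Prompt-based session-based recommendation (SBR) sketch: serialize
# the current session's clicked items into a prompt and ask an LLM
# to rank candidates. Template and items are invented for illustration.

def build_sbr_prompt(session_items: list[str], candidates: list[str]) -> str:
    history = "\n".join(f"- {item}" for item in session_items)
    cands = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(candidates))
    return (
        "A user viewed these items in the current session:\n"
        f"{history}\n\n"
        "Rank the following candidates by how likely the user is to "
        f"click next (most likely first):\n{cands}"
    )

prompt = build_sbr_prompt(
    ["wireless mouse", "mechanical keyboard"],
    ["USB hub", "monitor stand", "mouse pad"],
)
print(prompt)  # this string would be sent to the LLM
```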
Dense retrievers and retrieval-augmented language models have been widely used in various NLP applications. Despite being designed to deliver reliable and secure outcomes, the vulnerability of retrievers to potential attacks remains unclear, raising…
External link:
http://arxiv.org/abs/2402.13532
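As background for the dense retrievers under discussion, the sketch below shows the core scoring step: embed the query and documents, then rank by inner product. The encoder is a hypothetical toy stand-in for a trained bi-encoder; attacks of the kind the paper studies typically craft passages that score highly under exactly this mechanism.

```python
# Core of a dense retriever: embed query and documents, rank by
# inner product. The encoder below is a toy stand-in for a trained
# bi-encoder; an adversarial passage would aim to maximize this score.
import numpy as np

def toy_encode(texts: list[str]) -> np.ndarray:
    # Hypothetical encoder: hashed bag-of-words, L2-normalized.
    vecs = np.zeros((len(texts), 128))
    for i, t in enumerate(texts):
        for word in t.lower().split():
            vecs[i, hash(word) % 128] += 1.0
    return vecs / (np.linalg.norm(vecs, axis=1, keepdims=True) + 1e-9)

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    q = toy_encode([query])[0]
    d = toy_encode(corpus)
    top = np.argsort(-(d @ q))[:k]  # highest inner-product scores
    return [corpus[i] for i in top]

corpus = ["dense retrieval uses bi-encoders", "cats sleep a lot"]
print(retrieve("how do dense retrievers work", corpus))
```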
Large language models (LLMs) have demonstrated remarkable proficiency in understanding and generating responses to complex queries through large-scale pre-training. However, the efficacy of these models in memorizing and reasoning over large-scale s…
External link:
http://arxiv.org/abs/2402.14273