Showing 1 - 10 of 35 for search: '"MIN, BONAN"'
Author:
Liu, Siyi, Ning, Qiang, Halder, Kishaloy, Xiao, Wei, Qi, Zheng, Htut, Phu Mon, Zhang, Yi, John, Neha Anna, Min, Bonan, Benajiba, Yassine, Roth, Dan
Open domain question answering systems frequently rely on information retrieved from large collections of text (such as the Web) to answer questions. However, such collections of text often contain conflicting information, and indiscriminately depend…
External link:
http://arxiv.org/abs/2410.12311
Author:
Kandula, Hemanth, Karakos, Damianos, Qiu, Haoling, Rozonoyer, Benjamin, Soboroff, Ian, Tarlin, Lee, Min, Bonan
Frequently, users of an Information Retrieval (IR) system start with an overarching information need (a.k.a., an analytic task) and proceed to define finer-grained queries covering various important aspects (i.e., sub-topics) of that analytic task…
External link:
http://arxiv.org/abs/2409.04667
Author:
Wu, Zhengxuan, Zhang, Yuhao, Qi, Peng, Xu, Yumo, Han, Rujun, Zhang, Yian, Chen, Jifan, Min, Bonan, Huang, Zhiheng
Modern language models (LMs) need to follow human instructions while being faithful; yet, they often fail to achieve both. Here, we provide concrete evidence of a trade-off between instruction following (i.e., follow open-ended instructions) and faithfulness…
External link:
http://arxiv.org/abs/2407.21417
Author:
Han, Rujun, Zhang, Yuhao, Qi, Peng, Xu, Yumo, Wang, Jenyuan, Liu, Lan, Wang, William Yang, Min, Bonan, Castelli, Vittorio
Question answering based on retrieval augmented generation (RAG-QA) is an important research topic in NLP and has a wide range of real-world applications. However, most existing datasets for this task are either constructed using a single source corpus… (a minimal retrieve-then-generate sketch follows the link below)
External link:
http://arxiv.org/abs/2407.13998
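For context, RAG-QA as referenced in this record follows a retrieve-then-generate pattern: fetch the passages most relevant to a question from a corpus, then condition an answer generator on them. The sketch below is a minimal illustration of the retrieval-and-prompt-assembly half using a TF-IDF retriever; the toy corpus, the question, and the final prompt are illustrative assumptions, not the paper's setup or data.

# Minimal retrieve-then-generate sketch for RAG-QA (illustrative only).
# A TF-IDF retriever stands in for a real dense retriever; generation is
# represented only by the prompt a generative LM would receive.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [  # toy document collection (placeholder)
    "The Amazon is the largest rainforest on Earth.",
    "Mount Everest is the highest mountain above sea level.",
    "The Nile is often cited as the longest river in the world.",
]
question = "Which river is the longest in the world?"

vectorizer = TfidfVectorizer().fit(corpus)
doc_vecs = vectorizer.transform(corpus)
q_vec = vectorizer.transform([question])

# Rank passages by similarity to the question and keep the top k.
scores = cosine_similarity(q_vec, doc_vecs)[0]
top_k = sorted(range(len(corpus)), key=lambda i: scores[i], reverse=True)[:2]
context = "\n".join(corpus[i] for i in top_k)

# The retrieved context is then fed to any generative LM; here we only
# assemble the prompt such a model would be conditioned on.
prompt = (
    "Answer the question using the context.\n"
    f"Context:\n{context}\nQuestion: {question}\nAnswer:"
)
print(prompt)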
Author:
Yuan, Jiaqing, Pan, Lin, Hang, Chung-Wei, Guo, Jiang, Jiang, Jiarong, Min, Bonan, Ng, Patrick, Wang, Zhiguo
Large language models (LLMs) have shown remarkable performance on a variety of NLP tasks, and are being rapidly adopted in a wide range of use cases. It is therefore of vital importance to holistically evaluate the factuality of their generated output…
External link:
http://arxiv.org/abs/2404.16164
Author:
Wang, Fei, Shang, Chao, Jain, Sarthak, Wang, Shuai, Ning, Qiang, Min, Bonan, Castelli, Vittorio, Benajiba, Yassine, Roth, Dan
User alignment is crucial for adapting general-purpose language models (LMs) to downstream tasks, but human annotations are often not available for all types of instructions, especially those with customized constraints. We observe that user instructions…
External link:
http://arxiv.org/abs/2403.06326
Author:
Fujinuma, Yoshinari, Varia, Siddharth, Sankaran, Nishant, Appalaraju, Srikar, Min, Bonan, Vyas, Yogarshi
Document image classification is different from plain-text document classification and consists of classifying a document by understanding the content and structure of documents such as forms, emails, and other such documents. We show that the only…
External link:
http://arxiv.org/abs/2310.16356
Author:
Li, Alexander Hanbo, Shang, Mingyue, Spiliopoulou, Evangelia, Ma, Jie, Ng, Patrick, Wang, Zhiguo, Min, Bonan, Wang, William, McKeown, Kathleen, Castelli, Vittorio, Roth, Dan, Xiang, Bing
We present a novel approach for structured data-to-text generation that addresses the limitations of existing methods that primarily focus on specific types of structured data. Our proposed method aims to improve performance in multi-task training…
External link:
http://arxiv.org/abs/2308.05317
Recent work has shown that NLP tasks such as Relation Extraction (RE) can be recast as Textual Entailment tasks using verbalizations, with strong performance in zero-shot and few-shot settings thanks to pre-trained entailment models. The fact that… (a minimal entailment-based sketch of this recasting follows the link below)
External link:
http://arxiv.org/abs/2205.01376
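As a rough illustration of the recasting described in this record, the sketch below scores verbalized relation hypotheses against a sentence with an off-the-shelf NLI model via Hugging Face's zero-shot-classification pipeline. The model name, the example sentence, and the verbalization templates are illustrative assumptions, not the paper's exact setup.

# Sketch: zero-shot Relation Extraction recast as Textual Entailment.
# The sentence is the premise; each verbalized relation is a hypothesis
# scored by a pre-trained NLI model. Model and templates are assumptions.
from transformers import pipeline

# Any NLI-style checkpoint works; bart-large-mnli is a common public choice.
nli = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

sentence = "Marie Curie was born in Warsaw."
# Candidate relations between (Marie Curie, Warsaw), verbalized as hypotheses.
verbalizations = {
    "per:place_of_birth": "Marie Curie was born in Warsaw.",
    "per:place_of_death": "Marie Curie died in Warsaw.",
    "per:employee_of": "Marie Curie worked for Warsaw.",
}

# hypothesis_template="{}" makes the pipeline use each verbalization verbatim.
result = nli(
    sentence,
    candidate_labels=list(verbalizations.values()),
    hypothesis_template="{}",
)

best = result["labels"][0]  # highest-scoring (most entailed) hypothesis
relation = next(r for r, v in verbalizations.items() if v == best)
print(relation, round(result["scores"][0], 3))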
The current workflow for Information Extraction (IE) analysts involves the definition of the entities/relations of interest and a training corpus with annotated examples. In this demonstration we introduce a new workflow where the analyst directly…
External link:
http://arxiv.org/abs/2203.13602