Showing 1 - 10 of 133
for search: '"Chen, Jifan"'
Author:
Wu, Zhengxuan, Zhang, Yuhao, Qi, Peng, Xu, Yumo, Han, Rujun, Zhang, Yian, Chen, Jifan, Min, Bonan, Huang, Zhiheng
Modern language models (LMs) need to follow human instructions while being faithful; yet, they often fail to achieve both. Here, we provide concrete evidence of a trade-off between instruction following (i.e., following open-ended instructions) and faithfulness…
External link:
http://arxiv.org/abs/2407.21417
The rise of large language models (LLMs) has brought a critical need for high-quality human-labeled data, particularly for processes like human feedback and evaluation. A common practice is to label data via consensus annotation over human judgments.
External link:
http://arxiv.org/abs/2305.14770
Evidence retrieval is a core part of automatic fact-checking. Prior work makes simplifying assumptions in retrieval that depart from real-world use cases: either no access to evidence, access to evidence curated by a human fact-checker, or access to…
External link:
http://arxiv.org/abs/2305.11859
Author:
Chen, Jifan, Zhang, Yuhao, Liu, Lan, Dong, Rui, Chen, Xinchi, Ng, Patrick, Wang, William Yang, Huang, Zhiheng
There has been great progress in unifying various table-to-text tasks using a single encoder-decoder model trained via multi-task learning (Xie et al., 2022). However, existing methods typically encode task information with a simple dataset name as a…
External link:
http://arxiv.org/abs/2212.08780
Author:
Kovatchev, Venelin, Chatterjee, Trina, Govindarajan, Venkata S, Chen, Jifan, Choi, Eunsol, Chronis, Gabriella, Das, Anubrata, Erk, Katrin, Lease, Matthew, Li, Junyi Jessy, Wu, Yating, Mahowald, Kyle
Developing methods to adversarially challenge NLP systems is a promising avenue for improving both model performance and interpretability. Here, we describe the approach of the team "longhorns" on Task 1 of The First Workshop on Dynamic Adversarial…
External link:
http://arxiv.org/abs/2206.14729
Published in:
EMNLP 2022
Verifying complex political claims is a challenging task, especially when politicians use various tactics to subtly misrepresent the facts. Automatic fact-checking systems fall short here, and their predictions like "half-true" are not very useful in…
External link:
http://arxiv.org/abs/2205.06938
Author:
Chen, Jifan, Escoffre, Jean-Michel, Romito, Oliver, Iazourene, Tarik, Presset, Antoine, Roy, Marie, Potier Cartereau, Marie, Vandier, Christophe, Wang, Yahua, Wang, Guowei, Huang, Pintong, Bouakaz, Ayache
Published in:
In Ultrasonics Sonochemistry, February 2024, Vol. 103
To build robust question answering systems, we need the ability to verify whether answers to questions are truly correct, not just "good enough" in the context of imperfect QA datasets. We explore the use of natural language inference (NLI) as a way…
External link:
http://arxiv.org/abs/2104.08731
Author:
Chen, Jifan, Durrett, Greg
Current textual question answering models achieve strong performance on in-domain test sets, but often do so by fitting surface-level patterns in the data, so they fail to generalize to out-of-distribution settings. To make a more robust and understandable…
External link:
http://arxiv.org/abs/2004.14648