Showing 1 - 10 of 95 for search: '"Wang, Lucy Lu"'
Author:
Wen, Bingbing, Yao, Jihan, Feng, Shangbin, Xu, Chenjun, Tsvetkov, Yulia, Howe, Bill, Wang, Lucy Lu
Abstention, the refusal of large language models (LLMs) to provide an answer, is increasingly recognized for its potential to mitigate hallucinations and enhance safety in LLM systems. In this survey, we introduce a framework to examine abstention…
External link:
http://arxiv.org/abs/2407.18418
Author:
Hsu, Chao-Chun, Bransom, Erin, Sparks, Jenna, Kuehl, Bailey, Tan, Chenhao, Wadden, David, Wang, Lucy Lu, Naik, Aakanksha
Literature review requires researchers to synthesize a large amount of information and is increasingly challenging as the scientific literature expands. In this work, we investigate the potential of LLMs for producing hierarchical organizations of…
External link:
http://arxiv.org/abs/2407.16148
Topic pages aggregate useful information about an entity or concept into a single succinct and accessible article. Automated creation of topic pages would enable their rapid curation as information resources, providing an alternative to traditional…
External link:
http://arxiv.org/abs/2405.01796
The correct model response in the face of uncertainty is to abstain from answering a question so as not to mislead the user. In this work, we study the ability of LLMs to abstain from answering context-dependent science questions when provided…
External link:
http://arxiv.org/abs/2404.12452
Published in:
In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24), May 11-16, 2024, Honolulu, HI, USA. ACM, New York, NY, USA
Communicating design implications is common within the HCI community when publishing academic papers, yet these papers are rarely read and used by designers. One solution is to use design cards as a form of translational resource that communicates…
External link:
http://arxiv.org/abs/2403.08137
Ethical frameworks for the use of natural language processing (NLP) are urgently needed to shape how large language models (LLMs) and similar tools are used for healthcare applications. Healthcare faces existing challenges including the balance of…
External link:
http://arxiv.org/abs/2312.11803
Author:
Guo, Yue, Chang, Joseph Chee, Antoniak, Maria, Bransom, Erin, Cohen, Trevor, Wang, Lucy Lu, August, Tal
Scientific jargon can impede researchers when they read materials from other domains. Current methods of jargon identification mainly use corpus-level familiarity indicators (e.g., Simple Wikipedia represents plain language). However, researchers'…
External link:
http://arxiv.org/abs/2311.09481
In recent years, funding agencies and journals increasingly advocate for open science practices (e.g., data and method sharing) to improve the transparency, access, and reproducibility of science. However, quantifying these practices at scale has…
External link:
http://arxiv.org/abs/2310.03193
Author:
Wang, Lucy Lu, Otmakhova, Yulia, DeYoung, Jay, Truong, Thinh Hung, Kuehl, Bailey E., Bransom, Erin, Wallace, Byron C.
Evaluating multi-document summarization (MDS) quality is difficult. This is especially true in the case of MDS for biomedical literature reviews, where models must synthesize contradicting evidence reported across different documents. Prior work has…
External link:
http://arxiv.org/abs/2305.13693
While there has been significant development of models for Plain Language Summarization (PLS), evaluation remains a challenge. PLS lacks a dedicated assessment metric, and the suitability of text generation evaluation metrics is unclear due to the…
External link:
http://arxiv.org/abs/2305.14341