Showing 1 - 10 of 96 results for search: '"Wang, Lucy Lu"'
In the absence of abundant reliable annotations for challenging tasks and contexts, how can we expand the frontier of LLM capabilities with potentially wrong answers? We focus on two research questions: (1) Can LLMs generate reliable preferences among…
External link:
http://arxiv.org/abs/2410.11055
Author:
Kumar, Anukriti, Wang, Lucy Lu
Most scholarly works are distributed online in PDF format, which can present significant accessibility challenges for blind and low-vision readers. To characterize the scope of this issue, we perform a large-scale analysis of 20K open- and closed-access…
External link:
http://arxiv.org/abs/2410.03022
Author:
Wen, Bingbing, Yao, Jihan, Feng, Shangbin, Xu, Chenjun, Tsvetkov, Yulia, Howe, Bill, Wang, Lucy Lu
Abstention, the refusal of large language models (LLMs) to provide an answer, is increasingly recognized for its potential to mitigate hallucinations and enhance safety in LLM systems. In this survey, we introduce a framework to examine abstention from…
External link:
http://arxiv.org/abs/2407.18418
Author:
Hsu, Chao-Chun, Bransom, Erin, Sparks, Jenna, Kuehl, Bailey, Tan, Chenhao, Wadden, David, Wang, Lucy Lu, Naik, Aakanksha
Literature review requires researchers to synthesize a large amount of information and is increasingly challenging as the scientific literature expands. In this work, we investigate the potential of LLMs for producing hierarchical organizations of scientific…
External link:
http://arxiv.org/abs/2407.16148
Topic pages aggregate useful information about an entity or concept into a single succinct and accessible article. Automated creation of topic pages would enable their rapid curation as information resources, providing an alternative to traditional…
External link:
http://arxiv.org/abs/2405.01796
The correct model response in the face of uncertainty is to abstain from answering a question so as not to mislead the user. In this work, we study the ability of LLMs to abstain from answering context-dependent science questions when provided insufficient…
External link:
http://arxiv.org/abs/2404.12452
Published in:
In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24), May 11-16, 2024, Honolulu, HI, USA. ACM, New York, NY, USA
Communicating design implications is common within the HCI community when publishing academic papers, yet these papers are rarely read and used by designers. One solution is to use design cards as a form of translational resource that communicates…
External link:
http://arxiv.org/abs/2403.08137
Ethical frameworks for the use of natural language processing (NLP) are urgently needed to shape how large language models (LLMs) and similar tools are used for healthcare applications. Healthcare faces existing challenges, including the balance of…
External link:
http://arxiv.org/abs/2312.11803
Author:
Guo, Yue, Chang, Joseph Chee, Antoniak, Maria, Bransom, Erin, Cohen, Trevor, Wang, Lucy Lu, August, Tal
Scientific jargon can impede researchers when they read materials from other domains. Current methods of jargon identification mainly use corpus-level familiarity indicators (e.g., Simple Wikipedia represents plain language). However, researchers' familiarity…
External link:
http://arxiv.org/abs/2311.09481
In recent years, funding agencies and journals have increasingly advocated for open science practices (e.g., data and method sharing) to improve the transparency, access, and reproducibility of science. However, quantifying these practices at scale has proven…
External link:
http://arxiv.org/abs/2310.03193