Showing 1 - 10 of 47 results for search: '"Dugan, Casey"'
Author:
Ashktorab, Zahra, Pan, Qian, Geyer, Werner, Desmond, Michael, Danilevsky, Marina, Johnson, James M., Dugan, Casey, Brachman, Michelle
In this paper, we investigate the impact of hallucinations and cognitive forcing functions in human-AI collaborative text generation tasks, focusing on the use of Large Language Models (LLMs) to assist in generating high-quality conversational data.
External link:
http://arxiv.org/abs/2409.08937
Author:
Do, Hyo Jin, Ostrand, Rachel, Weisz, Justin D., Dugan, Casey, Sattigeri, Prasanna, Wei, Dennis, Murugesan, Keerthiram, Geyer, Werner
While humans increasingly rely on large language models (LLMs), these models are susceptible to generating inaccurate or false information, also known as "hallucinations". Technical advancements have been made in algorithms that detect hallucinated content b
External link:
http://arxiv.org/abs/2405.20434
Author:
Hsu, Shang-Ling, Shah, Raj Sanjay, Senthil, Prathik, Ashktorab, Zahra, Dugan, Casey, Geyer, Werner, Yang, Diyi
Millions of users come to online peer counseling platforms to seek support on diverse topics ranging from relationship stress to anxiety. However, studies show that online peer support groups are not always as effective as expected largely due to use
External link:
http://arxiv.org/abs/2305.08982
Author:
Ashktorab, Zahra, Hoover, Benjamin, Agarwal, Mayank, Dugan, Casey, Geyer, Werner, Yang, Hao Bang, Yurochkin, Mikhail
Mitigating algorithmic bias is a critical task in the development and deployment of machine learning models. While several toolkits exist to aid machine learning practitioners in addressing fairness issues, little is known about the strategies practi
External link:
http://arxiv.org/abs/2303.00673
Author:
Desmond, Michael, Ashktorab, Zahra, Brachman, Michelle, Brimijoin, Kristina, Duesterwald, Evelyn, Dugan, Casey, Finegan-Dollak, Catherine, Muller, Michael, Joshi, Narendra Nath, Pan, Qian, Sharma, Aabhas
Labeling data is an important step in the supervised machine learning lifecycle. It is a laborious human activity comprised of repeated decision making: the human labeler decides which of several potential labels to apply to each example. Prior work
External link:
http://arxiv.org/abs/2104.04122
Author:
Wang, April Yi, Wang, Dakuo, Drozdal, Jaimie, Muller, Michael, Park, Soya, Weisz, Justin D., Liu, Xuye, Wu, Lingfei, Dugan, Casey
Published in:
ACM Trans. Comput.-Hum. Interact. 29, 2, Article 17 (April 2022), 33 pages
Computational notebooks allow data scientists to express their ideas through a combination of code and documentation. However, data scientists often pay attention only to the code, and neglect creating or updating their documentation during quick ite
External link:
http://arxiv.org/abs/2102.12592
Data science (DS) projects often follow a lifecycle that consists of laborious tasks for data scientists and domain experts (e.g., data exploration, model training, etc.). Only recently have machine learning (ML) researchers developed promising
External link:
http://arxiv.org/abs/2101.05273
Author:
Mao, Yaoli, Wang, Dakuo, Muller, Michael, Varshney, Kush R., Baldini, Ioana, Dugan, Casey, Mojsilović, Aleksandra
In recent years there has been an increasing trend in which data scientists and domain experts work together to tackle complex scientific questions. However, such collaborations often face challenges. In this paper, we aim to decipher this collaborat
External link:
http://arxiv.org/abs/1909.03486
Author:
Wang, Dakuo, Weisz, Justin D., Muller, Michael, Ram, Parikshit, Geyer, Werner, Dugan, Casey, Tausczik, Yla, Samulowitz, Horst, Gray, Alexander
The rapid advancement of artificial intelligence (AI) is changing our lives in many ways. One application domain is data science. New techniques in automating the creation of AI, known as AutoAI or AutoML, aim to automate the work practices of data s
External link:
http://arxiv.org/abs/1909.02309
Ensuring fairness of machine learning systems is a human-in-the-loop process. It relies on developers, users, and the general public to identify fairness problems and make improvements. To facilitate the process we need effective, unbiased, and user-
External link:
http://arxiv.org/abs/1901.07694