Showing 1 - 10 of 47 results for the search: '"Jennifer Wortman Vaughan"'
Author:
Charvi Rastogi, Ivan Stelmakh, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, Jennifer Wortman Vaughan, Zhenyu Xue, Hal Daumé III, Emma Pierson, Nihar B. Shah
Published in:
PLoS ONE, Vol 19, Iss 4, p e0300710 (2024)
How do author perceptions match up to the outcomes of the peer-review process and perceptions of others? In a top-tier computer science conference (NeurIPS 2021) with more than 23,000 submitting authors and 9,000 submitted papers, we surveyed the authors …
External link:
https://doaj.org/article/71cb7dcdf65a420d86a4fd1696eb9441
Published in:
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems.
Machine learning (ML) recourse techniques are increasingly used in high-stakes domains, providing end users with actions to alter ML predictions, but they assume ML developers understand what input variables can be changed. However, a recourse plan's …
Published in:
Management Science, 63 (3)
We initiate the study of incentive-compatible forecasting competitions in which multiple forecasters make predictions about one or more events and compete for a single prize. We have two objectives: (1) to incentivize forecasters to report truthfully …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::6ea10490226f3f380afeed3437281483
https://hdl.handle.net/20.500.11850/559971
Despite the widespread use of artificial intelligence (AI), designing user experiences (UX) for AI-powered systems remains challenging. UX designers face hurdles understanding AI technologies, such as pre-trained language models, as design materials.
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::427e360193db2e6a5aac7f48f7c2002a
Author:
Hanna Wallach, Jennifer Wortman Vaughan, Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Kate Crawford, Hal Daumé
Published in:
Communications of the ACM. 64:86-92
The machine learning community currently has no standardized process for documenting datasets, which can lead to severe consequences in high-stakes domains. To address this gap, we propose datasheets for datasets. In the electronics industry, every component …
Data is central to the development and evaluation of machine learning (ML) models. However, the use of problematic or inappropriate datasets can result in harms when the resulting models are deployed. To encourage responsible AI practice through more …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::61584b4b9ae468b37f1a43cd15e9575c
http://arxiv.org/abs/2206.02923
Author:
Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé, Kate Crawford
Published in:
Ethics of Data and Analytics ISBN: 9781003278290
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::c145631803512c99d19d48de7dbaced8
https://doi.org/10.1201/9781003278290-23
Author:
Benjamin Fish, Jennifer Wortman Vaughan, Luke Stark, Forough Poursabzi-Sangdeh, Kate Crawford, Asia J. Biega, Alexandra Olteanu, Brent Hecht, Miroslav Dudík, Margarita Boyarskaya, Hanna Wallach, Marion Zepf, Hal Daumé, Mary L. Gray, Solon Barocas
Published in:
Communications of the ACM. 64:30-32
The COVID-19 pandemic has both created and exacerbated a series of cascading and interrelated crises whose impacts continue to reverberate. From the immediate effects on people's health to the pressures on healthcare systems and mass unemployment, …
Author:
Zijie J. Wang, Alex Kale, Harsha Nori, Peter Stella, Mark E. Nunnally, Duen Horng Chau, Mihaela Vorvoreanu, Jennifer Wortman Vaughan, Rich Caruana
Machine learning (ML) interpretability techniques can reveal undesirable patterns in data that models exploit to make predictions--potentially causing harms once deployed. However, how to take action to address these patterns is not always clear. …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::b8e96183869b4a40fc870bce5dd0f96d
Published in:
Machines We Trust ISBN: 9780262366212
To build machine learning systems that are reliable, trustworthy, and fair, we must be able to provide relevant stakeholders with an understanding of how these systems work. Yet what makes a system “intelligible” is difficult to pin down. …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::625024dee9f97c8327d450ed98f1efef
https://doi.org/10.7551/mitpress/12186.003.0014