Showing 1 - 10 of 13 results for search: '"Buçinca, Zana"'
Author:
Buçinca, Zana, Swaroop, Siddharth, Paluch, Amanda E., Doshi-Velez, Finale, Gajos, Krzysztof Z.
People's decision-making abilities often fail to improve or may even erode when they rely on AI for decision-support, even when the AI provides informative explanations. We argue this is partly because people intuitively seek contrastive explanations
External link:
http://arxiv.org/abs/2410.04253
Numerous approaches have been recently proposed for learning fair representations that mitigate unfair outcomes in prediction tasks. A key motivation for these methods is that the representations can be used by third parties with unknown objectives.
External link:
http://arxiv.org/abs/2406.16698
Imagine if AI decision-support tools not only complemented our ability to make accurate decisions, but also improved our skills, boosted collaboration, and elevated the joy we derive from our tasks. Despite the potential to optimize a broad spectrum
External link:
http://arxiv.org/abs/2403.05911
In settings where users both need high accuracy and are time-pressured, such as doctors working in emergency rooms, we want to provide AI assistance that both increases decision accuracy and reduces decision-making time. Current literature focuses o
External link:
http://arxiv.org/abs/2306.07458
Author:
Buçinca, Zana, Pham, Chau Minh, Jakesch, Maurice, Ribeiro, Marco Tulio, Olteanu, Alexandra, Amershi, Saleema
While demands for change and accountability for harmful AI consequences mount, foreseeing the downstream effects of deploying AI systems remains a challenging task. We developed AHA! (Anticipating Harms of AI), a generative framework to assist AI pra
External link:
http://arxiv.org/abs/2306.03280
Published in:
2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT '22), June 21-24, 2022, Seoul, Republic of Korea
Private companies, public sector organizations, and academic groups have outlined ethical values they consider important for responsible artificial intelligence technologies. While their recommendations converge on a set of central values, little is
External link:
http://arxiv.org/abs/2205.07722
People supported by AI-powered decision support tools frequently overrely on the AI: they accept an AI's suggestion even when that suggestion is wrong. Adding explanations to the AI decisions does not appear to reduce the overreliance, and some studie
External link:
http://arxiv.org/abs/2102.09692
Due to its expressivity, natural language is paramount for explicit and implicit affective state communication among humans. The same linguistic inquiry (e.g., How are you?) might induce responses with different affects depending on the affective sta
External link:
http://arxiv.org/abs/2012.06847
Explainable artificially intelligent (XAI) systems form part of sociotechnical systems, e.g., human+AI teams tasked with making decisions. Yet current XAI systems are rarely evaluated by measuring the performance of human+AI teams on actual decision
External link:
http://arxiv.org/abs/2001.08298
In settings where users are both time-pressured and need high accuracy, such as doctors working in Emergency Rooms, we want to provide AI assistance that both increases accuracy and reduces time. However, different types of AI assistance have differe
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::7f2766e3e3c28f20d95119f119e4c5b6
http://arxiv.org/abs/2306.07458