Showing 1 - 10 of 254 for search: '"Madaio, Michael P"'
Author:
Constantinides, Marios, Tahaei, Mohammad, Quercia, Daniele, Stumpf, Simone, Madaio, Michael, Kennedy, Sean, Wilcox, Lauren, Vitak, Jessica, Cramer, Henriette, Bogucka, Edyta, Baeza-Yates, Ricardo, Luger, Ewa, Holbrook, Jess, Muller, Michael, Blumenfeld, Ilana Golbin, Pistilli, Giada
With the upcoming AI regulations (e.g., the EU AI Act) and rapid advancements in generative AI, new challenges emerge in the area of Human-Centered Responsible Artificial Intelligence (HCR-AI). As AI becomes more ubiquitous, questions around decision-making …
External link:
http://arxiv.org/abs/2403.00148
Prompt-based interfaces for Large Language Models (LLMs) have made prototyping and building AI-powered applications easier than ever before. However, identifying potential harms that may arise from AI applications remains a challenge, particularly …
External link:
http://arxiv.org/abs/2402.15350
Responsible design of AI systems is a shared goal across the HCI and AI communities. Responsible AI (RAI) tools have been developed to support practitioners in identifying, assessing, and mitigating ethical issues during AI development. These tools take many forms …
External link:
http://arxiv.org/abs/2401.17486
Despite the growing consensus that stakeholders affected by AI systems should participate in their design, enormous variation and implicit disagreements exist among current approaches. For researchers and practitioners who are interested in taking a …
External link:
http://arxiv.org/abs/2310.00907
Author:
Diaz, Fernando, Madaio, Michael
Recent work has advocated for training AI models on ever-larger datasets, arguing that as the size of a dataset increases, the performance of a model trained on that dataset will correspondingly increase (referred to as "scaling laws"). In this paper …
External link:
http://arxiv.org/abs/2307.03201
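As a point of reference for the premise this paper critiques, empirical scaling laws are typically reported as a power-law fit of test loss against dataset size. The form below is the widely cited formulation from the scaling-laws literature, not an equation taken from this abstract:

L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}

where L is the model's test loss, D is the number of training examples, and D_c and \alpha_D are fitted constants, so loss is predicted to fall smoothly as D grows.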
Author:
Deng, Wesley Hanwen, Yildirim, Nur, Chang, Monica, Eslami, Motahhare, Holstein, Ken, Madaio, Michael
An emerging body of research indicates that ineffective cross-functional collaboration -- the interdisciplinary work done by industry practitioners across roles -- represents a major barrier to addressing issues of fairness in AI design and development …
External link:
http://arxiv.org/abs/2306.06542
Fairlearn is an open source project to help practitioners assess and improve the fairness of artificial intelligence (AI) systems. The associated Python library, also named fairlearn, supports evaluation of a model's output across affected populations and …
External link:
http://arxiv.org/abs/2303.16626
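A minimal sketch of the kind of disaggregated, group-wise evaluation the fairlearn library supports; the synthetic dataset and the randomly generated sensitive feature below are illustrative placeholders, not material from the paper:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from fairlearn.metrics import MetricFrame, demographic_parity_difference

# Synthetic data with a made-up binary sensitive feature, purely for illustration.
X, y = make_classification(n_samples=1000, random_state=0)
sensitive = np.random.default_rng(0).integers(0, 2, size=len(y))
X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(X, y, sensitive, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
y_pred = model.predict(X_te)

# Disaggregate accuracy by group and report the largest between-group gap.
frame = MetricFrame(metrics=accuracy_score, y_true=y_te, y_pred=y_pred,
                    sensitive_features=s_te)
print(frame.by_group)        # accuracy per group of the sensitive feature
print(frame.difference())    # maximum accuracy difference across groups

# Selection-rate disparity (demographic parity difference) for the same groups.
print(demographic_parity_difference(y_te, y_pred, sensitive_features=s_te))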
Author:
Tahaei, Mohammad, Constantinides, Marios, Quercia, Daniele, Kennedy, Sean, Muller, Michael, Stumpf, Simone, Liao, Q. Vera, Baeza-Yates, Ricardo, Aroyo, Lora, Holbrook, Jess, Luger, Ewa, Madaio, Michael, Blumenfeld, Ilana Golbin, De-Arteaga, Maria, Vitak, Jessica, Olteanu, Alexandra
In recent years, the CHI community has seen significant growth in research on Human-Centered Responsible Artificial Intelligence. While different research communities may use different terminology to discuss similar topics, all of this work is ultimately …
External link:
http://arxiv.org/abs/2302.08157
Numerous toolkits have been developed to support ethical AI development. However, toolkits, like all tools, encode assumptions in their design about what work should be done and how. In this paper, we conduct a qualitative analysis of 27 AI ethics toolkits …
External link:
http://arxiv.org/abs/2202.08792
Assessing the Fairness of AI Systems: AI Practitioners' Processes, Challenges, and Needs for Support
Author:
Madaio, Michael, Egede, Lisa, Subramonyam, Hariharan, Vaughan, Jennifer Wortman, Wallach, Hanna
Various tools and practices have been developed to support practitioners in identifying, assessing, and mitigating fairness-related harms caused by AI systems. However, prior research has highlighted gaps between the intended design of these tools and …
External link:
http://arxiv.org/abs/2112.05675