Showing 1 - 10 of 10 for the search: '"Stapleton, Logan"'
The field of digital mental health is advancing at a rapid pace. Passively collected data from user engagements with digital tools and services continue to contribute new insights into mental health and illness. As the field of digital mental health…
External link:
http://arxiv.org/abs/2404.14548
Large generative AI models (GMs) like GPT and DALL-E are trained to generate content for general, wide-ranging purposes. GM content filters are generalized to filter out content which has a risk of harm in many cases, e.g., hate speech. However, proh…
External link:
http://arxiv.org/abs/2306.03097
Author:
Stapleton, Logan, Lee, Min Hun, Qing, Diana, Wright, Marya, Chouldechova, Alexandra, Holstein, Kenneth, Wu, Zhiwei Steven, Zhu, Haiyi
Child welfare agencies across the United States are turning to data-driven predictive technologies (commonly called predictive analytics) which use government administrative data to assist workers' decision-making. While some prior work has explored…
External link:
http://arxiv.org/abs/2205.08928
Author:
Stapleton, Logan, Cheng, Hao-Fei, Kawakami, Anna, Sivaraman, Venkatesh, Cheng, Yanghuidi, Qing, Diana, Perer, Adam, Holstein, Kenneth, Wu, Zhiwei Steven, Zhu, Haiyi
This is an extended analysis of our paper "How Child Welfare Workers Reduce Racial Disparities in Algorithmic Decisions," which looks at racial disparities in the Allegheny Family Screening Tool, an algorithm used to help child welfare workers decide…
External link:
http://arxiv.org/abs/2204.13872
Author:
Akpinar, Nil-Jana, Nagireddy, Manish, Stapleton, Logan, Cheng, Hao-Fei, Zhu, Haiyi, Wu, Steven, Heidari, Hoda
Motivated by the growing importance of reducing unfairness in ML predictions, Fair-ML researchers have presented an extensive suite of algorithmic 'fairness-enhancing' remedies. Most existing algorithms, however, are agnostic to the sources of the ob…
External link:
http://arxiv.org/abs/2204.10233
Author:
Kawakami, Anna, Sivaraman, Venkatesh, Cheng, Hao-Fei, Stapleton, Logan, Cheng, Yanghuidi, Qing, Diana, Perer, Adam, Wu, Zhiwei Steven, Zhu, Haiyi, Holstein, Kenneth
AI-based decision support tools (ADS) are increasingly used to augment human decision-making in high-stakes, social contexts. As public sector agencies begin to adopt ADS, it is critical that we understand workers' experiences with these systems in p…
External link:
http://arxiv.org/abs/2204.02310
Randomized experiments can be susceptible to selection bias due to potential non-compliance by the participants. While much of the existing work has studied compliance as a static behavior, we propose a game-theoretic model to study compliance as dyn…
External link:
http://arxiv.org/abs/2107.10093
Strategic Instrumental Variable Regression: Recovering Causal Relationships From Strategic Responses
In settings where Machine Learning (ML) algorithms automate or inform consequential decisions about people, individual decision subjects are often incentivized to strategically modify their observable attributes to receive more favorable predictions.
External link:
http://arxiv.org/abs/2107.05762
Author:
Cheng, Hao-Fei, Stapleton, Logan, Wang, Ruiqi, Bullock, Paige, Chouldechova, Alexandra, Wu, Zhiwei Steven, Zhu, Haiyi
Recent work in fair machine learning has proposed dozens of technical definitions of algorithmic fairness and methods for enforcing these definitions. However, we still lack an understanding of how to develop machine learning systems with fairness cr…
External link:
http://arxiv.org/abs/2102.01196
Author:
Jung, Christopher, Kearns, Michael, Neel, Seth, Roth, Aaron, Stapleton, Logan, Wu, Zhiwei Steven
We consider settings in which the right notion of fairness is not captured by simple mathematical definitions (such as equality of error rates across groups), but might be more complex and nuanced and thus require elicitation from individual or colle…
External link:
http://arxiv.org/abs/1905.10660