Showing 1 - 10 of 24 results for search '"Maria De-Arteaga"'
Author: Sina Fazelpour, Maria De-Arteaga
Published in: Big Data & Society, Vol 9 (2022)
There has been a surge of recent interest in sociocultural diversity in machine learning research. Currently, however, there is a gap between discussions of measures and benefits of diversity in machine learning, on the one hand, and the broader rese…
External link: https://doaj.org/article/17190a0a04314fc08ac35aebbac13586
Published in: Resuscitation Plus, Vol 8, p 100185 (2021)
Background: We explored sex-based differences in discharge location after resuscitation from cardiac arrest. Methods: We performed a single-center retrospective cohort study including patients hospitalized after resuscitation from cardiac arrest from…
External link: https://doaj.org/article/d95d6e12483849788d0edce2baf25252
Author: Maria De-Arteaga, Jieshi Chen, Peter Huggins, Jonathan Elmer, Gilles Clermont, Artur Dubrawski
Published in: PLoS ONE, Vol 14, Iss 1, p e0210966 (2019)
Early prediction of the potential for neurological recovery after resuscitation from cardiac arrest is difficult but important. Currently, no clinical finding or combination of findings is sufficient to accurately predict or preclude favorable recov…
External link: https://doaj.org/article/ae385a0a879d437797211fbabf3ba23d
Published in: Proceedings of the ACM on Human-Computer Interaction, 7:1-20
In many real-world contexts, successful human-AI collaboration requires humans to productively integrate complementary sources of information into AI-informed decisions. However, in practice human decision-makers often lack understanding of what info…
Published in: Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 10:133-146
An increased awareness concerning risks of algorithmic bias has driven a surge of efforts around bias mitigation strategies. A vast majority of the proposed approaches fall under one of two categories: (1) imposing algorithmic fairness constraints on…
Published in: Production and Operations Management, 31:3749-3770
Published in: Proceedings of the ACM Web Conference 2023
Algorithmic bias often arises as a result of differential subgroup validity, in which predictive relationships vary across groups. For example, in toxic language detection, comments targeting different demographic groups can vary markedly across grou…
Author: Mohammad Tahaei, Marios Constantinides, Daniele Quercia, Sean Kennedy, Michael Muller, Simone Stumpf, Q. Vera Liao, Ricardo Baeza-Yates, Lora Aroyo, Jess Holbrook, Ewa Luger, Michael Madaio, Ilana Golbin Blumenfeld, Maria De-Arteaga, Jessica Vitak, Alexandra Olteanu
In recent years, the CHI community has seen significant growth in research on Human-Centered Responsible Artificial Intelligence. While different research communities may use different terminology to discuss similar topics, all of this work is ultima…
External link: https://explore.openaire.eu/search/publication?articleId=doi_dedup___::63f6c66f6f8d314dec26ab5d9ccdf953
http://arxiv.org/abs/2302.08157
Published in: Data Mining and Knowledge Discovery
Many modern machine learning algorithms mitigate bias by enforcing fairness constraints across coarsely defined groups related to a sensitive attribute such as gender or race. However, these algorithms seldom account for within-group heterogeneity and b…
Published in: SSRN Electronic Journal