Showing 1 - 10 of 20 results for search: '"Andrews, Jerone T. A."'
Author:
Zarlenga, Mateo Espinosa, Sankaranarayanan, Swami, Andrews, Jerone T. A., Shams, Zohreh, Jamnik, Mateja, Xiang, Alice
Deep neural networks trained via empirical risk minimisation often exhibit significant performance disparities across groups, particularly when group and task labels are spuriously correlated (e.g., "grassy background" and "cows"). Existing bias mitigation…
External link:
http://arxiv.org/abs/2409.17691
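The disparity described in this abstract is commonly quantified as worst-group accuracy. Below is a minimal sketch, assuming predictions, labels, and (label, spurious attribute) group indices are available as NumPy arrays; all variable names are illustrative, not taken from the paper.

```python
import numpy as np

def worst_group_accuracy(preds, labels, groups):
    """Per-group accuracy and the worst group. Groups encode
    (task label, spurious attribute) pairs, e.g. (cow, grass)
    vs. (cow, beach)."""
    accs = {}
    for g in np.unique(groups):
        mask = groups == g
        accs[int(g)] = float((preds[mask] == labels[mask]).mean())
    return min(accs.values()), accs

# Toy example: a classifier that latched onto the background does
# well on majority groups but fails on minority ones.
labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])   # 1 = cow, 0 = camel
groups = np.array([0, 0, 0, 1, 2, 2, 2, 3])   # (label, background) index
preds  = np.array([1, 1, 1, 0, 0, 0, 0, 1])   # wrong on minority groups
worst, per_group = worst_group_accuracy(preds, labels, groups)
print(worst, per_group)                        # worst-group accuracy is 0.0
```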
Machine learning (ML) datasets, often perceived as neutral, inherently encapsulate abstract and disputed social constructs. Dataset curators frequently employ value-laden terms such as diversity, bias, and quality to characterize datasets…
External link:
http://arxiv.org/abs/2407.08188
Author:
Hirota, Yusuke, Andrews, Jerone T. A., Zhao, Dora, Papakyriakopoulos, Orestis, Modas, Apostolos, Nakashima, Yuta, Xiang, Alice
We tackle societal bias in image-text datasets by removing spurious correlations between protected groups and image attributes. Traditional methods only target labeled attributes, ignoring biases from unlabeled ones. Using text-guided inpainting models…
External link:
http://arxiv.org/abs/2407.03623
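The abstract above names text-guided inpainting as the editing mechanism. The paper's exact pipeline is not given here; as a hedged illustration only, a publicly available diffusion inpainting model can repaint a masked region from a text prompt. The checkpoint name, file paths, and prompt below are assumptions of this sketch, not the authors' setup.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Public inpainting checkpoint used purely for illustration.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

# Hypothetical input files: the image and a mask over the region
# carrying the spurious attribute (white pixels get repainted).
image = Image.open("sample.png").convert("RGB").resize((512, 512))
mask = Image.open("attribute_mask.png").convert("L").resize((512, 512))

# Repaint the masked region with neutral content, decoupling the
# protected group from the image attribute.
edited = pipe(prompt="a plain neutral background",
              image=image, mask_image=mask).images[0]
edited.save("sample_debiased.png")
```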
Author:
Zhao, Dora, Scheuerman, Morgan Klaus, Chitre, Pooja, Andrews, Jerone T. A., Panagiotidou, Georgia, Walker, Shawn, Pine, Kathleen H., Xiang, Alice
Despite extensive efforts to create fairer machine learning (ML) datasets, there remains a limited understanding of the practical aspects of dataset curation. Drawing from interviews with 30 ML dataset curators, we present a comprehensive taxonomy of…
External link:
http://arxiv.org/abs/2406.06407
Few datasets contain self-identified sensitive attributes, inferring attributes risks introducing additional biases, and collecting attributes can carry legal risks. Besides, categorical labels can fail to reflect the continuous nature of human phenotypic diversity…
External link:
http://arxiv.org/abs/2303.17176
Author:
Andrews, Jerone T. A., Zhao, Dora, Thong, William, Modas, Apostolos, Papakyriakopoulos, Orestis, Xiang, Alice
Human-centric computer vision (HCCV) data curation practices often neglect privacy and bias concerns, leading to dataset retractions and unfair models. HCCV datasets constructed through nonconsensual web scraping lack crucial metadata for comprehensive…
External link:
http://arxiv.org/abs/2302.03629
As computer vision systems become more widely deployed, there is increasing concern from both the research community and the public that these systems are not only reproducing but amplifying harmful social biases. The phenomenon of bias amplification…
External link:
http://arxiv.org/abs/2210.11924
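Bias amplification is typically measured by comparing attribute-task co-occurrence rates in model predictions against those in the training data. The sketch below is a simplified single-attribute version of that idea, not the paper's exact multi-attribute metric; all names are illustrative.

```python
import numpy as np

def bias_amplification(attr, y_true, y_pred):
    """For each attribute value, how much more often the model
    predicts the task label than it actually co-occurs in the
    data; averaged over attribute values (simplified sketch)."""
    amps = []
    for a in np.unique(attr):
        m = attr == a
        amps.append(y_pred[m].mean() - y_true[m].mean())
    return float(np.mean(amps))

# Toy example: for attribute 1 the task co-occurs 60% of the time
# in the data, but the model predicts it 80% of the time.
attr   = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
y_true = np.array([0, 0, 0, 1, 1, 0, 0, 1, 1, 1])
y_pred = np.array([0, 0, 0, 1, 1, 0, 1, 1, 1, 1])
print(bias_amplification(attr, y_true, y_pred))  # 0.1 -> amplified
```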
Donations to charity-based crowdfunding environments have been on the rise in the last few years. Unsurprisingly, deception and fraud in such platforms have also increased, but have not been thoroughly studied to understand what characteristics can…
External link:
http://arxiv.org/abs/2006.16849
The model of camera that was used to capture a particular photographic image (model attribution) is typically inferred from high-frequency model-specific artifacts present within the image. Model anonymization is the process of transforming these artifacts…
External link:
http://arxiv.org/abs/2002.07798
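Model attribution usually starts from a high-pass noise residual in which camera-specific processing artifacts dominate. A minimal sketch, assuming a grayscale image array and using a Gaussian filter as a stand-in for the stronger denoisers used in practice:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img, sigma=1.5):
    """High-pass residual for camera-model attribution: subtracting
    a low-pass (denoised) version of the image leaves the
    high-frequency, model-specific processing artifacts. A Gaussian
    denoiser is an assumption of this sketch."""
    img = img.astype(np.float32)
    return img - gaussian_filter(img, sigma=sigma)

# Attribution classifiers are trained on such residuals; anonymization
# perturbs the image so its residual no longer matches the source
# camera's artifact signature.
gray = np.random.rand(64, 64).astype(np.float32)  # stand-in image
print(noise_residual(gray).shape)
```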
Facial verification systems are vulnerable to poisoning attacks that make use of multiple-identity images (MIIs): face images stored in a database that resemble multiple persons, such that novel images of any of the constituent persons are verified…
External link:
http://arxiv.org/abs/1906.08507
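Why an MII poisons verification is easiest to see in embedding space: an image whose embedding sits between two identities can clear the similarity threshold against both. A toy sketch with hypothetical 2-D embeddings and an assumed threshold of 0.6 (real systems use high-dimensional face embeddings and tuned thresholds):

```python
import numpy as np

def verifies(emb_a, emb_b, threshold=0.6):
    """Cosine-similarity face verification; the threshold is an
    assumption of this sketch."""
    sim = emb_a @ emb_b / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b))
    return sim >= threshold

# An MII embedding near the midpoint of two identities verifies
# against novel images of either person.
alice = np.array([1.0, 0.0])
bob   = np.array([0.0, 1.0])
mii   = (alice + bob) / np.linalg.norm(alice + bob)
print(verifies(mii, alice), verifies(mii, bob))  # True True
```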