Showing 1 - 10 of 6,596 results for search: '"bias mitigation"'
With fairness concerns gaining significant attention in Machine Learning (ML), several bias mitigation techniques have been proposed, often compared against each other to find the best method. These benchmarking efforts tend to use a common setup for…
External link:
http://arxiv.org/abs/2411.11101
Author:
Hickman, Louis, Huynh, Christopher, Gass, Jessica, Booth, Brandon, Kuruzovich, Jason, Tay, Louis
Machine learning (ML) models are increasingly used for personnel assessment and selection (e.g., resume screeners, automatically scored interviews). However, concerns have been raised throughout society that ML assessments may be biased and perpetuate…
External link:
http://arxiv.org/abs/2410.19003
Author:
Joy, Sajib Kumar Saha, Mahy, Arman Hassan, Sultana, Meherin, Abha, Azizah Mamun, Ahmmed, MD Piyal, Dong, Yue, Shahariar, G M
In this study, we investigate gender bias in Bangla pretrained language models, a largely underexplored area in low-resource languages. To assess this bias, we applied gender-name swapping techniques to existing datasets, creating four manually annotated…
External link:
http://arxiv.org/abs/2411.10636
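The gender-name swapping technique mentioned in the abstract above can be sketched as a simple token-substitution pass over a sentence. The English name and pronoun pairs below are illustrative assumptions only; the paper itself works on Bangla with curated lists.

```python
import re

# Hypothetical swap pairs (illustrative, not from the paper); real studies
# use curated, language-specific name and pronoun lists.
PAIRS = {"john": "mary", "mary": "john",
         "he": "she", "she": "he",
         "his": "her", "her": "his"}

def swap_gender_terms(sentence: str) -> str:
    """Replace each gendered token with its counterpart, preserving capitalization."""
    def repl(m):
        word = m.group(0)
        swapped = PAIRS[word.lower()]
        return swapped.capitalize() if word[0].isupper() else swapped
    pattern = r"\b(" + "|".join(PAIRS) + r")\b"
    return re.sub(pattern, repl, sentence, flags=re.IGNORECASE)

# Comparing model outputs on the original and swapped sentence pairs is one
# common way to probe for gender bias.
print(swap_gender_terms("John said he lost his keys."))
# → Mary said she lost her keys.
```

Note that naive swapping is lossy (e.g., English "her" maps to both "his" and "him"), which is one reason such datasets are often manually annotated, as the abstract describes.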
As one of the most successful generative models, diffusion models have demonstrated remarkable efficacy in synthesizing high-quality images. These models learn the underlying high-dimensional data distribution in an unsupervised manner. Despite their…
External link:
http://arxiv.org/abs/2412.08480
Although large language models (LLMs) have demonstrated their effectiveness in a wide range of applications, they have also been observed to perpetuate unwanted biases present in the training data, potentially leading to harm for marginalized communities…
External link:
http://arxiv.org/abs/2412.01711
Simplicity bias poses a significant challenge in neural networks, often leading models to favor simpler solutions and inadvertently learn decision rules influenced by spurious correlations. This results in biased models with diminished generalizability…
External link:
http://arxiv.org/abs/2411.00711
Efforts to mitigate bias and enhance fairness in the artificial intelligence (AI) community have predominantly focused on technical solutions. While numerous reviews have addressed bias in AI, this review uniquely focuses on the practical limitations…
External link:
http://arxiv.org/abs/2410.17433
Recent advances in parameter-efficient fine-tuning methods, such as Low Rank Adaptation (LoRA), have gained significant attention for their ability to efficiently adapt large foundational models to various downstream tasks. These methods are appreciated…
External link:
http://arxiv.org/abs/2410.17358
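The Low Rank Adaptation mentioned above freezes the pretrained weight matrix W and trains only a low-rank update B·A. The minimal NumPy sketch below illustrates the standard LoRA formulation with an alpha/r scaling; the dimensions and hyperparameters are assumptions for illustration, not this paper's implementation.

```python
import numpy as np

d_in, d_out, r = 64, 32, 4                  # rank r << min(d_in, d_out)
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init
alpha = 8.0                                 # scaling hyperparameter

def lora_forward(x: np.ndarray) -> np.ndarray:
    """y = W x + (alpha / r) * B (A x); W stays fixed, only A and B train."""
    return W @ x + (alpha / r) * (B @ (A @ x))

# Zero-initialized B means the adapted model starts identical to the base model.
x = rng.standard_normal(d_in)

trainable = A.size + B.size                 # r * (d_in + d_out) = 384 params
full = W.size                               # d_out * d_in = 2048 params
```

The parameter saving is the reason these methods are valued for adapting large foundation models: only r·(d_in + d_out) parameters are updated instead of d_in·d_out per adapted layer.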
Author:
Zarlenga, Mateo Espinosa, Sankaranarayanan, Swami, Andrews, Jerone T. A., Shams, Zohreh, Jamnik, Mateja, Xiang, Alice
Deep neural networks trained via empirical risk minimisation often exhibit significant performance disparities across groups, particularly when group and task labels are spuriously correlated (e.g., "grassy background" and "cows"). Existing bias mitigation…
External link:
http://arxiv.org/abs/2409.17691
Author:
Roy, Amartya, Khanna, Danush, Mahapatra, Devanshu, Vasanthakumar, Das, Avirup, Ghosh, Kripabandhu
This paper tackles the challenge of building robust and generalizable bias mitigation models for language. Recognizing the limitations of existing datasets, we introduce ANUBIS, a novel dataset with 1507 carefully curated sentence pairs encompassing…
External link:
http://arxiv.org/abs/2409.16371