Showing 1 - 4 of 4 for search: '"Ghuge, Shardul"'
Author:
Raza, Shaina, Bamgbose, Oluwanifemi, Ghuge, Shardul, Tavakol, Fatemeh, Reji, Deepak John, Bashir, Syed Raza
Large Language Models (LLMs) have advanced various Natural Language Processing (NLP) tasks, such as text generation and translation, among others. However, these models often generate text that can perpetuate biases. Existing approaches to mitigate…
External link:
http://arxiv.org/abs/2404.01399
The rapid evolution of Large Language Models (LLMs) highlights the necessity for ethical considerations and data integrity in AI development, particularly emphasizing the role of FAIR (Findable, Accessible, Interoperable, Reusable) data principles…
External link:
http://arxiv.org/abs/2401.11033
Despite increasing awareness and research around fake news, there is still a significant need for datasets that specifically target racial slurs and biases within North American political speeches. This is particularly important in the context of upcoming…
External link:
http://arxiv.org/abs/2312.03750
Author:
Raza, Shaina, Bamgbose, Oluwanifemi, Chatrath, Veronica, Ghuge, Shardul, Sidyakin, Yan, Muaad, Abdullah Y
Bias detection in text is crucial for combating the spread of negative stereotypes, misinformation, and biased decision-making. Traditional language models frequently face challenges in generalizing beyond their training data and are typically designed…
External link:
http://arxiv.org/abs/2310.00347