Showing 1 - 10
of 46
for search: '"Masud, Sarah"'
Large Language Models (LLMs) have demonstrated strong performance as knowledge repositories, enabling models to understand user queries and generate accurate and context-aware responses. Extensive evaluation setups have corroborated the positive corr…
External link:
http://arxiv.org/abs/2411.10813
For subjective tasks such as hate detection, where people perceive hate differently, the Large Language Model's (LLM) ability to represent diverse groups is unclear. By including additional context in prompts, we comprehensively analyze LLM's sensiti…
External link:
http://arxiv.org/abs/2410.02657
Deaf and Hard-of-Hearing (DHH) learners face unique challenges in video-based learning due to the complex interplay between visual and auditory information in videos. Traditional approaches to making video content accessible primarily focus on captio…
External link:
http://arxiv.org/abs/2410.00196
Independent fact-checking organizations have emerged as the crusaders to debunk fake news. However, they may not always remain neutral, as they can be selective in the false news they choose to expose and in how they present the information. They can…
External link:
http://arxiv.org/abs/2407.19498
Employing language models to generate explanations for an incoming implicit hate post is an active area of research. The explanation is intended to make explicit the underlying stereotype and aid content moderators. The training often combines top-k…
External link:
http://arxiv.org/abs/2406.03953
Despite the widespread adoption, there is a lack of research into how various critical aspects of pretrained language models (PLMs) affect their performance in hate speech detection. Through five research questions, our findings and recommendations l…
External link:
http://arxiv.org/abs/2402.02144
As hate speech continues to proliferate on the web, it is becoming increasingly important to develop computational methods to mitigate it. Reactively, using black-box models to identify hateful content can perplex users as to why their posts were aut…
External link:
http://arxiv.org/abs/2311.09834
Focal Inferential Infusion Coupled with Tractable Density Discrimination for Implicit Hate Detection
Although pretrained large language models (PLMs) have achieved state-of-the-art on many natural language processing (NLP) tasks, they lack an understanding of subtle expressions of implicit hate speech. Various attempts have been made to enhance the…
External link:
http://arxiv.org/abs/2309.11896
Author:
Chakraborty, Tanmoy (AUTHOR) chak.tanmoy.iit@gmail.com, Masud, Sarah (AUTHOR) sarahm@iiitd.ac.in
Published in:
Communications of the ACM. Oct2024, Vol. 67 Issue 10, p26-28. 3p.
Social media is awash with hateful content, much of which is often veiled with linguistic and topical diversity. The benchmark datasets used for hate speech detection do not account for such divagation as they are predominantly compiled using hate le…
External link:
http://arxiv.org/abs/2306.01105