Showing 1 - 10 of 566 for search: '"A. Jaidka"'
Author:
Verma, Preetika, Jaidka, Kokil
In this paper, we introduce the MediaSpin dataset, which aims to support the development of models that can detect different forms of media bias in news headlines and was built through human-supervised and -validated Large Language Model (LLM) labeling…
External link:
http://arxiv.org/abs/2412.02271
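The MediaSpin record above describes bias labels produced through human-supervised and -validated LLM labeling. As a rough, hypothetical sketch of that kind of workflow (not the authors' actual pipeline), the snippet below sends each headline to a placeholder `call_llm` helper and holds low-confidence labels for human review; the label set, the JSON reply format, and the 0.8 threshold are all assumptions.

```python
# Hypothetical human-in-the-loop LLM labeling sketch for headline bias.
# `call_llm` is a stand-in for any chat-completion client.
import json

BIAS_TYPES = ["spin", "unsubstantiated claim", "opinion as fact", "none"]  # assumed labels

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call; returns a canned JSON reply here."""
    return json.dumps({"label": "spin", "confidence": 0.55})

def label_headline(headline: str, review_queue: list, threshold: float = 0.8) -> str:
    prompt = (
        f"Label the media bias in this headline as one of {BIAS_TYPES}. "
        'Reply as JSON with keys "label" and "confidence" (0 to 1).\n'
        f"Headline: {headline}"
    )
    result = json.loads(call_llm(prompt))
    if result["confidence"] < threshold:
        review_queue.append((headline, result))  # set aside for human validation
    return result["label"]

queue: list = []
print(label_headline("Senator's 'disastrous' plan slammed by experts", queue))
print(f"{len(queue)} headline(s) queued for human review")
```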
Autor:
Churina, Svetlana, Jaidka, Kokil
Incivility in social media discourse complicates the deployment of automated text generation models for politically sensitive content. Fine-tuning and prompting strategies are critical, but underexplored, solutions for mitigating toxicity in such contexts…
External link:
http://arxiv.org/abs/2411.16813
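The entry above treats fine-tuning and prompting as complementary ways to curb toxicity in politically sensitive generation. A minimal sketch of the prompting side is shown below, assuming a generic `generate` stub and an illustrative civility instruction; neither is taken from the paper.

```python
# Illustrative prompting strategy: wrap every request in a civility instruction.
# `generate` stands in for any text-generation backend (API or local model).

CIVILITY_SYSTEM_PROMPT = (
    "You are drafting replies for a political discussion forum. "
    "Avoid insults, slurs, and personal attacks; respond to arguments, not people."
)

def generate(system: str, user: str) -> str:
    """Placeholder for a real model call."""
    return f"[model output conditioned on: {system[:40]}...]"

def civil_reply(post: str) -> str:
    return generate(CIVILITY_SYSTEM_PROMPT, f"Write a brief, civil reply to: {post}")

print(civil_reply("This policy is idiotic and so is everyone who supports it."))
```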
The prevalence of multi-modal content on social media complicates automated moderation strategies. This calls for improved multi-modal classification and a deeper understanding of the understated meanings in images and memes. Although previous e…
External link:
http://arxiv.org/abs/2411.10480
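Since the record above concerns multi-modal moderation of images and memes, here is a toy late-fusion sketch: separate placeholder text and image scorers are combined with a weighted average before thresholding. The scorers, weights, and threshold are assumptions for illustration, not the paper's method.

```python
# Toy late-fusion moderation for a meme: fuse a caption score with an image score.

def text_toxicity_score(caption: str) -> float:
    """Placeholder text model; returns a probability of harmful content."""
    return 0.7 if "hate" in caption.lower() else 0.1

def image_risk_score(image_path: str) -> float:
    """Placeholder vision model; returns a probability of harmful imagery."""
    return 0.4

def moderate_meme(caption: str, image_path: str,
                  w_text: float = 0.6, w_image: float = 0.4,
                  threshold: float = 0.5) -> bool:
    fused = w_text * text_toxicity_score(caption) + w_image * image_risk_score(image_path)
    return fused >= threshold  # True means: flag for review

print(moderate_meme("harmless joke", "meme.png"))
```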
To be incorporated into chatbot systems, large language models (LLMs) must be aligned with human conversational conventions. However, being trained mainly on web-scraped data gives existing LLMs a voice closer to informational text than actual human speech…
External link:
http://arxiv.org/abs/2407.19526
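The record above contrasts the informational voice of web-trained LLMs with human conversational speech. One quick way to make such a contrast measurable is to compare simple stylometric features of two texts, as in the sketch below; the specific features (average sentence length, first/second-person pronoun rate) are a generic choice, not the paper's measures.

```python
# Simple stylometric features: conversational speech tends toward shorter sentences
# and more first/second-person pronouns than informational prose.
import re

PRONOUNS = {"i", "you", "we", "me", "my", "your", "our"}

def style_features(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "pronoun_rate": sum(w in PRONOUNS for w in words) / max(len(words), 1),
    }

print(style_features("I think you're right, honestly. We should try it!"))
print(style_features("The policy was enacted in 2021 following a review of prior legislation."))
```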
Author:
Furniturewala, Shaz, Jaidka, Kokil
For the WASSA 2024 Empathy and Personality Prediction Shared Task, we propose a novel turn-level empathy detection method that decomposes empathy into six psychological indicators: Emotional Language, Perspective-Taking, Sympathy and Compassion, Extr…
External link:
http://arxiv.org/abs/2407.08607
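The WASSA entry above decomposes turn-level empathy into several psychological indicators. As a purely schematic illustration (not the authors' model), the sketch below scores a turn with one keyword-based placeholder per indicator and averages the sub-scores; the cue lists, the subset of indicators shown, and the averaging are all assumptions.

```python
# Schematic turn-level empathy scorer: one sub-score per indicator, then an average.
# Keyword cues are placeholders for trained sub-classifiers.

INDICATOR_CUES = {
    "emotional_language": {"sorry", "sad", "happy", "afraid"},
    "perspective_taking": {"imagine", "understand", "your side"},
    "sympathy_compassion": {"hope", "wish", "care"},
}

def indicator_score(turn: str, cues: set) -> float:
    turn_lower = turn.lower()
    return min(1.0, sum(cue in turn_lower for cue in cues) / 2)

def empathy_score(turn: str) -> float:
    scores = [indicator_score(turn, cues) for cues in INDICATOR_CUES.values()]
    return sum(scores) / len(scores)

print(empathy_score("I'm so sorry, I can't imagine how hard that must be. I hope it gets easier."))
```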
Supervised machine-learning models for predicting user behavior pose a challenging classification problem, with lower average prediction performance than other text classification tasks. This study evaluates multi-task learning frameworks grou…
External link:
http://arxiv.org/abs/2407.08182
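The study above evaluates multi-task learning for user behavior prediction. A minimal PyTorch sketch of the general pattern, a shared encoder with one head per task and summed losses, follows; the layer sizes, the two example tasks, and the equal loss weighting are assumptions rather than the study's architecture.

```python
# Minimal multi-task setup: shared encoder, separate task heads, summed losses.
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    def __init__(self, input_dim=768, hidden_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.behavior_head = nn.Linear(hidden_dim, 2)  # e.g., will the user reply?
        self.emotion_head = nn.Linear(hidden_dim, 6)   # illustrative auxiliary task

    def forward(self, x):
        h = self.encoder(x)
        return self.behavior_head(h), self.emotion_head(h)

model = MultiTaskModel()
x = torch.randn(4, 768)                    # stand-in for text embeddings
y_behavior = torch.randint(0, 2, (4,))
y_emotion = torch.randint(0, 6, (4,))
logits_b, logits_e = model(x)
loss = nn.functional.cross_entropy(logits_b, y_behavior) + \
       nn.functional.cross_entropy(logits_e, y_emotion)
loss.backward()
print(float(loss))
```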
Author:
Furniturewala, Shaz, Jandial, Surgan, Java, Abhinav, Banerjee, Pragyan, Shahid, Simra, Bhatia, Sumit, Jaidka, Kokil
Existing debiasing techniques are typically training-based or require access to the model's internals and output distributions, making them inaccessible to end-users who want to adapt LLM outputs to their particular needs. In this study, we examine whether…
External link:
http://arxiv.org/abs/2405.10431
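The entry above motivates debiasing that works without access to weights or output distributions, i.e., purely at the prompt level. A hedged sketch of that idea follows; the instruction wording and the `complete` stub are illustrative assumptions, not the authors' prompts.

```python
# Prompt-level debiasing sketch: the end-user wraps a request in an explicit
# fairness instruction, requiring only black-box access to the model.

DEBIAS_PREFIX = (
    "Answer the request below. Do not assume gender, ethnicity, age, or religion "
    "unless they are stated, and avoid stereotyped descriptions."
)

def complete(prompt: str) -> str:
    """Placeholder for any black-box completion endpoint."""
    return f"[completion for: {prompt[:60]}...]"

def debiased_complete(request: str) -> str:
    return complete(f"{DEBIAS_PREFIX}\n\nRequest: {request}")

print(debiased_complete("Write a short bio for a nurse and for an engineer."))
```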
Author:
Tan, Fiona Anting, Yeo, Gerard Christopher, Jaidka, Kokil, Wu, Fanyou, Xu, Weijie, Jain, Vinija, Chadha, Aman, Liu, Yang, Ng, See-Kiong
The use of LLMs in natural language reasoning has shown mixed results, sometimes rivaling or even surpassing human performance in simpler classification tasks while struggling with social-cognitive reasoning, a domain where humans naturally excel. Th…
External link:
http://arxiv.org/abs/2403.02246
We audited large language models (LLMs) for their ability to create evidence-based and stylistic counter-arguments to posts from the Reddit ChangeMyView dataset. We benchmarked their rhetorical quality across a host of qualitative and quantitative me…
External link:
http://arxiv.org/abs/2402.08498
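The audit described above benchmarks counter-arguments on qualitative and quantitative measures. The sketch below shows the shape of a crude quantitative pass, word count and topical overlap with the original post; both metrics and the `generate_counter` stub are illustrative, not the paper's benchmark.

```python
# Crude quantitative checks on a generated counter-argument.
import re

def generate_counter(post: str) -> str:
    """Placeholder generator; a real audit would call an LLM here."""
    return "Remote work can raise productivity, but it also weakens informal mentoring."

def content_words(text: str) -> set:
    return {w for w in re.findall(r"[a-z]+", text.lower()) if len(w) > 3}

def audit(post: str) -> dict:
    counter = generate_counter(post)
    overlap = content_words(post) & content_words(counter)
    return {"word_count": len(counter.split()), "topical_overlap": len(overlap)}

print(audit("Remote work is strictly better for productivity."))
```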
Online games are dynamic environments where players interact with each other, offering a rich setting for understanding how players negotiate their way through the game to an ultimate victory. This work studies online player interactions during the…
External link:
http://arxiv.org/abs/2311.08666