Showing 1 - 10 of 107 for the search: '"Lee, Roy Ka Wei"'
Fairness in both machine learning (ML) predictions and human decisions is critical: ML models are prone to algorithmic and data bias, while human decisions are affected by subjectivity and cognitive bias. This study investigates fairness using a real-world…
External link:
http://arxiv.org/abs/2411.17374
How objective and unbiased are we when making decisions? This work investigates cognitive bias identification in high-stakes decision-making processes by human experts, questioning its effectiveness in real-world settings such as candidate assessment…
External link:
http://arxiv.org/abs/2411.08504
The widespread presence of hate speech on the internet, including formats such as text-based tweets and vision-language memes, poses a significant challenge to digital platform safety. Recent research has developed detection models tailored to specific…
External link:
http://arxiv.org/abs/2410.05600
In evaluating the long-context capabilities of large language models (LLMs), benchmarks such as "Needle-in-a-Haystack" (NIAH), Ruler, and Needlebench are commonly used. While these benchmarks measure how well models understand long-context input sequences…
External link:
http://arxiv.org/abs/2409.02076
Hate speech is a pressing issue in modern society, with significant effects both online and offline. Recent research in hate speech detection has primarily centered on text-based media, largely overlooking multimodal content such as videos. Existing…
External link:
http://arxiv.org/abs/2408.03468
Large Language Models (LLMs) have demonstrated remarkable capabilities in executing tasks based on natural language queries. However, these models, trained on curated datasets, inherently embody biases ranging from racial to national and gender biases…
External link:
http://arxiv.org/abs/2407.17688
Large Language Models (LLMs) have demonstrated remarkable proficiency in a wide range of NLP tasks. However, when it comes to authorship verification (AV) tasks, which involve determining whether two given texts share the same authorship, even advanced…
External link:
http://arxiv.org/abs/2407.12882
Author:
Shi, Wenhao; Hu, Zhiqiang; Bin, Yi; Liu, Junhua; Yang, Yang; Ng, See-Kiong; Bing, Lidong; Lee, Roy Ka-Wei
Large language models (LLMs) have demonstrated impressive reasoning capabilities, particularly in textual mathematical problem-solving. However, existing open-source image instruction fine-tuning datasets, containing limited question-answer pairs per…
External link:
http://arxiv.org/abs/2406.17294
Detecting hate speech and offensive language is essential for maintaining a safe and respectful digital environment. This study examines the limitations of state-of-the-art large language models (LLMs) in identifying offensive content within systematically…
External link:
http://arxiv.org/abs/2406.12223
Author:
Ligo, Val Alvern Cueco; Cheung, Lam Yin; Lee, Roy Ka-Wei; Saha, Koustuv; Tandoc Jr., Edson C.; Kumar, Navin
Social media platforms, particularly Telegram, play a pivotal role in shaping public perceptions and opinions on global and national issues. Unlike traditional news media, Telegram allows for the proliferation of user-generated content with minimal oversight…
External link:
http://arxiv.org/abs/2406.06717