Showing 1 - 10 of 323 results for search: '"Nakashima, Yuta"'
The rapid development of text-to-image generation has raised ethical concerns, especially regarding gender bias. Given a text prompt as input, text-to-image models generate images according to the prompt. Pioneering models such as Stable Diffusion … (an illustrative sketch of such a bias probe follows the link below)
External link:
http://arxiv.org/abs/2408.11358
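A minimal sketch of the kind of bias probe this line of work motivates, not the paper's own protocol: generate several images from a gender-neutral occupation prompt, then hand them to a separate classifier or annotators to estimate the gender skew. The model id, prompt, and file names are placeholder assumptions.

import torch
from diffusers import StableDiffusionPipeline

# Placeholder model id; any text-to-image checkpoint with a diffusers
# pipeline would do for this illustration.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of a doctor"  # gender-neutral occupation prompt
images = pipe([prompt] * 8, num_inference_steps=30).images

# In an actual audit, these images would be scored by a gender classifier
# or human annotators; here they are only saved for inspection.
for i, img in enumerate(images):
    img.save(f"doctor_{i}.png")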
Large Language Models (LLMs) have demonstrated significant capabilities, particularly in the domain of question answering (QA). However, their effectiveness in QA is often undermined by the vagueness of user questions. To address this issue, we introduce …
External link:
http://arxiv.org/abs/2408.10573
Author:
Hirota, Yusuke, Chen, Min-Hung, Wang, Chien-Yi, Nakashima, Yuta, Wang, Yu-Chiang Frank, Hachiuma, Ryo
Large-scale vision-language models, such as CLIP, are known to contain harmful societal bias regarding protected attributes (e.g., gender and age). In this paper, we aim to address the problems of societal bias in CLIP. Although previous studies have … (an illustrative sketch of a simple CLIP bias probe follows the link below)
External link:
http://arxiv.org/abs/2408.10202
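A minimal sketch of a simple bias probe in this spirit, not the paper's method: compare CLIP text embeddings of occupation phrases against gendered reference phrases and report the similarity gap. The model id, occupations, and reference phrases are placeholder assumptions.

import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def text_embed(texts):
    # Encode texts and L2-normalize so dot products are cosine similarities.
    inputs = processor(text=texts, return_tensors="pt", padding=True)
    with torch.no_grad():
        feats = model.get_text_features(**inputs)
    return torch.nn.functional.normalize(feats, dim=-1)

occupations = ["a photo of a nurse", "a photo of an engineer"]
references = ["a photo of a woman", "a photo of a man"]

sims = text_embed(occupations) @ text_embed(references).T
for occ, row in zip(occupations, sims):
    # Positive gap: closer to the "woman" phrase; negative: closer to "man".
    print(occ, round((row[0] - row[1]).item(), 4))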
Author:
Wang, Bowen, Chang, Jiuyang, Qian, Yiming, Chen, Guoxin, Chen, Junhao, Jiang, Zhouqiang, Zhang, Jiahao, Nakashima, Yuta, Nagahara, Hajime
Large language models (LLMs) have recently showcased remarkable capabilities, spanning a wide range of tasks and applications, including those in the medical domain. Models like GPT-4 excel in medical question answering but may face challenges in the …
External link:
http://arxiv.org/abs/2408.01933
Understanding the behavior of deep learning models is of utmost importance. In this realm, Explainable Artificial Intelligence (XAI) has emerged as a promising avenue, garnering increasing interest in recent years. Despite this, most …
External link:
http://arxiv.org/abs/2407.05616
Author:
Hirota, Yusuke, Andrews, Jerone T. A., Zhao, Dora, Papakyriakopoulos, Orestis, Modas, Apostolos, Nakashima, Yuta, Xiang, Alice
We tackle societal bias in image-text datasets by removing spurious correlations between protected groups and image attributes. Traditional methods only target labeled attributes, ignoring biases from unlabeled ones. Using text-guided inpainting models … (an illustrative inpainting sketch follows the link below)
External link:
http://arxiv.org/abs/2407.03623
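A minimal sketch of text-guided inpainting with an off-the-shelf model, not the authors' pipeline: repaint a masked region (e.g. an object spuriously correlated with a protected group) using a neutral prompt. The model id, file paths, and prompt are placeholder assumptions.

import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

# Placeholder inputs: the mask is white where content should be repainted.
image = Image.open("sample.jpg").convert("RGB").resize((512, 512))
mask = Image.open("sample_mask.png").convert("L").resize((512, 512))

edited = pipe(
    prompt="an empty park bench",  # neutral replacement content
    image=image,
    mask_image=mask,
).images[0]
edited.save("sample_edited.jpg")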
Large language models (LLMs) have enhanced the capacity of vision-language models to caption visual text. This generative approach to image caption enrichment further makes textual captions more descriptive, improving alignment with the visual context …
External link:
http://arxiv.org/abs/2406.13912
Fake news detection in social media has become increasingly important due to the rapid proliferation of personal media channels and the consequent dissemination of misleading information. Existing methods, which primarily rely on multimodal features …
External link:
http://arxiv.org/abs/2406.09884
We investigate the impact of deep generative models on potential social biases in upcoming computer vision models. As the internet witnesses an increasing influx of AI-generated images, concerns arise regarding inherent biases that may accompany them …
External link:
http://arxiv.org/abs/2404.03242
Several studies have raised awareness about social biases in image generative models, demonstrating their predisposition towards stereotypes and imbalances. This paper contributes to this growing body of research by introducing an evaluation protocol …
External link:
http://arxiv.org/abs/2312.03027