Showing 1 - 10 of 18 results for search: '"Venkit, Pranav Narayanan"'
Our research investigates the impact of Generative Artificial Intelligence (GAI) models, specifically text-to-image generators (T2Is), on the representation of non-Western cultures, with a focus on Indian contexts. Despite the transformative potential…
External link:
http://arxiv.org/abs/2407.14779
Author:
Venkit, Pranav Narayanan, Graziul, Christopher, Goodman, Miranda Ardith, Kenny, Samantha Nicole, Wilson, Shomir
Radios are essential for the operations of modern police departments, and they function as both a collaborative communication technology and a sociotechnical system. However, little prior research has examined their usage or their connections to individual…
External link:
http://arxiv.org/abs/2407.01817
Author:
Du, Jiangshu, Wang, Yibo, Zhao, Wenting, Deng, Zhongfen, Liu, Shuaiqi, Lou, Renze, Zou, Henry Peng, Venkit, Pranav Narayanan, Zhang, Nan, Srinath, Mukund, Zhang, Haoran Ranran, Gupta, Vipul, Li, Yinghui, Li, Tao, Wang, Fei, Liu, Qin, Liu, Tianlin, Gao, Pengzhi, Xia, Congying, Xing, Chen, Cheng, Jiayang, Wang, Zhaowei, Su, Ying, Shah, Raj Sanjay, Guo, Ruohao, Gu, Jing, Li, Haoran, Wei, Kangda, Wang, Zihao, Cheng, Lu, Ranathunga, Surangika, Fang, Meng, Fu, Jie, Liu, Fei, Huang, Ruihong, Blanco, Eduardo, Cao, Yixin, Zhang, Rui, Yu, Philip S., Yin, Wenpeng
This work is motivated by two key trends. On one hand, large language models (LLMs) have shown remarkable versatility in various generative tasks such as writing, drawing, and question answering, significantly reducing the time required for many routine…
External link:
http://arxiv.org/abs/2406.16253
As social media has become a predominant mode of communication globally, the rise of abusive content threatens to undermine civil discourse. Recognizing the critical nature of this issue, a significant body of research has been dedicated to developing…
External link:
http://arxiv.org/abs/2405.11030
Author:
Venkit, Pranav Narayanan, Chakravorti, Tatiana, Gupta, Vipul, Biggs, Heidi, Srinath, Mukund, Goswami, Koustava, Rajtmajer, Sarah, Wilson, Shomir
We audit how hallucination in large language models (LLMs) is characterized in peer-reviewed literature, using a critical examination of 103 publications across NLP research. Through the examination of the literature, we identify a lack of agreement…
External link:
http://arxiv.org/abs/2404.07461
With the widespread adoption of advanced generative models such as Gemini and GPT, there has been a notable increase in the incorporation of such models into sociotechnical systems, categorized under AI-as-a-Service (AIaaS). Despite their versatility…
External link:
http://arxiv.org/abs/2403.10776
Author:
Venkit, Pranav Narayanan, Srinath, Mukund, Gautam, Sanjana, Venkatraman, Saranya, Gupta, Vipul, Passonneau, Rebecca J., Wilson, Shomir
We conduct an inquiry into the sociotechnical aspects of sentiment analysis (SA) by critically examining 189 peer-reviewed papers on their applications, models, and datasets. Our investigation stems from the recognition that SA has become an integral…
External link:
http://arxiv.org/abs/2310.12318
Author:
Venkit, Pranav Narayanan
The rapid growth in the usage and applications of Natural Language Processing (NLP) in various sociotechnical solutions has highlighted the need for a comprehensive understanding of bias and its impact on society. While research on bias in NLP has ex…
External link:
http://arxiv.org/abs/2308.13089
Author:
Gupta, Vipul, Venkit, Pranav Narayanan, Laurençon, Hugo, Wilson, Shomir, Passonneau, Rebecca J.
As language models (LMs) become increasingly powerful and widely used, it is important to quantify them for sociodemographic bias with potential for harm. Prior measures of bias are sensitive to perturbations in the templates designed to compare performance…
External link:
http://arxiv.org/abs/2308.12539
Author:
Venkit, Pranav Narayanan, Gautam, Sanjana, Panchanadikar, Ruchi, Huang, Ting-Hao 'Kenneth', Wilson, Shomir
We investigate the potential for nationality biases in natural language processing (NLP) models using human evaluation methods. Biased NLP models can perpetuate stereotypes and lead to algorithmic discrimination, posing a significant challenge to the…
External link:
http://arxiv.org/abs/2308.04346