Showing 1 - 10 of 1,130
for search: '"P, Vinodkumar"'
Author:
Rastogi, Charvi; Teh, Tian Huey; Mishra, Pushkar; Patel, Roma; Ashwood, Zoe; Davani, Aida Mostafazadeh; Diaz, Mark; Paganini, Michela; Parrish, Alicia; Wang, Ding; Prabhakaran, Vinodkumar; Aroyo, Lora; Rieser, Verena
AI systems crucially rely on human ratings, but these ratings are often aggregated, obscuring the inherent diversity of perspectives in real-world phenomena. This is particularly concerning when evaluating the safety of generative AI, where perceptions …
External link:
http://arxiv.org/abs/2410.17032
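As a side note on the aggregation issue raised in this abstract (a hypothetical illustration, not the paper's method): the toy Python sketch below shows how a single majority-vote label can hide systematic disagreement between rater groups. All data and group names here are made up.

from collections import Counter

# Hypothetical safety ratings (1 = unsafe, 0 = safe) from two rater groups;
# the values and group names are invented for illustration only.
ratings = {
    "group_A": [1, 1, 1, 1, 0],  # group A mostly finds the response unsafe
    "group_B": [0, 0, 0, 0, 0],  # group B does not
}

# Aggregating all raters into one majority-vote label.
all_votes = [v for votes in ratings.values() for v in votes]
aggregated = Counter(all_votes).most_common(1)[0][0]
print("aggregated label:", aggregated)  # prints 0 ("safe")

# Per-group rates reveal the disagreement the aggregate obscures.
for group, votes in ratings.items():
    print(group, "unsafe rate:", sum(votes) / len(votes))  # 0.8 vs 0.0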
The running coupling constant is calculated using the imaginary time formalism (ITF) of thermal field theory under the self-energy approximation. In the process, each Feynman diagram in thermal field theory is rewritten as the summation of non-thermal …
External link:
http://arxiv.org/abs/2410.15300
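For orientation only, here is the generic imaginary-time prescription the abstract refers to (a textbook sketch of ITF, not this paper's specific derivation): the loop energy integral of a zero-temperature Feynman diagram is replaced by a sum over discrete Matsubara frequencies, e.g. for a bosonic loop

\int \frac{d^4 k}{(2\pi)^4}\, f(k_0, \vec{k}) \;\to\; T \sum_{n=-\infty}^{\infty} \int \frac{d^3 k}{(2\pi)^3}\, f(i\omega_n, \vec{k}), \qquad \omega_n = 2\pi n T .

The frequency sum is conventionally split into a temperature-independent (vacuum) piece and a piece weighted by the Bose-Einstein distribution, which is the usual sense in which a thermal diagram is rewritten in terms of non-thermal contributions plus thermal corrections.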
Author:
Kannen, Nithish; Ahmad, Arif; Andreetto, Marco; Prabhakaran, Vinodkumar; Prabhu, Utsav; Dieng, Adji Bousso; Bhattacharyya, Pushpak; Dave, Shachi
Text-to-Image (T2I) models are being increasingly adopted in diverse global communities, where they create visual representations of their unique cultures. Current T2I benchmarks primarily focus on the faithfulness, aesthetics, and realism of generated images …
External link:
http://arxiv.org/abs/2407.06863
Author:
Chien, Jennifer; Bergman, A. Stevie; McKee, Kevin R.; Tomasev, Nenad; Prabhakaran, Vinodkumar; Qadri, Rida; Marchal, Nahema; Isaac, William
Algorithmic fairness has emerged as a critical concern in artificial intelligence (AI) research. However, the development of fair AI systems is not an objective process. Fairness is an inherently subjective concept, shaped by the values, experiences, …
External link:
http://arxiv.org/abs/2407.16895
While human annotations play a crucial role in language technologies, annotator subjectivity has long been overlooked in data collection. Recent studies that have critically examined this issue are often situated in the Western context, and solely …
External link:
http://arxiv.org/abs/2404.10857
Generative language models are transforming our digital ecosystem, but they often inherit societal biases, for instance stereotypes associating certain attributes with specific identity groups. While whether and how these biases are mitigated may depend …
External link:
http://arxiv.org/abs/2404.05866
While generative multilingual models are rapidly being deployed, their safety and fairness evaluations are largely limited to resources collected in English. This is especially problematic for evaluations targeting inherently socio-cultural phenomena …
External link:
http://arxiv.org/abs/2403.05696
Author:
Jha, Akshita; Prabhakaran, Vinodkumar; Denton, Remi; Laszlo, Sarah; Dave, Shachi; Qadri, Rida; Reddy, Chandan K.; Dev, Sunipa
Recent studies have shown that Text-to-Image (T2I) model generations can reflect social stereotypes present in the real world. However, existing approaches for evaluating stereotypes have a noticeable lack of coverage of global identity groups and …
External link:
http://arxiv.org/abs/2401.06310
Perception of offensiveness is inherently subjective, shaped by the lived experiences and socio-cultural values of the perceivers. Recent years have seen substantial efforts to build AI-based tools that can detect offensive language at scale, as a …
External link:
http://arxiv.org/abs/2312.06861
The unstructured nature of data used in foundation model development poses a challenge to systematic analyses for making data use and documentation decisions. From a Responsible AI perspective, these decisions often rely upon understanding how people are …
External link:
http://arxiv.org/abs/2311.17259