Showing 1 - 10 of 14
for the query: "Hada, Rishav"
Author:
Hada, Rishav, Husain, Safiya, Gumma, Varun, Diddee, Harshita, Yadavalli, Aditya, Seth, Agrima, Kulkarni, Nidhi, Gadiraju, Ujwal, Vashistha, Aditya, Seshadri, Vivek, Bali, Kalika
Existing research in measuring and mitigating gender bias predominantly centers on English, overlooking the intricate challenges posed by non-English languages and the Global South. This paper presents the first comprehensive study delving into the n…
External link:
http://arxiv.org/abs/2405.06346
With the rising human-like precision of Large Language Models (LLMs) in numerous tasks, their utilization in a variety of real-world applications is becoming more prevalent. Several studies have shown that LLMs excel on many standard NLP benchmarks.
External link:
http://arxiv.org/abs/2404.01667
Author:
Gumma, Varun, Hada, Rishav, Yadavalli, Aditya, Gogoi, Pamir, Mondal, Ishani, Seshadri, Vivek, Bali, Kalika
We present MunTTS, an end-to-end text-to-speech (TTS) system specifically for Mundari, a low-resource Indian language of the Austro-Asiatic family. Our work addresses the gap in linguistic technology for underrepresented languages by collecting and pr…
External link:
http://arxiv.org/abs/2401.15579
Author:
Ahuja, Sanchit, Aggarwal, Divyanshu, Gumma, Varun, Watts, Ishaan, Sathe, Ashutosh, Ochieng, Millicent, Hada, Rishav, Jain, Prachi, Axmed, Maxamed, Bali, Kalika, Sitaram, Sunayana
There has been a surge in LLM evaluation research to understand LLM capabilities and limitations. However, much of this research has been confined to English, leaving LLM building and evaluation for non-English languages relatively unexplored. Several…
External link:
http://arxiv.org/abs/2311.07463
Language serves as a powerful tool for the manifestation of societal belief systems. In doing so, it also perpetuates the prevalent biases in our society. Gender bias is one of the most pervasive biases in our society and is seen in online and offline…
External link:
http://arxiv.org/abs/2310.17428
Author:
Hada, Rishav, Gumma, Varun, de Wynter, Adrian, Diddee, Harshita, Ahmed, Mohamed, Choudhury, Monojit, Bali, Kalika, Sitaram, Sunayana
Large Language Models (LLMs) excel in various Natural Language Processing (NLP) tasks, yet their evaluation, particularly in languages beyond the top 20, remains inadequate due to the limitations of existing benchmarks and metrics. Employing LLMs as evalu…
External link:
http://arxiv.org/abs/2309.07462
Author:
Ahuja, Kabir, Diddee, Harshita, Hada, Rishav, Ochieng, Millicent, Ramesh, Krithika, Jain, Prachi, Nambi, Akshay, Ganu, Tanuja, Segal, Sameer, Axmed, Maxamed, Bali, Kalika, Sitaram, Sunayana
Generative AI models have shown impressive performance on many Natural Language Processing tasks such as language understanding, reasoning, and language generation. An important question being asked by the AI community today is about the capabilities…
External link:
http://arxiv.org/abs/2303.12528
Author:
Hada, Rishav, Fard, Amir Ebrahimi, Shugars, Sarah, Bianchi, Federico, Rossini, Patricia, Hovy, Dirk, Tromble, Rebekah, Tintarev, Nava
Increasingly taking place in online spaces, modern political conversations are typically perceived to be unproductively affirming, siloed in so-called "echo chambers" of exclusively like-minded discussants. Yet, to date we lack sufficient means to…
External link:
http://arxiv.org/abs/2212.09056
Author:
Hada, Rishav, Sudhir, Sohi, Mishra, Pushkar, Yannakoudakis, Helen, Mohammad, Saif M., Shutova, Ekaterina
On social media platforms, hateful and offensive language negatively impacts the mental well-being of users and the participation of people from diverse backgrounds. Automatic methods to detect offensive language have largely relied on datasets with c…
External link:
http://arxiv.org/abs/2106.05664
Academic article
(You must log in to view this result.)