Showing 1 - 8 of 8 results for search: '"Bonagiri, Vamshi"'
Author:
Kodali, Prashant, Goel, Anmol, Asapu, Likhith, Bonagiri, Vamshi Krishna, Govil, Anirudh, Choudhury, Monojit, Shrivastava, Manish, Kumaraguru, Ponnurangam
Current computational approaches for analysing or generating code-mixed sentences do not explicitly model the "naturalness" or "acceptability" of code-mixed sentences, but rely on training corpora to reflect the distribution of acceptable code-mixed sentence…
External link:
http://arxiv.org/abs/2405.05572
Author:
Govil, Priyanshul, Jain, Hemang, Bonagiri, Vamshi Krishna, Chadha, Aman, Kumaraguru, Ponnurangam, Gaur, Manas, Dey, Sanorita
Large Language Models (LLMs) often inherit biases from the web data they are trained on, which contains stereotypes and prejudices. Current methods for evaluating and mitigating these biases rely on bias-benchmark datasets. These benchmarks measure…
External link:
http://arxiv.org/abs/2402.14889
Author:
Bonagiri, Vamshi Krishna, Vennam, Sreeram, Govil, Priyanshul, Kumaraguru, Ponnurangam, Gaur, Manas
Despite recent advancements showcasing the impressive capabilities of Large Language Models (LLMs) in conversational systems, we show that even state-of-the-art LLMs are morally inconsistent in their generations, questioning their reliability…
External link:
http://arxiv.org/abs/2402.13709
A Large Language Model (LLM) is considered consistent if semantically equivalent prompts produce semantically equivalent responses. Despite recent advancements showcasing the impressive capabilities of LLMs in conversational systems, we show that…
External link:
http://arxiv.org/abs/2402.01719
Author:
Raj, Kanak, Roy, Kaushik, Bonagiri, Vamshi, Govil, Priyanshul, Thirunarayanan, Krishnaprasad, Gaur, Manas
Personalizing conversational agents can enhance the quality of conversations and increase user engagement. However, they often lack the external knowledge needed to appropriately tend to a user's persona. This is particularly crucial for practical applications…
External link:
http://arxiv.org/abs/2312.17748
Author:
Agarwal, Anmol, Gupta, Shrey, Bonagiri, Vamshi, Gaur, Manas, Reagle, Joseph, Kumaraguru, Ponnurangam
Published in:
45th European Conference on Information Retrieval, ECIR 2023
Information Disguise (ID), a part of computational ethics in Natural Language Processing (NLP), is concerned with best practices of textual paraphrasing to prevent the non-consensual use of authors' posts on the Internet. Research on ID becomes…
External link:
http://arxiv.org/abs/2311.05018
Author:
Gamage, Dilrukshi, Ghasiya, Piyush, Bonagiri, Vamshi Krishna, Whiting, Mark E, Sasahara, Kazutoshi
Deepfakes are synthetic content generated using advanced deep learning and AI technologies. Advances in these technologies have made it much easier for anyone to create and share deepfakes. This may lead to societal concerns based on how…
External link:
http://arxiv.org/abs/2203.15044
Author:
Jaap Kamps, Lorraine Goeuriot, Fabio Crestani, Maria Maistro, Hideo Joho, Brian Davis, Cathal Gurrin, Udo Kruschwitz, Annalina Caputo
The three-volume set LNCS 13980, 13981 and 13982 constitutes the refereed proceedings of the 45th European Conference on IR Research, ECIR 2023, held in Dublin, Ireland, during April 2-6, 2023. The 65 full papers, 41 short papers, 19 demonstration…