Showing 1 - 10 of 131 for the search '"Gaur, Manas"'
Large Language Models (LLMs) are prone to inheriting and amplifying societal biases embedded within their training data, potentially reinforcing harmful stereotypes related to gender, occupation, and other sensitive categories. This issue becomes par…
External link:
http://arxiv.org/abs/2408.11247
Previous research on testing the vulnerabilities in Large Language Models (LLMs) using adversarial attacks has primarily focused on nonsensical prompt injections, which are easily detected upon manual or automated review (e.g., via byte entropy). How…
External link:
http://arxiv.org/abs/2407.14644
Sustainable Development Goals (SDGs) give the UN a road map for development with Agenda 2030 as a target. SDG3 "Good Health and Well-Being" ensures healthy lives and promotes well-being for all ages. Digital technologies can support SDG3. Burnout and…
External link:
http://arxiv.org/abs/2406.13791
Author:
Mohammadi, Seyedali, Raff, Edward, Malekar, Jinendra, Palit, Vedant, Ferraro, Francis, Gaur, Manas
Language Models (LMs) are being proposed for mental health applications where the heightened risk of adverse outcomes means predictive performance may not be a sufficient litmus test of a model's utility in clinical practice. A model that can be trus…
External link:
http://arxiv.org/abs/2406.12058
Author:
Tilwani, Deepa, Saxena, Yash, Mohammadi, Ali, Raff, Edward, Sheth, Amit, Parthasarathy, Srinivasan, Gaur, Manas
Automatic citation generation for sentences in a document or report is paramount for intelligence analysts, cybersecurity, news agencies, and education personnel. In this research, we investigate whether large language models (LLMs) are capable of ge…
External link:
http://arxiv.org/abs/2405.02228
Author:
Govil, Priyanshul, Jain, Hemang, Bonagiri, Vamshi Krishna, Chadha, Aman, Kumaraguru, Ponnurangam, Gaur, Manas, Dey, Sanorita
Large Language Models (LLMs) often inherit biases from the web data they are trained on, which contains stereotypes and prejudices. Current methods for evaluating and mitigating these biases rely on bias-benchmark datasets. These benchmarks measure b…
External link:
http://arxiv.org/abs/2402.14889
Author:
Bonagiri, Vamshi Krishna, Vennam, Sreeram, Govil, Priyanshul, Kumaraguru, Ponnurangam, Gaur, Manas
Despite recent advancements showcasing the impressive capabilities of Large Language Models (LLMs) in conversational systems, we show that even state-of-the-art LLMs are morally inconsistent in their generations, questioning their reliability (and tr…
External link:
http://arxiv.org/abs/2402.13709
A Large Language Model (LLM) is considered consistent if semantically equivalent prompts produce semantically equivalent responses. Despite recent advancements showcasing the impressive capabilities of LLMs in conversational systems, we show that eve…
External link:
http://arxiv.org/abs/2402.01719
Author:
Raj, Kanak, Roy, Kaushik, Bonagiri, Vamshi, Govil, Priyanshul, Thirunarayanan, Krishnaprasad, Gaur, Manas
Personalizing conversational agents can enhance the quality of conversations and increase user engagement. However, they often lack the external knowledge to appropriately tend to a user's persona. This is particularly crucial for practical applications…
External link:
http://arxiv.org/abs/2312.17748
Author:
Gaur, Manas, Sheth, Amit
Explainability and Safety engender Trust. These require a model to exhibit consistency and reliability. To achieve these, it is necessary to use and analyze data and knowledge with statistical and symbolic AI methods relevant to the AI application…
External link:
http://arxiv.org/abs/2312.06798