Showing 1 - 10 of 54,238 results for search: '"Padhi, A."'
Author:
Gopal Mahapatra, Mousumi Padhi
Published in:
NHRD Network Journal. 16:196-202
Author:
Padhi, Inkit, Nagireddy, Manish, Cornacchia, Giandomenico, Chaudhury, Subhajit, Pedapati, Tejaswini, Dognin, Pierre, Murugesan, Keerthiram, Miehling, Erik, Cooper, Martín Santillán, Fraser, Kieran, Zizzo, Giulio, Hameed, Muhammad Zaid, Purcell, Mark, Desmond, Michael, Pan, Qian, Ashktorab, Zahra, Vejsbjerg, Inge, Daly, Elizabeth M., Hind, Michael, Geyer, Werner, Rawat, Ambrish, Varshney, Kush R., Sattigeri, Prasanna
We introduce the Granite Guardian models, a suite of safeguards designed to provide risk detection for prompts and responses, enabling safe and responsible use in combination with any large language model (LLM). These models offer comprehensive cover…
External link:
http://arxiv.org/abs/2412.07724
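The Granite Guardian record above is truncated before it describes the models' interface, so the snippet below is only a generic, non-authoritative illustration of the usage pattern the abstract gestures at: screening a prompt before it reaches an LLM and screening the response afterwards. Every name in it (`guarded_generate`, `risk_score`, the threshold, the dummy detector) is a hypothetical placeholder, not Granite Guardian's API.

```python
# Generic prompt/response guardrail wrapper -- an illustrative sketch only,
# NOT the Granite Guardian interface (the record above is truncated and does
# not show it). `risk_score` is a stand-in for any risk detector.

from typing import Callable

def guarded_generate(
    prompt: str,
    generate: Callable[[str], str],      # any LLM generation function
    risk_score: Callable[[str], float],  # any detector returning a score in [0, 1]
    threshold: float = 0.5,
) -> str:
    """Screen the prompt before generation and the response after it."""
    if risk_score(prompt) >= threshold:
        return "Request declined: the prompt was flagged as risky."
    response = generate(prompt)
    if risk_score(response) >= threshold:
        return "Response withheld: the draft answer was flagged as risky."
    return response

# Toy usage with dummy components, just to show the control flow.
if __name__ == "__main__":
    echo_llm = lambda p: f"Echoing: {p}"
    naive_detector = lambda text: 1.0 if "attack" in text.lower() else 0.0
    print(guarded_generate("How do I bake bread?", echo_llm, naive_detector))
    print(guarded_generate("Plan an attack", echo_llm, naive_detector))
```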
The spectra of particles in disordered lattices can either be completely extended or localized, or can be intermediate, hosting both localized and extended states separated from each other. In this work, however, we show that in the case of a o…
External link:
http://arxiv.org/abs/2412.04344
Author:
Wei, Dennis, Padhi, Inkit, Ghosh, Soumya, Dhurandhar, Amit, Ramamurthy, Karthikeyan Natesan, Chang, Maria
Training data attribution (TDA) is the task of attributing model behavior to elements in the training data. This paper draws attention to the common setting where one has access only to the final trained model, and not the training algorithm or inter…
External link:
http://arxiv.org/abs/2412.03906
Toxicity identification in online multimodal environments remains a challenging task due to the complexity of contextual connections across modalities (e.g., textual and visual). In this paper, we propose a novel framework that integrates Knowledge D…
External link:
http://arxiv.org/abs/2411.12174
Author:
Lee, Bruce W., Padhi, Inkit, Ramamurthy, Karthikeyan Natesan, Miehling, Erik, Dognin, Pierre, Nagireddy, Manish, Dhurandhar, Amit
LLMs have shown remarkable capabilities, but precisely controlling their response behavior remains challenging. Existing activation steering methods alter LLM behavior indiscriminately, limiting their practical applicability in settings where selecti…
External link:
http://arxiv.org/abs/2409.05907
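The activation-steering entry above (arXiv:2409.05907) is summarized only by a truncated sentence, so the following is a generic, hedged sketch of what activation steering typically means in practice, not the paper's selective method: a fixed vector is added to one transformer block's hidden states via a forward hook. The model name, layer index, and random steering vector below are placeholder assumptions chosen purely so the snippet runs.

```python
# Minimal activation-steering sketch (an assumption-laden illustration, not
# the method of arXiv:2409.05907): add a fixed steering vector to one
# transformer block's hidden states using a forward hook.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small public model used purely for illustration
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

layer_idx = 6
steering_vector = torch.randn(model.config.n_embd) * 0.1  # placeholder direction

def add_steering(module, inputs, output):
    # GPT-2 blocks return a tuple; the hidden states are the first element.
    hidden = output[0] + steering_vector.to(output[0].dtype)
    return (hidden,) + output[1:]

handle = model.transformer.h[layer_idx].register_forward_hook(add_steering)

inputs = tok("The weather today is", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))

handle.remove()  # restore the unsteered model
```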
Author:
Padhi, Inkit, Ramamurthy, Karthikeyan Natesan, Sattigeri, Prasanna, Nagireddy, Manish, Dognin, Pierre, Varshney, Kush R.
Aligning large language models (LLMs) to value systems has emerged as a significant area of research within the fields of AI and NLP. Currently, this alignment process relies on the availability of high-quality supervised and preference data, which c…
External link:
http://arxiv.org/abs/2408.10392
Author:
DAS, KESHAB
Published in:
Economic and Political Weekly, 2010 May 01. 45(19), 29-31.
External link:
https://www.jstor.org/stable/27806997
Academic article
This result cannot be displayed for users who are not signed in.
Signing in is required to view this result.
Author:
Chakraborty, Souvik Lal (souvik.chakraborty@monash.edu)
Published in:
Social Movement Studies. Sep 2022, Vol. 21, Issue 5, p719-720.