Showing 1 - 2 of 2 results for search: '"Carlsson, Marcel"'
Jailbreak vulnerabilities in Large Language Models (LLMs), which exploit meticulously crafted prompts to elicit content that violates service guidelines, have captured the attention of research communities. While model owners can defend against indiv…
External link:
http://arxiv.org/abs/2309.05274
Authors:
Harris, Ian G.; Alrahem, Thoulfekar; Chen, Alex; DiGiuseppe, Nick; Gee, Jefferey; Hsiao, Shang-Pin; Mattox, Sean; Park, Taejoon; Selvaraj, Saravanan; Tam, Albert; Carlsson, Marcel
Published in:
ISeCure, Jul. 2009, Vol. 1, Issue 2, pp. 91-103 (13 pages)