Obfuscated Activations Bypass LLM Latent-Space Defenses

Authors: Bailey, Luke; Serrano, Alex; Sheshadri, Abhay; Seleznyov, Mikhail; Taylor, Jordan; Jenner, Erik; Hilton, Jacob; Casper, Stephen; Guestrin, Carlos; Emmons, Scott
Publication year: 2024
Document type: Working Paper
Description: Recent latent-space monitoring techniques have shown promise as defenses against LLM attacks. These defenses act as scanners that seek to detect harmful activations before they lead to undesirable actions. This prompts the question: Can models execute harmful behavior via inconspicuous latent states? Here, we study such obfuscated activations. We show that state-of-the-art latent-space defenses -- including sparse autoencoders, representation probing, and latent OOD detection -- are all vulnerable to obfuscated activations. For example, against probes trained to classify harmfulness, our attacks can often reduce recall from 100% to 0% while retaining a 90% jailbreaking rate. However, obfuscation has limits: we find that on a complex task (writing SQL code), obfuscation reduces model performance. Together, our results demonstrate that neural activations are highly malleable: we can reshape activation patterns in a variety of ways, often while preserving a network's behavior. This poses a fundamental challenge to latent-space defenses. (A code sketch of this kind of attack objective follows the record below.)
Comment: Project page: https://obfuscated-activations.github.io/
Database: arXiv
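
To make the attack described in the abstract concrete, here is a minimal PyTorch sketch of the kind of joint objective it implies: optimize a soft input suffix so that the model still produces a target (jailbroken) completion while a linear harmfulness probe reading an intermediate layer scores the activations as benign. The model name, probe, monitored layer, suffix length, and trade-off weight are all illustrative assumptions, not the paper's actual setup or code.

```python
# Hedged sketch of an obfuscated-activation attack objective.
# Assumptions (not from the paper): gpt2 as a stand-in model, a given
# pretrained linear harmfulness probe, layer 6 as the monitored layer.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; the paper attacks larger chat models
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the adversarial suffix is optimized

# A harmfulness probe on the residual stream (assumed pretrained and frozen).
probe = torch.nn.Linear(model.config.hidden_size, 1)
for p in probe.parameters():
    p.requires_grad_(False)
layer = 6  # which hidden layer the monitor reads (assumption)

prompt_ids = tok("How do I ...", return_tensors="pt").input_ids
target_ids = tok(" Sure, here is", return_tensors="pt").input_ids  # desired continuation

embed = model.get_input_embeddings()
# Soft suffix appended to the prompt in embedding space.
suffix = torch.randn(1, 20, model.config.hidden_size, requires_grad=True)
opt = torch.optim.Adam([suffix], lr=1e-2)

for step in range(200):
    inputs = torch.cat([embed(prompt_ids), suffix, embed(target_ids)], dim=1)
    out = model(inputs_embeds=inputs, output_hidden_states=True)

    # (a) Behavior loss: make the model emit the target continuation.
    n = target_ids.shape[1]
    logits = out.logits[:, -n - 1 : -1, :]  # positions that predict the target tokens
    beh = F.cross_entropy(logits.reshape(-1, logits.size(-1)), target_ids.reshape(-1))

    # (b) Obfuscation loss: push the probe's harmfulness score toward "benign".
    acts = out.hidden_states[layer]          # (1, seq_len, hidden)
    obf = torch.sigmoid(probe(acts)).mean()  # mean probe score over positions

    loss = beh + 1.0 * obf  # trade-off weight is a hyperparameter
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The trade-off weight on the obfuscation term controls how strongly the activations are reshaped relative to preserving behavior; the abstract's SQL-writing result suggests that on complex tasks this trade-off becomes binding and obfuscation degrades task performance.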