Author: |
Li, Maximilian, Davies, Xander, Nadeau, Max |
Year of publication: |
2023 |
Subject: |
|
Source: |
Workshop on Challenges in Deployable Generative AI at International Conference on Machine Learning (ICML), Honolulu, Hawaii, USA. 2023 |
Document type: |
Working Paper |
Description: |
Language models often exhibit behaviors that improve performance on a pre-training objective but harm performance on downstream tasks. We propose a novel approach to removing undesirable behaviors by ablating a small number of causal pathways between model components, with the intention of disabling the computational circuit responsible for the bad behavior. Given a small dataset of inputs on which the model behaves poorly, we learn to ablate a small number of important causal pathways. In the setting of reducing toxic language generation in GPT-2, we find that ablating just 12 of the 11.6K causal edges mitigates toxic generation with minimal degradation of performance on other inputs. |
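The abstract's core idea, learning a sparse mask that ablates a few causal edges between model components, can be illustrated with a toy sketch. This is hypothetical code, not the authors' implementation: the scalar "edge contributions", the behavior loss, and the sparsity weight are all invented for illustration, and a real circuit-ablation setup would mask attention/MLP edges inside a transformer.

```python
# Toy sketch (assumed, not the paper's code): learn a near-binary keep-mask
# over "causal edges" so that edges driving bad behavior are ablated while
# the sparsity penalty keeps all other edges intact.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
n_edges = 20
# Stand-in for each edge's contribution to the bad behavior (always > 0.05
# so the toy optimization is well separated from the sparsity weight).
edge_contrib = np.abs(rng.normal(size=n_edges)) + 0.1
bad = np.zeros(n_edges)
bad[:3] = 1.0  # pretend edges 0-2 form the circuit causing bad behavior

logits = np.zeros(n_edges)  # mask = sigmoid(logits), starts at 0.5 everywhere
lr = 0.5
for _ in range(500):
    m = sigmoid(logits)
    # loss = sum(m * bad * edge_contrib) + 0.05 * sum(1 - m)
    # first term: bad behavior flowing through kept edges
    # second term: ablate as few edges as possible
    grad = (bad * edge_contrib - 0.05) * m * (1.0 - m)
    logits -= lr * grad

ablated = np.flatnonzero(sigmoid(logits) < 0.5)
print(ablated)  # the three "bad" edges end up ablated; the rest are kept
```

The design choice mirrors the abstract: a behavior loss measured on a small dataset of bad inputs is traded off against a sparsity penalty, so only a handful of edges out of many are removed.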
Database: |
arXiv |
External link: |
|