On Adversarial Robustness of Language Models in Transfer Learning

Authors: Turbal, Bohdan; Mazur, Anastasiia; Zhao, Jiaxu; Pechenizkiy, Mykola
Year of publication: 2024
Subject:
Source: Socially Responsible Language Modelling Research (SoLaR) Workshop at NeurIPS 2024
Document type: Working Paper
Description: We investigate the adversarial robustness of large language models (LLMs) in transfer learning scenarios. Through comprehensive experiments on multiple datasets (MBIB Hate Speech, MBIB Political Bias, MBIB Gender Bias) and various model architectures (BERT, RoBERTa, GPT-2, Gemma, Phi), we reveal that transfer learning, while improving standard performance metrics, often leads to increased vulnerability to adversarial attacks. Our findings demonstrate that larger models exhibit greater resilience to this phenomenon, suggesting a complex interplay between model size, architecture, and adaptation methods. Our work highlights the crucial need to consider adversarial robustness in transfer learning scenarios and provides insights into maintaining model security without compromising performance. These findings have significant implications for the development and deployment of LLMs in real-world applications where both performance and robustness are paramount.
Database: arXiv