Author: |
Chiyu Shi, Junyu Su, Chiawei Chu, Baoping Wang, Duanyang Feng |
Language: |
English |
Publication year: |
2024 |
Subject: |
|
Source: |
Mathematics, Vol 12, Iss 21, p 3359 (2024) |
Document type: |
article |
ISSN: |
2227-7390 |
DOI: |
10.3390/math12213359 |
Description: |
This paper tackles the critical issue of privacy in Natural Language Processing (NLP) systems that process sensitive data by introducing a novel framework combining differential privacy and adversarial training. The proposed solution ensures formal privacy guarantees by minimizing the influence of individual data points on the model's behavior, effectively preventing information leakage. Simultaneously, adversarial training is applied to strengthen model robustness against privacy attacks by exposing the model to adversarial examples during training. The framework is rigorously evaluated across various NLP tasks, demonstrating that it effectively balances privacy preservation with high utility. These results mark a significant advancement in developing secure and reliable NLP systems, particularly for applications requiring stringent data confidentiality, such as healthcare and finance. |
Database: |
Directory of Open Access Journals |
External link: |
|
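Note: the following is a minimal, hypothetical sketch of how the two ingredients named in the abstract, differential privacy and adversarial training, can be combined in one training step. It is not the authors' implementation (which this record does not reproduce): per-example gradients are clipped and noised in the style of DP-SGD, and the batch is first perturbed with a single FGSM step. The toy model, hyperparameter values, and all names are illustrative assumptions.

# Minimal sketch (not the paper's code): one training step combining
# DP-SGD-style per-example gradient clipping plus Gaussian noise with
# FGSM-style adversarial examples. Model and constants are placeholders.
import torch
import torch.nn as nn

CLIP_NORM = 1.0      # per-example gradient clip bound (assumed value)
NOISE_MULT = 1.1     # Gaussian noise multiplier (assumed value)
EPSILON_ADV = 0.05   # FGSM perturbation size on the inputs (assumed value)

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

def dp_adversarial_step(x, y):
    # 1) Craft adversarial inputs with a single FGSM step.
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    x_adv = (x_adv + EPSILON_ADV * x_adv.grad.sign()).detach()

    # 2) Compute per-example gradients on the adversarial batch,
    #    clip each one to CLIP_NORM, and accumulate the sum.
    summed = [torch.zeros_like(p) for p in model.parameters()]
    for i in range(x_adv.size(0)):
        model.zero_grad()
        loss_fn(model(x_adv[i:i + 1]), y[i:i + 1]).backward()
        grads = [p.grad.detach().clone() for p in model.parameters()]
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = min(1.0, CLIP_NORM / (norm + 1e-12))
        for s, g in zip(summed, grads):
            s += g * scale

    # 3) Add Gaussian noise calibrated to the clip bound, average, and apply.
    model.zero_grad()
    for p, s in zip(model.parameters(), summed):
        noise = torch.normal(0.0, NOISE_MULT * CLIP_NORM, size=s.shape)
        p.grad = (s + noise) / x_adv.size(0)
    opt.step()

# Illustrative usage with random data.
x, y = torch.randn(8, 64), torch.randint(0, 2, (8,))
dp_adversarial_step(x, y)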