Author: Bozdag, Mustafa; Sevim, Nurullah; Koç, Aykut
Subject:
Source: ACM Transactions on Knowledge Discovery from Data; May 2024, Vol. 18, Issue 4, p1-26, 26p
Abstract: Transformer-based contextualized language models constitute the state of the art in several natural language processing (NLP) tasks and applications. Despite their utility, contextualized models can contain human-like social biases, as their training corpora generally consist of human-generated text. Evaluating and removing social biases in NLP models has been a major research endeavor. In parallel, NLP approaches in the legal domain, known as legal NLP or computational law, have also been on the rise. Eliminating unwanted bias in legal NLP is crucial, since the law has a profound effect on people. In this work, we focus on the gender bias encoded in BERT-based models. We propose a new template-based bias measurement method with a new bias evaluation corpus using crime words from the FBI database. This method quantifies the gender bias present in BERT-based models for legal applications. Furthermore, we propose a new fine-tuning-based debiasing method using the European Court of Human Rights (ECtHR) corpus to debias legal pre-trained models. We test the debiased models' language understanding performance on the LexGLUE benchmark to confirm that the underlying semantic vector space is not perturbed during the debiasing process. Finally, we propose a bias penalty for the performance scores to emphasize the effect of gender bias on model performance. [ABSTRACT FROM AUTHOR]
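Note: The abstract names a template-based bias measurement for BERT-based models but, being an abstract, gives no implementation details. The sketch below only illustrates the general idea of such a probe, i.e. filling a masked pronoun slot in crime-word templates and comparing pronoun probabilities; the model name, template wording, crime-word list, and log-ratio score are assumptions for illustration and do not reproduce the authors' actual method.

```python
# Illustrative sketch only: a template-based fill-mask probe for gendered
# pronoun preferences, assuming bert-base-uncased, a toy crime-word list,
# and a simple log-ratio score. These specifics are NOT from the paper.
import math
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL_NAME = "bert-base-uncased"  # placeholder; a legal-domain BERT checkpoint could be used instead
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
model.eval()

# Hypothetical crime words; the paper draws its list from the FBI database.
crime_words = ["murder", "fraud", "assault"]
template = f"{tokenizer.mask_token} was accused of {{crime}}."

def pronoun_probs(sentence: str):
    """Return the masked-position probabilities of 'he' and 'she'."""
    inputs = tokenizer(sentence, return_tensors="pt")
    mask_idx = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_idx, :]
    probs = logits.softmax(dim=-1).squeeze(0)
    he_id = tokenizer.convert_tokens_to_ids("he")
    she_id = tokenizer.convert_tokens_to_ids("she")
    return probs[he_id].item(), probs[she_id].item()

for crime in crime_words:
    p_he, p_she = pronoun_probs(template.format(crime=crime))
    # Positive log-ratio -> the model prefers the male pronoun for this crime word.
    print(f"{crime:10s} P(he)={p_he:.4f} P(she)={p_she:.4f} log-ratio={math.log(p_he / p_she):+.3f}")
```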
Database: Complementary Index
External link: