Towards Standardizing AI Bias Exploration

Author: Krasanakis, Emmanouil; Papadopoulos, Symeon
Year of publication: 2024
Document type: Working Paper
Description: Creating fair AI systems is a complex problem that involves the assessment of context-dependent bias concerns. Existing research and programming libraries express specific concerns as measures of bias that they aim to constrain or mitigate. In practice, one should explore a wide variety of (sometimes incompatible) measures before deciding which ones warrant corrective action, but their narrow scope means that most new situations can only be examined after devising new measures. In this work, we present a mathematical framework that distils literature measures of bias into building blocks, thereby facilitating new combinations that cover a wide range of fairness concerns, such as classification or recommendation differences across multiple multi-value sensitive attributes (e.g., many genders and races, and their intersections). We show how this framework generalizes existing concepts and present frequently used blocks. We provide an open-source implementation of our framework as a Python library, called FairBench, that facilitates systematic and extensible exploration of potential bias concerns.
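To illustrate the building-block idea described in the abstract, the following minimal Python sketch evaluates a base metric per group of a multi-value sensitive attribute and then reduces the per-group values to a single bias score. This is an assumed illustration of the general concept, not FairBench's actual API; the names group_values, max_difference, and the example data are hypothetical.

```python
# Hypothetical sketch of combining bias "building blocks":
# a per-group base metric plus a reduction over groups.
import numpy as np

def group_values(metric, predictions, sensitive):
    """Evaluate a base metric separately for each value of a sensitive attribute."""
    return {group: metric(predictions[sensitive == group])
            for group in np.unique(sensitive)}

def max_difference(values):
    """Reduce per-group values to one bias score: the largest pairwise difference."""
    vals = list(values.values())
    return max(vals) - min(vals)

# Example: positive-rate disparity across a multi-value attribute.
predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0])
attribute = np.array(["a", "a", "b", "b", "b", "c", "c", "c"])
positive_rate = lambda preds: preds.mean()

per_group = group_values(positive_rate, predictions, attribute)
print(per_group)                  # per-group positive rates
print(max_difference(per_group))  # one combined bias measure
```

Swapping the base metric (e.g., true positive rate) or the reduction (e.g., minimum ratio) yields different measures from the same blocks, which is the kind of recombination the framework aims to standardize.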
Comment: Workshop on AI bias: Measurements, Mitigation, Explanation Strategies (AIMMES 2024)
Database: arXiv