Abstract: |
We introduce a family of information leakage measures called maximal $(\alpha,\beta)$-leakage ($\text{M}\alpha\text{beL}$), parameterized by real numbers $\alpha$ and $\beta$ greater than or equal to 1. The measure is formalized via an operational definition involving an adversary guessing an unknown (randomized) function of the data given the released data. We obtain a simplified computable expression for the measure and show that it satisfies several basic properties, including monotonicity in $\beta$ for a fixed $\alpha$, non-negativity, data processing inequalities, and additivity over independent releases. We highlight the relevance of this family by showing that it bridges several known leakage measures, including maximal $\alpha$-leakage $(\beta=1)$, maximal leakage $(\alpha=\infty,\beta=1)$, local differential privacy (LDP) $(\alpha=\infty,\beta=\infty)$, and local Rényi differential privacy (LRDP) $(\alpha=\beta)$, thereby giving an operational interpretation to local Rényi differential privacy. We also study a conditional version of $\text{M}\alpha\text{beL}$, leveraging which we recover differential privacy and Rényi differential privacy. A new variant of LRDP, which we call maximal Rényi leakage, appears as a special case of $\text{M}\alpha\text{beL}$ for $\alpha=\infty$ that smoothly tunes between maximal leakage ($\beta=1$) and LDP ($\beta=\infty$). Finally, we show that a vector form of the maximal Rényi leakage relaxes differential privacy under Gaussian and Laplacian mechanisms.