Does $k$-Anonymous Microaggregation Affect Machine-Learned Macrotrends?

Authors: Javier Parra-Arnau, Ana Rodriguez-Hoyos, José Estrada-Jiménez, David Rebollo-Monedero, Jordi Forné
Year of publication: 2018
Source: IEEE Access 6:28258-28277
ISSN: 2169-3536
DOI: 10.1109/access.2018.2834858
Description: In the era of big data, the availability of massive amounts of information makes privacy protection more necessary than ever. Among the variety of anonymization mechanisms, microaggregation is a common approach to satisfying the popular requirement of $k$-anonymity in statistical databases. In essence, $k$-anonymous microaggregation aggregates quasi-identifiers to hide the identity of each data subject within a group of $k-1$ other subjects. As with any perturbative mechanism, however, anonymization comes at the cost of some information loss, which may hinder the ultimate purpose of the released data, very often the building of machine-learning models for macrotrend analysis. To assess the impact of microaggregation on the utility of the anonymized data, it is necessary to evaluate the resulting accuracy of those models. In this paper, we address the problem of measuring the effect of $k$-anonymous microaggregation on the empirical utility of microdata. We quantify utility as the accuracy of classification models learned from microaggregated data and evaluated on original test data. Our experiments indicate, with some consistency, that the impact of the de facto microaggregation standard, maximum distance to average vector (MDAV; see the sketch below), on the performance of machine-learning algorithms is often minor to negligible for a wide range of $k$, across a variety of classification algorithms and data sets. Furthermore, the experimental evidence suggests that the traditional distortion measure used in the microdata-anonymization community may be inappropriate for evaluating the utility of microaggregated data.
Database: OpenAIRE
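Below is a minimal, illustrative Python sketch of the pipeline the abstract describes: MDAV-style $k$-anonymous microaggregation of the training records, followed by training a classifier on the anonymized data and scoring it on the original test data. It is not the authors' code. It assumes NumPy and scikit-learn, treats every attribute of a standard benchmark data set as a quasi-identifier, and uses a random forest purely as an example classifier.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def mdav_microaggregate(X, k):
    """k-anonymous microaggregation: replace each record with the centroid of
    its MDAV cluster, so every released record is shared by >= k subjects."""
    X = np.asarray(X, dtype=float)
    X_anon = X.copy()
    remaining = list(range(len(X)))

    def group_around(seed, pool):
        # the seed plus its k-1 nearest neighbours within the remaining pool
        d = np.linalg.norm(X[pool] - X[seed], axis=1)
        return [pool[i] for i in np.argsort(d)[:k]]

    def farthest_from(point, pool):
        d = np.linalg.norm(X[pool] - point, axis=1)
        return pool[int(np.argmax(d))]

    while len(remaining) >= 3 * k:
        centroid = X[remaining].mean(axis=0)
        r = farthest_from(centroid, remaining)      # extreme record
        s = farthest_from(X[r], remaining)          # record farthest from r
        for seed in (r, s):
            if seed not in remaining:               # degenerate tie case
                continue
            group = group_around(seed, remaining)
            X_anon[group] = X[group].mean(axis=0)   # publish the centroid
            remaining = [i for i in remaining if i not in group]

    if len(remaining) >= 2 * k:
        centroid = X[remaining].mean(axis=0)
        group = group_around(farthest_from(centroid, remaining), remaining)
        X_anon[group] = X[group].mean(axis=0)
        remaining = [i for i in remaining if i not in group]

    X_anon[remaining] = X[remaining].mean(axis=0)   # last group (k..2k-1 records)
    return X_anon


# Empirical utility in the spirit of the abstract: train on microaggregated
# training data, evaluate on the original (non-perturbed) test data.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

baseline = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("no anonymization:", accuracy_score(y_te, baseline.predict(X_te)))

for k in (5, 25, 100):
    model = RandomForestClassifier(random_state=0).fit(mdav_microaggregate(X_tr, k), y_tr)
    print(f"MDAV, k = {k}:", accuracy_score(y_te, model.predict(X_te)))
```

In the paper's setting, only the quasi-identifiers would be microaggregated and the confidential attributes left intact; microaggregating all attributes here simply keeps the sketch short.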