Author:
Waseda, Atsushi; Nojima, Ryo; Wang, Lihua
Source:
Applied Sciences (2076-3417); Sep 2024, Vol. 14, Issue 17, p7625, 15p
Abstract:
This paper focuses on the relationship between decision trees, a typical machine learning method, and data anonymization. It is known that the information leaked from trained decision trees can be evaluated with well-studied data anonymization techniques, and that decision trees can be strengthened using k-anonymity and ℓ-diversity; unfortunately, this does not appear sufficient for differential privacy. In this paper, we show how k-anonymity might be applied to a random decision tree, a variant of the standard decision tree. Surprisingly, the result satisfies differential privacy, meaning that security is amplified from k-anonymity to differential privacy without adding any noise. [ABSTRACT FROM AUTHOR]
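For context, the two privacy notions compared in the abstract can be stated in their standard forms. The following is a sketch using the usual textbook definitions, not the paper's own formalization; the symbols T, QI, k, D, D', S, \mathcal{M}, and \varepsilon are introduced here only for illustration.

% k-anonymity: every record of a released table T is indistinguishable,
% on its quasi-identifier attributes QI, from at least k-1 other records.
\[
  \forall t \in T:\;
  \bigl|\{\, t' \in T : t'[\mathrm{QI}] = t[\mathrm{QI}] \,\}\bigr| \ \ge\ k
\]
% \varepsilon-differential privacy: a randomized mechanism \mathcal{M}
% changes its output distribution by at most a factor e^{\varepsilon}
% when one record changes; for all neighboring datasets D, D' differing
% in a single record and every set of outputs S,
\[
  \Pr[\mathcal{M}(D) \in S] \ \le\ e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S]
\]

The abstract's claim is that, for random decision trees, enforcing the first (counting-based) condition already yields a guarantee of the second kind, without injecting noise.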
Database:
Complementary Index