Showing 1 - 10 of 31 for search: '"Jälkö, Joonas"'
We apply a state-of-the-art membership inference attack (MIA) to systematically test the practical privacy vulnerability of fine-tuning large image classification models. We focus on understanding the properties of data sets and samples that make the…
External link:
http://arxiv.org/abs/2402.06674
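This is not the paper's attack, but a minimal loss-threshold membership inference baseline (all loss values below are made up for illustration) shows the underlying signal: training-set members tend to incur lower loss than held-out samples.

```python
import numpy as np

def loss_threshold_mia(losses, threshold):
    # Guess "member" for samples whose loss falls below the threshold;
    # models typically fit their own training data more closely.
    return losses < threshold

# Hypothetical per-sample losses from a fine-tuned classifier.
member_losses = np.array([0.05, 0.10, 0.02])      # seen during training
nonmember_losses = np.array([0.90, 1.30, 0.70])   # held out

guesses_m = loss_threshold_mia(member_losses, threshold=0.5)
guesses_n = loss_threshold_mia(nonmember_losses, threshold=0.5)
attack_accuracy = (guesses_m.sum() + (~guesses_n).sum()) / 6
```

State-of-the-art attacks such as LiRA sharpen this by calibrating per-sample thresholds with shadow models, but the membership signal they exploit is the same.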
We study how the batch size affects the total gradient variance in differentially private stochastic gradient descent (DP-SGD), seeking a theoretical explanation for the usefulness of large batch sizes. As DP-SGD is the basis of modern DP deep learning…
External link:
http://arxiv.org/abs/2402.03990
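The batch-size effect the snippet refers to can be seen in a minimal DP-SGD gradient sketch (a simplification with toy gradients, not the paper's analysis): Gaussian noise is added once per batch and then divided by the batch size B, so the injected noise variance on the averaged gradient shrinks as 1/B², one intuition for why large batches help.

```python
import numpy as np

def dp_sgd_gradient(per_example_grads, clip_norm, noise_multiplier, rng):
    # Clip each per-example gradient to L2 norm clip_norm, sum them,
    # add Gaussian noise with std noise_multiplier * clip_norm, average.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / norms)
    noise = rng.normal(0.0, noise_multiplier * clip_norm,
                       size=per_example_grads.shape[1])
    return (clipped.sum(axis=0) + noise) / len(per_example_grads)

rng = np.random.default_rng(0)
grads = np.array([[3.0, 4.0],    # norm 5 -> clipped down to [0.6, 0.8]
                  [0.6, 0.8]])   # norm 1 -> unchanged
g = dp_sgd_gradient(grads, clip_norm=1.0, noise_multiplier=0.0, rng=rng)
```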
Author:
Tito, Rubèn, Nguyen, Khanh, Tobaben, Marlon, Kerkouche, Raouf, Souibgui, Mohamed Ali, Jung, Kangsoo, Jälkö, Joonas, D'Andecy, Vincent Poulain, Joseph, Aurelie, Kang, Lei, Valveny, Ernest, Honkela, Antti, Fritz, Mario, Karatzas, Dimosthenis
Document Visual Question Answering (DocVQA) has quickly grown into a central task of document understanding. But despite the fact that documents contain sensitive or copyrighted information, none of the current DocVQA methods offers strong privacy guarantees…
External link:
http://arxiv.org/abs/2312.10108
Consider a setting where multiple parties holding sensitive data aim to collaboratively learn population-level statistics, but pooling the sensitive data sets is not possible. We propose a framework in which each party shares a differentially private…
External link:
http://arxiv.org/abs/2308.04755
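A minimal sketch of the setting (the Gaussian mechanism here is a generic stand-in, not the paper's method, and all numbers are made up): each party perturbs its local statistic before sharing, and the aggregator combines only the noisy releases, never the raw data.

```python
import numpy as np

def dp_release(local_stat, sensitivity, noise_scale, rng):
    # Gaussian-mechanism sketch: noise std is calibrated to the
    # statistic's sensitivity; raw data never leaves the party.
    return local_stat + rng.normal(0.0, sensitivity * noise_scale)

rng = np.random.default_rng(0)
party_means = [0.4, 0.6, 0.5]  # each party's private local mean
releases = [dp_release(m, sensitivity=0.01, noise_scale=1.0, rng=rng)
            for m in party_means]
pooled = sum(releases) / len(releases)  # population-level estimate
```

Averaging the independent releases also averages their noise, so the pooled estimate is more accurate than any single party's release.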
Generating synthetic data, with or without differential privacy, has attracted significant attention as a potential solution to the dilemma between making data easily available, and the privacy of data subjects. Several works have shown that consistent…
External link:
http://arxiv.org/abs/2305.16795
Differentially private (DP) release of multidimensional statistics typically considers an aggregate sensitivity, e.g. the vector norm of a high-dimensional vector. However, different dimensions of that vector might have widely different magnitudes and…
External link:
http://arxiv.org/abs/2210.15961
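A toy contrast (illustrative only, not the paper's mechanism; a real deployment would also need a privacy accountant to split the budget across dimensions): calibrating noise per dimension, instead of to one aggregate norm, keeps small-magnitude coordinates from being drowned by noise sized for the largest one.

```python
import numpy as np

def gaussian_per_dim(stats, per_dim_sensitivity, noise_multiplier, rng):
    # Noise std per coordinate follows that coordinate's own sensitivity,
    # rather than a single scale derived from the aggregate vector norm.
    scales = noise_multiplier * np.asarray(per_dim_sensitivity, dtype=float)
    return np.asarray(stats, dtype=float) + rng.normal(0.0, scales)

rng = np.random.default_rng(0)
stats = [1000.0, 0.5]  # dimensions of very different magnitude
release = gaussian_per_dim(stats, per_dim_sensitivity=[10.0, 0.01],
                           noise_multiplier=1.0, rng=rng)
```

Under an aggregate calibration, the second coordinate would receive noise on the order of the first coordinate's sensitivity and become useless.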
While generation of synthetic data under differential privacy (DP) has received a lot of attention in the data privacy community, analysis of synthetic data has received much less. Existing work has shown that simply analysing DP synthetic data as if…
External link:
http://arxiv.org/abs/2205.14485
Author:
Prediger, Lukas (lukas.m.prediger@aalto.fi), Jälkö, Joonas, Honkela, Antti, Kaski, Samuel
Published in:
BMC Medical Informatics & Decision Making, Vol. 24, Issue 1 (14 June 2024), pp. 1-14.
In recent years, local differential privacy (LDP) has emerged as a technique of choice for privacy-preserving data collection in several scenarios when the aggregator is not trustworthy. LDP provides client-side privacy by adding noise at the user's…
External link:
http://arxiv.org/abs/2110.14426
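The canonical example of client-side noise is randomized response; a minimal sketch for one-bit values (function and parameter names are mine, not from the paper):

```python
import math, random

def randomized_response(true_bit, epsilon, rng=random):
    # Report the true bit with probability e^eps / (e^eps + 1),
    # otherwise flip it -- the aggregator never sees the raw value.
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return true_bit if rng.random() < p_truth else 1 - true_bit

def debias_mean(reports, epsilon):
    # Invert the known flipping probability to get an unbiased
    # estimate of the population mean of the true bits.
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)

# With a very loose budget the reports are essentially truthful.
reports = [randomized_response(b, epsilon=100.0) for b in [1, 0, 1, 1]]
estimate = debias_mean(reports, epsilon=100.0)
```

At realistic budgets (say epsilon between 0.5 and 4) individual reports are heavily randomized, and accurate population estimates require many clients.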
Generalized linear models (GLMs) such as logistic regression are among the most widely used arms in the data analyst's repertoire and are often used on sensitive datasets. A large body of prior works that investigate GLMs under differential privacy (DP) consider…
External link:
http://arxiv.org/abs/2011.00467
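One classical approach to DP GLMs is output perturbation in the spirit of Chaudhuri et al.: fit an L2-regularized logistic regression non-privately, then add noise scaled to the minimizer's sensitivity 2/(n·λ). The sketch below is a simplification with made-up data, an unverified learning rate, and plain gradient descent standing in for an exact solver; it is not the method of the paper above.

```python
import numpy as np

def dp_logreg_output_perturbation(X, y, lam, epsilon, rng, steps=500):
    # Non-private fit by gradient descent on the regularized logistic
    # loss; labels y are in {-1, +1}.
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        margins = y * (X @ w)
        grad = -(X * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0) + lam * w
        w -= 0.5 * grad
    # Output perturbation: noise density ~ exp(-eps * ||b|| / (2/(n*lam))),
    # sampled as a Gamma-distributed norm times a uniform direction.
    norm = rng.gamma(shape=d, scale=2.0 / (n * lam * epsilon))
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    return w + norm * direction

rng = np.random.default_rng(0)
X = np.array([[1.0], [2.0], [-1.0], [-2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w_priv = dp_logreg_output_perturbation(X, y, lam=0.1, epsilon=1e6, rng=rng)
```

With a very large epsilon the added noise is negligible and the released weights still separate the toy data; realistic budgets trade accuracy for privacy.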