Showing 1 - 10 of 145 for search: '"Melis, Luca"'
Empirical auditing has emerged as a means of catching some of the flaws in the implementation of privacy-preserving algorithms. Existing auditing mechanisms, however, are either computationally inefficient, requiring multiple runs of the machine learn…
External link:
http://arxiv.org/abs/2410.22235
The potential of transformer-based LLMs risks being hindered by privacy concerns due to their reliance on extensive datasets, possibly including sensitive information. Regulatory measures like GDPR and CCPA call for using robust auditing tools to add…
External link:
http://arxiv.org/abs/2406.16565
We present ReMasker, a new method of imputing missing values in tabular data by extending the masked autoencoding framework. Compared with prior work, ReMasker is both simple -- besides the missing values (i.e., naturally masked), we randomly "re-ma…
External link:
http://arxiv.org/abs/2309.13793
This paper studies federated linear contextual bandits under the notion of user-level differential privacy (DP). We first introduce a unified federated bandits framework that can accommodate various definitions of DP in the sequential decision-making…
External link:
http://arxiv.org/abs/2306.05275
Privacy-preserving machine learning (PPML) can help us train and deploy models that utilize private information. In particular, on-device machine learning allows us to avoid sharing raw data with a third-party server during inference. On-device model…
External link:
http://arxiv.org/abs/2305.12997
Author:
Hejazinia, Meisam, Huba, Dzmitry, Leontiadis, Ilias, Maeng, Kiwan, Malek, Mani, Melis, Luca, Mironov, Ilya, Nasr, Milad, Wang, Kaikai, Wu, Carole-Jean
Federated learning (FL) has emerged as an effective approach to address consumer privacy needs. FL has been successfully applied to certain machine learning tasks, such as training smart keyboard models and keyword spotting. Despite FL's initial succ…
External link:
http://arxiv.org/abs/2206.03852
Federated learning (FL) is an effective mechanism for data privacy in recommender systems by running machine learning model training on-device. While prior FL optimizations tackled the data and system heterogeneity challenges faced by FL, they assume…
External link:
http://arxiv.org/abs/2206.02633
Author:
Aydore, Sergul, Brown, William, Kearns, Michael, Kenthapadi, Krishnaram, Melis, Luca, Roth, Aaron, Siva, Ankit
We propose, implement, and evaluate a new algorithm for releasing answers to very large numbers of statistical queries like $k$-way marginals, subject to differential privacy. Our algorithm makes adaptive use of a continuous relaxation of the Project…
External link:
http://arxiv.org/abs/2103.06641
Robustness of machine learning models is critical for security-related applications, where real-world adversaries are uniquely focused on evading neural network based detectors. Prior work mainly focuses on crafting adversarial examples (AEs) with smal…
External link:
http://arxiv.org/abs/2102.12002
Author:
Melis, Luca
Large-scale data processing prompts a number of important challenges, including guaranteeing that collected or published data is not misused, preventing disclosure of sensitive information, and deploying privacy protection frameworks that support usa…
External link:
https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.756261