Showing 1 - 10 of 37 for the search: '"Nicolas Papernot"'
Author:
Congyu Fang, Adam Dziedzic, Lin Zhang, Laura Oliva, Amol Verma, Fahad Razak, Nicolas Papernot, Bo Wang
Published in:
EBioMedicine, Vol 101, Pp 105006- (2024)
Summary: Background: Machine Learning (ML) has demonstrated great potential in medical data analysis. Large datasets collected from diverse sources and settings are essential for ML models in healthcare to achieve better accuracy and generalizability…
External link:
https://doaj.org/article/a4339587684a45138530983b0f66431f
Author:
Vijay Veerabadran, Josh Goldman, Shreya Shankar, Brian Cheung, Nicolas Papernot, Alexey Kurakin, Ian Goodfellow, Jonathon Shlens, Jascha Sohl-Dickstein, Michael C. Mozer, Gamaleldin F. Elsayed
Published in:
Nature Communications, Vol 14, Iss 1, Pp 1-12 (2023)
Abstract: Although artificial neural networks (ANNs) were inspired by the brain, ANNs exhibit a brittleness not generally observed in human perception. One shortcoming of ANNs is their susceptibility to adversarial perturbations: subtle modulations of…
External link:
https://doaj.org/article/e4a0c04dd90842a19effeed01013b872
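The adversarial perturbations mentioned in the abstract above can be illustrated with a minimal fast-gradient-sign-style sketch; this is a generic textbook construction, not the specific method of the listed paper, and the function name and epsilon value are illustrative:

```python
import numpy as np

def fgsm_perturb(x, grad_wrt_x, eps=0.1):
    """Fast-gradient-sign perturbation: nudge each input coordinate by eps
    in the direction that increases the model's loss, producing a subtle
    modulation of the input that can flip a model's prediction."""
    return x + eps * np.sign(grad_wrt_x)
```

For example, an input `[0.5, -0.5]` with loss gradient `[1.0, -2.0]` and `eps=0.1` is perturbed to `[0.6, -0.6]`; each coordinate moves by exactly `eps`, regardless of the gradient's magnitude.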
Author:
Vijay Veerabadran, Josh Goldman, Shreya Shankar, Brian Cheung, Nicolas Papernot, Alexey Kurakin, Ian Goodfellow, Jonathon Shlens, Jascha Sohl-Dickstein, Michael C. Mozer, Gamaleldin F. Elsayed
Published in:
Nature Communications, Vol 15, Iss 1, Pp 1-1 (2024)
External link:
https://doaj.org/article/0d807a316fcd40d5a093c7d845c56fd2
Published in:
Proceedings on Privacy Enhancing Technologies. 2023:307-320
Differentially Private Stochastic Gradient Descent (DP-SGD) is the canonical approach to training deep neural networks with guarantees of Differential Privacy (DP). However, the modifications DP-SGD introduces to vanilla gradient descent negatively impact…
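The modifications DP-SGD makes to vanilla gradient descent are per-example gradient clipping followed by Gaussian noise addition. A minimal NumPy sketch of one update step, with illustrative function and parameter names (not taken from the paper):

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_mult=1.1, rng=None):
    """One DP-SGD update: clip each per-example gradient to L2 norm
    clip_norm, average the clipped gradients, then add Gaussian noise
    with standard deviation noise_mult * clip_norm / batch_size."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / len(per_example_grads),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)
```

Both the clipping (which biases the average gradient) and the injected noise are what distinguish this step from vanilla SGD, and they are the source of the accuracy cost the abstract alludes to.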
Author:
Ali Shahin Shamsabadi, Brij Mohan Lal Srivastava, Aurélien Bellet, Nathalie Vauquier, Emmanuel Vincent, Mohamed Maouche, Marc Tommasi, Nicolas Papernot
Published in:
Proceedings on Privacy Enhancing Technologies, 2023, 2023 (1), ⟨10.48550/arXiv.2202.11823⟩
Sharing real-world speech utterances is key to the training and deployment of voice-based services. However, it also raises privacy risks, as speech contains a wealth of personal data. Speaker anonymization aims to remove speaker…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::c43fc58d17be944b82095e3f6520e3b5
https://inria.hal.science/hal-03588932
Author:
Nicolas Papernot
Published in:
Proceedings of the 9th ACM Workshop on Moving Target Defense.
Published in:
2021 36th IEEE/ACM International Conference on Automated Software Engineering Workshops (ASEW).
Published in:
EuroS&P
The high energy costs of neural network training and inference led to the use of acceleration hardware such as GPUs and TPUs. While this enabled us to train large-scale neural networks in datacenters and deploy them on edge devices, the focus so far…
Published in:
Boucher, N., Shumailov, I., Anderson, R. & Papernot, N. 2022, 'Bad Characters: Imperceptible NLP Attacks', in Proceedings of the 43rd IEEE Symposium on Security and Privacy (SP 2022), San Francisco, California, United States, 23/05/22, pp. 1987-2004. https://doi.org/10.1109/SP46214.2022.9833641
Several years of research have shown that machine-learning systems are vulnerable to adversarial examples, both in theory and in practice. Until now, such attacks have primarily targeted visual models, exploiting the gap between human and machine perception…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::3e2997b8820496b6db29d9c29f873e1b
http://arxiv.org/abs/2106.09898
Author:
Hui Xu, Guanpeng Li, Homa Alemzadeh, Rakesh Bobba, Varun Chandrasekaran, David E. Evans, Nicolas Papernot, Karthik Pattabiraman, Florian Tramer
Published in:
2021 51st Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W).