Showing 1 - 10 of 34
for search: '"Knolle, Moritz A."'
Author:
Schwethelm, Kristian, Kaiser, Johannes, Knolle, Moritz, Rueckert, Daniel, Kaissis, Georgios, Ziller, Alexander
Image reconstruction attacks on machine learning models pose a significant risk to privacy by potentially leaking sensitive information. Although defending against such attacks using differential privacy (DP) has proven effective, determining appropriate…
External link:
http://arxiv.org/abs/2403.07588
Quantifying the impact of individual data samples on machine learning models is an open research problem. This is particularly relevant when complex and high-dimensional relationships have to be learned from a limited sample of the data-generating distribution…
External link:
http://arxiv.org/abs/2311.03075
Author:
Meissen, Felix, Breuer, Svenja, Knolle, Moritz, Buyx, Alena, Müller, Ruth, Kaissis, Georgios, Wiestler, Benedikt, Rückert, Daniel
Background: With the ever-increasing amount of medical imaging data, the demand for algorithms to assist clinicians has grown. Unsupervised anomaly detection (UAD) models promise to aid in the crucial first step of disease detection. While previous…
External link:
http://arxiv.org/abs/2309.14198
Differentially private SGD (DP-SGD) holds the promise of enabling the safe and responsible application of machine learning to sensitive datasets. However, DP-SGD only provides a biased, noisy estimate of a mini-batch gradient. This renders optimisation…
External link:
http://arxiv.org/abs/2308.12018
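The "biased, noisy estimate" in the abstract above refers to the two distortions DP-SGD applies to each mini-batch gradient: per-example clipping (a bias) and Gaussian noise (variance). A minimal NumPy sketch of that aggregation step, not the authors' implementation; the function name and parameters are illustrative:

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Illustrative DP-SGD aggregation: clip each per-example gradient to
    `clip_norm`, sum, add Gaussian noise scaled by the clip norm, and average.
    Clipping introduces bias; the added noise makes the estimate stochastic."""
    rng = np.random.default_rng(rng)
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)

# With zero noise, only the clipping bias remains:
grads = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]  # norms 5.0 and 0.5
step = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=0.0)
# [3, 4] is scaled to norm 1 -> [0.6, 0.8]; [0.3, 0.4] passes unchanged,
# so the clipped mean is [0.45, 0.6].
```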
Author:
Mueller, Tamara T., Kolek, Stefan, Jungmann, Friederike, Ziller, Alexander, Usynin, Dmitrii, Knolle, Moritz, Rueckert, Daniel, Kaissis, Georgios
Differential privacy (DP) is typically formulated as a worst-case privacy guarantee over all individuals in a database. More recently, extensions to individual subjects or their attributes have been introduced. Under the individual/per-instance DP…
External link:
http://arxiv.org/abs/2211.10173
Author:
Mueller, Tamara T., Ziller, Alexander, Usynin, Dmitrii, Knolle, Moritz, Jungmann, Friederike, Rueckert, Daniel, Kaissis, Georgios
Differential privacy (DP) allows the quantification of privacy loss when the data of individuals is subjected to algorithmic processing such as machine learning, as well as the provision of objective privacy guarantees. However, while techniques such as…
External link:
http://arxiv.org/abs/2109.10582
Author:
Usynin, Dmitrii, Ziller, Alexander, Knolle, Moritz, Trask, Andrew, Prakash, Kritika, Rueckert, Daniel, Kaissis, Georgios
We introduce Tritium, an automatic differentiation-based sensitivity analysis framework for differentially private (DP) machine learning (ML). Optimal noise calibration in this setting requires efficient Jacobian matrix computations and tight bounds…
External link:
http://arxiv.org/abs/2109.10573
Author:
Kaissis, Georgios, Knolle, Moritz, Jungmann, Friederike, Ziller, Alexander, Usynin, Dmitrii, Rueckert, Daniel
The Gaussian mechanism (GM) represents a universally employed tool for achieving differential privacy (DP), and a large body of work has been devoted to its analysis. We argue that the three prevailing interpretations of the GM, namely $(\varepsilon, \delta)$-DP…
External link:
http://arxiv.org/abs/2109.10528
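For context on the Gaussian mechanism discussed in the entry above: the classical $(\varepsilon, \delta)$-DP calibration sets the noise standard deviation from the query's sensitivity. A short sketch of that textbook formula (Dwork and Roth), which is the baseline the paper's analysis re-examines; the function name is illustrative, not from the paper:

```python
import math

def classical_gm_sigma(sensitivity, eps, delta):
    """Classical (eps, delta)-DP calibration of the Gaussian mechanism:
    sigma = sqrt(2 * ln(1.25 / delta)) * sensitivity / eps.
    Valid for eps < 1; tighter analytic calibrations exist, which is part
    of why the GM's interpretations keep being revisited."""
    return math.sqrt(2.0 * math.log(1.25 / delta)) * sensitivity / eps

# Noise scale for a sensitivity-1 query at eps = 0.5, delta = 1e-5:
sigma = classical_gm_sigma(sensitivity=1.0, eps=0.5, delta=1e-5)
```

Note how the noise grows as either eps or delta shrinks: stronger privacy demands a larger sigma.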
Author:
Knolle, Moritz, Usynin, Dmitrii, Ziller, Alexander, Makowski, Marcus R., Rueckert, Daniel, Kaissis, Georgios
The application of differential privacy to the training of deep neural networks holds the promise of allowing large-scale (decentralized) use of sensitive data while providing rigorous privacy guarantees to the individual. The predominant approach to…
External link:
http://arxiv.org/abs/2107.14582
Author:
Knolle, Moritz, Ziller, Alexander, Usynin, Dmitrii, Braren, Rickmer, Makowski, Marcus R., Rueckert, Daniel, Kaissis, Georgios
We show that differentially private stochastic gradient descent (DP-SGD) can yield poorly calibrated, overconfident deep learning models. This represents a serious issue for safety-critical applications, e.g. in medical diagnosis. We highlight and ex…
External link:
http://arxiv.org/abs/2107.04296