Showing 1 - 10
of 5 910
for search: '"Ziller, A."'
Verification throughput is becoming a major bottleneck, since the complexity and size of SoC designs are ever increasing. Simply adding more CPU cores and running more tests in parallel will no longer scale. This paper discusses
External link:
http://arxiv.org/abs/2405.17481
We classify curvature homogeneous hypersurfaces in S^4 and H^4. In higher dimension one only has the FKM examples and an isolated one by Tsukada of a hypersurface in H^5. Besides some simple examples, we show that there exists an isolated hypersurfac
External link:
http://arxiv.org/abs/2404.02302
Author:
Schwethelm, Kristian, Kaiser, Johannes, Knolle, Moritz, Rueckert, Daniel, Kaissis, Georgios, Ziller, Alexander
Image reconstruction attacks on machine learning models pose a significant risk to privacy by potentially leaking sensitive information. Although defending against such attacks using differential privacy (DP) has proven effective, determining appropr
External link:
http://arxiv.org/abs/2403.07588
Author:
Ziller, Alexander, Riess, Anneliese, Schwethelm, Kristian, Mueller, Tamara T., Rueckert, Daniel, Kaissis, Georgios
Reconstruction attacks on machine learning (ML) models pose a strong risk of leakage of sensitive data. In specific contexts, an adversary can (almost) perfectly reconstruct training data samples from a trained model using the model's gradients. When
External link:
http://arxiv.org/abs/2402.12861
Unsupervised anomaly detection (UAD) alleviates large labeling efforts by training exclusively on unlabeled in-distribution data and detecting outliers as anomalies. Generally, the assumption prevails that large training datasets allow the training o
External link:
http://arxiv.org/abs/2312.03804
Author:
Ziller, Alexander, Mueller, Tamara T., Stieger, Simon, Feiner, Leonhard, Brandt, Johannes, Braren, Rickmer, Rueckert, Daniel, Kaissis, Georgios
Artificial Intelligence (AI) models are vulnerable to information leakage of their training data, which can be highly sensitive, for example in medical imaging. Privacy Enhancing Technologies (PETs), such as Differential Privacy (DP), aim to circumve
External link:
http://arxiv.org/abs/2312.04590
Author:
Ziller, Mario
We pursue the question of how integers can be ordered or partitioned according to their divisibility properties. Based on pseudometrics on $\mathbb{Z}$, we investigate induced preorders, associated equivalence relations, and quotient sets. The focus is
External link:
http://arxiv.org/abs/2310.15628
Author:
Pulemotov, Artem, Ziller, Wolfgang
We obtain a complete description of divergent Palais-Smale sequences for the prescribed Ricci curvature functional on compact homogeneous spaces. As an application, we prove the existence of saddle points on generalized Wallach spaces and several typ
External link:
http://arxiv.org/abs/2309.08090
Differentially private SGD (DP-SGD) holds the promise of enabling the safe and responsible application of machine learning to sensitive datasets. However, DP-SGD only provides a biased, noisy estimate of a mini-batch gradient. This renders optimisati
External link:
http://arxiv.org/abs/2308.12018
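The snippet above notes that DP-SGD yields only a biased, noisy estimate of the mini-batch gradient. A minimal sketch of why, assuming the standard Abadi-style mechanism (per-example clipping to an L2 bound, then Gaussian noise); the function name and parameters here are illustrative, not from the paper:

```python
import numpy as np

def dp_sgd_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Privatized mini-batch gradient estimate (DP-SGD style sketch).

    Each per-example gradient is clipped to L2 norm `clip_norm` -- this
    clipping is what makes the estimate biased. Gaussian noise with
    standard deviation `noise_multiplier * clip_norm` is then added to
    the clipped sum before averaging, which makes it noisy.
    """
    rng = np.random.default_rng(rng)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down only if the gradient exceeds the clipping bound.
        scale = min(1.0, clip_norm / max(norm, 1e-12))
        clipped.append(g * scale)
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)
```

With `noise_multiplier=0.0` the bias from clipping is visible in isolation: a gradient of norm 5 clipped to norm 1 no longer averages to the true mean.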
Author:
Mueller, Tamara T., Zhou, Siyu, Starck, Sophie, Jungmann, Friederike, Ziller, Alexander, Aksoy, Orhun, Movchan, Danylo, Braren, Rickmer, Kaissis, Georgios, Rueckert, Daniel
Body fat volume and distribution can be a strong indicator of a person's overall health and of the risk of developing diseases like type 2 diabetes and cardiovascular diseases. Frequently used measures for fat estimation are the body mass index (BMI)
External link:
http://arxiv.org/abs/2308.02493