Showing 1 - 10 of 22 for the search: '"Baader, Maximilian"'
Modern machine learning pipelines leverage large amounts of public data, making it infeasible to guarantee data quality and leaving models open to poisoning and backdoor attacks. However, provably bounding model behavior under such attacks remains an…
External link:
http://arxiv.org/abs/2406.05670
Federated learning works by aggregating locally computed gradients from multiple clients, thus enabling collaborative training without sharing private client data. However, prior work has shown that the data can actually be recovered by the server us…
External link:
http://arxiv.org/abs/2405.15586
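The entry above describes federated learning as aggregating locally computed gradients from multiple clients. A minimal sketch of such weighted aggregation (FedAvg-style; function and variable names are illustrative, not from the paper) could look like:

```python
# Weighted averaging of per-client gradient vectors, as used in
# FedAvg-style federated learning. Plain Python lists for clarity;
# real systems operate on framework tensors.

def aggregate(client_grads, client_sizes):
    """Average client gradients, weighted by local dataset size."""
    total = sum(client_sizes)
    dim = len(client_grads[0])
    agg = [0.0] * dim
    for grads, n in zip(client_grads, client_sizes):
        for i, g in enumerate(grads):
            agg[i] += (n / total) * g
    return agg

# Two clients, the first holding twice as much data as the second.
update = aggregate([[1.0, 2.0], [4.0, 8.0]], [2, 1])
# weights 2/3 and 1/3, so update ≈ [2.0, 4.0]
```

The server only ever sees the gradient vectors, which is the setting the gradient-inversion attacks mentioned above exploit.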
Author:
Balauca, Stefan, Müller, Mark Niklas, Mao, Yuhao, Baader, Maximilian, Fischer, Marc, Vechev, Martin
Training neural networks with high certified accuracy against adversarial examples remains an open problem despite significant efforts. While certification methods can effectively leverage tight convex relaxations for bound computation, in training,…
External link:
http://arxiv.org/abs/2403.07095
Federated learning is a framework for collaborative machine learning where clients only share gradient updates and not their private data with a server. However, it was recently shown that gradient inversion attacks can reconstruct this data from the…
External link:
http://arxiv.org/abs/2403.03945
Large language models are widespread, with their performance on benchmarks frequently guiding user preferences for one model over another. However, the vast amount of data these models are trained on can inadvertently lead to contamination with publi…
External link:
http://arxiv.org/abs/2402.02823
Convex relaxations are a key component of training and certifying provably safe neural networks. However, despite substantial progress, a wide and poorly understood accuracy gap to standard networks remains, raising the question of whether this is du…
External link:
http://arxiv.org/abs/2311.04015
Published in:
Quantum 7, 1185 (2023)
Stabilizer simulation can efficiently simulate an important class of quantum circuits consisting exclusively of Clifford gates. However, all existing extensions of this simulation to arbitrary quantum circuits including non-Clifford gates suffer from…
External link:
http://arxiv.org/abs/2304.00921
Interval analysis (or interval bound propagation, IBP) is a popular technique for verifying and training provably robust deep neural networks, a fundamental challenge in the area of reliable machine learning. However, despite substantial efforts, pro…
External link:
http://arxiv.org/abs/2112.05235
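The entry above concerns interval bound propagation. A minimal sketch of the core operation, pushing elementwise input bounds through a linear layer and a ReLU (a generic illustration of IBP, not code from the paper), could look like:

```python
# Interval bound propagation (IBP) through y = W x + b followed by ReLU.
# Each output's lower bound pairs positive weights with input lower
# bounds and negative weights with input upper bounds; vice versa for
# the upper bound.

def linear_interval(lo, hi, W, b):
    """Propagate bounds lo <= x <= hi (elementwise) through W x + b."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        u = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(u)
    return out_lo, out_hi

def relu_interval(lo, hi):
    """ReLU is monotone, so it is applied to each bound directly."""
    return [max(0.0, l) for l in lo], [max(0.0, u) for u in hi]

# x1 in [-1, 1], x2 in [0, 2]; one output neuron y = x1 - x2 + 0.5.
lo, hi = linear_interval([-1.0, 0.0], [1.0, 2.0], [[1.0, -1.0]], [0.5])
lo, hi = relu_interval(lo, hi)  # -> ([0.0], [1.5])
```

If the certified lower bound of the true class's logit margin stays positive over the whole input box, the network is provably robust on that region.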
Fair representation learning transforms user data into a representation that ensures fairness and utility regardless of the downstream application. However, learning individually fair representations, i.e., guaranteeing that similar individuals are t…
External link:
http://arxiv.org/abs/2111.13650
We present a new certification method for image and point cloud segmentation based on randomized smoothing. The method leverages a novel scalable algorithm for prediction and certification that correctly accounts for multiple testing, necessary for e…
External link:
http://arxiv.org/abs/2107.00228
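The last entry builds on randomized smoothing. As a generic illustration of the underlying certificate (the standard Cohen-et-al.-style bound for a single prediction, not the paper's segmentation-specific algorithm), the certified L2 radius from the top-class probability under Gaussian noise can be sketched as:

```python
from statistics import NormalDist

# Randomized smoothing: if the smoothed classifier's top class has
# probability p_top > 1/2 under Gaussian noise N(0, sigma^2 I), the
# prediction is certifiably constant within L2 radius
# sigma * Phi^{-1}(p_top), where Phi is the standard normal CDF.

def certified_radius(p_top, sigma):
    """Certified L2 radius for one smoothed prediction; 0.0 = abstain."""
    if p_top <= 0.5:
        return 0.0
    return sigma * NormalDist().inv_cdf(p_top)

r = certified_radius(0.9, 0.25)  # 0.25 * inv_cdf(0.9) ≈ 0.32
```

Certifying every pixel of a segmentation mask requires testing many such predictions at once, which is the multiple-testing issue the abstract refers to.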