Showing 1 - 10 of 269 for search: '"Vechev, Martin"'
A key challenge of quantum programming is uncomputation: the reversible deallocation of qubits. And while there has been much recent progress on automating uncomputation, state-of-the-art methods are insufficient for handling today's expressive quant…
External link: http://arxiv.org/abs/2406.14227
Rigorous software testing is crucial for developing and maintaining high-quality code, making automated test generation a promising avenue for both improving software quality and boosting the effectiveness of code generation methods. However, while c…
External link: http://arxiv.org/abs/2406.12952
Recently, powerful Large Language Models (LLMs) have become easily accessible to hundreds of millions of users worldwide. However, their strong capabilities and vast world knowledge do not come without associated privacy risks. In this work, we focus…
External link: http://arxiv.org/abs/2406.07217
Training certifiably robust neural networks is an important but challenging task. While many algorithms for (deterministic) certified training have been proposed, they are often evaluated on different training schedules, certification methods, and sy…
External link: http://arxiv.org/abs/2406.04848
The goal of Fair Representation Learning (FRL) is to mitigate biases in machine learning models by learning data representations that enable high accuracy on downstream tasks while minimizing discrimination based on sensitive attributes. The evaluati…
External link: http://arxiv.org/abs/2405.18161
Quantization leverages lower-precision weights to reduce the memory usage of large language models (LLMs) and is a key technique for enabling their deployment on commodity hardware. While LLM quantization's impact on utility has been extensively expl…
External link: http://arxiv.org/abs/2405.18137
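The abstract above describes quantization as mapping weights to lower-precision values to save memory. A minimal sketch of one generic variant, symmetric uniform quantization (an illustrative technique, not the specific method studied in the linked paper; all names here are hypothetical):

```python
import numpy as np

def quantize(weights: np.ndarray, bits: int = 4):
    """Map float weights to signed low-bit integers plus one float scale."""
    qmax = 2 ** (bits - 1) - 1              # e.g. 7 for 4-bit signed
    scale = np.abs(weights).max() / qmax    # largest weight maps to +/-qmax
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the integer codes."""
    return q.astype(np.float32) * scale

w = np.array([0.9, -0.35, 0.02, 0.7], dtype=np.float32)
q, s = quantize(w)
w_hat = dequantize(q, s)    # approximation; error is at most scale / 2
```

Storing `q` (int8 here, 4 significant bits) instead of float32 weights is what yields the memory savings the abstract refers to, at the cost of a bounded rounding error per weight.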
Public benchmarks play an essential role in the evaluation of large language models. However, data contamination can lead to inflated performance, rendering them unreliable for model comparison. It is therefore crucial to detect contamination and est…
External link: http://arxiv.org/abs/2405.16281
Federated learning works by aggregating locally computed gradients from multiple clients, thus enabling collaborative training without sharing private client data. However, prior work has shown that the data can actually be recovered by the server us…
External link: http://arxiv.org/abs/2405.15586
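The abstract above summarizes federated learning as the server aggregating locally computed updates. A minimal sketch of FedAvg-style weighted aggregation (a generic scheme for illustration, not the attack or defense studied in the linked paper; all names are hypothetical):

```python
import numpy as np

def aggregate(updates, num_examples):
    """Average client updates, weighted by each client's local dataset size."""
    total = sum(num_examples)
    return sum(u * (n / total) for u, n in zip(updates, num_examples))

# Two clients send updates for a shared 3-parameter model; the server
# sees only these updates, never the raw client data.
client_updates = [np.array([1.0, 0.0, 2.0]), np.array([3.0, 2.0, 0.0])]
sizes = [10, 30]    # client 2 holds 3x more data, so its update dominates
avg = aggregate(client_updates, sizes)    # -> [2.5, 1.5, 0.5]
```

The privacy risk the abstract points to stems from exactly this setup: although raw data never leaves the clients, the transmitted updates are functions of that data, which is what reconstruction attacks exploit.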
As large language models (LLMs) become ubiquitous in our daily tasks and digital interactions, associated privacy risks are increasingly in focus. While LLM privacy research has primarily focused on the leakage of model training data, it has recently…
External link: http://arxiv.org/abs/2404.10618
Authors: Balauca, Stefan, Müller, Mark Niklas, Mao, Yuhao, Baader, Maximilian, Fischer, Marc, Vechev, Martin
Training neural networks with high certified accuracy against adversarial examples remains an open problem despite significant efforts. While certification methods can effectively leverage tight convex relaxations for bound computation, in training…
External link: http://arxiv.org/abs/2403.07095