Showing 1 - 10 of 263 results for the search: "Reiter, Michael K."
Cryptocurrency introduces usability challenges by requiring users to manage signing keys. Popular signing key management services (e.g., custodial wallets), however, either introduce a trusted party or burden users with managing signing key shares …
External link:
http://arxiv.org/abs/2407.16473
Auditing the use of data in training machine-learning (ML) models is an increasingly pressing challenge, as myriad ML practitioners routinely leverage the effort of content creators to train models without their permission. In this paper, we propose …
External link:
http://arxiv.org/abs/2407.15100
Author:
Sheng, Peiyao, Wu, Chenyuan, Malkhi, Dahlia, Reiter, Michael K., Stathakopoulou, Chrysoula, Wei, Michael, Yin, Maofan
This paper introduces and develops the concept of "ticketing", through which atomic broadcasts are orchestrated by nodes in a distributed system. The paper studies different ticketing regimes that allow parallelism, yet prevent slow nodes from …
External link:
http://arxiv.org/abs/2407.00030
Federated Learning (FL) is a decentralized machine learning method that enables participants to collaboratively train a model without sharing their private data. Despite its privacy and scalability benefits, FL is susceptible to backdoor attacks …
External link:
http://arxiv.org/abs/2405.06206
Author:
Bandarupalli, Akhil, Bhat, Adithya, Bagchi, Saurabh, Kate, Aniket, Liu-Zhang, Chen-Da, Reiter, Michael K.
Agreement protocols are crucial in various emerging applications, spanning from distributed (blockchain) oracles to fault-tolerant cyber-physical systems. In scenarios where sensor/oracle nodes measure a common source, maintaining output within the …
External link:
http://arxiv.org/abs/2405.02431
Foundation models have become the backbone of the AI ecosystem. In particular, a foundation model can be used as a general-purpose feature extractor to build various downstream classifiers. However, foundation models are vulnerable to backdoor attacks …
External link:
http://arxiv.org/abs/2402.14977
Untrusted data used to train a model might have been manipulated to endow the learned model with hidden properties that the data contributor might later exploit. Data purification aims to remove such manipulations prior to training the model. We propose …
External link:
http://arxiv.org/abs/2312.01281
Honeywords are decoy passwords that can be added to a credential database; if a login attempt uses a honeyword, this indicates that the site's credential database has been leaked. In this paper we explore the basic requirements for honeywords to be …
External link:
http://arxiv.org/abs/2309.10323
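The honeyword idea summarized in this entry can be illustrated with a minimal sketch. This is not the paper's construction, only an assumed toy design: the names `Honeychecker`, `make_sweetwords`, and `login` are hypothetical, and in a real system the honeychecker would be a separate hardened service and the passwords would be stored hashed.

```python
import hmac
import secrets

class Honeychecker:
    """Illustrative honeychecker: stores only which index is the real password.

    Kept separate from the credential database so that leaking the database
    alone does not reveal which of the stored passwords is genuine.
    """
    def __init__(self):
        self._real_index = {}

    def register(self, user, index):
        self._real_index[user] = index

    def check(self, user, index):
        return index == self._real_index.get(user)

def make_sweetwords(real_password, decoys):
    """Shuffle the real password among decoy honeywords.

    Returns the shuffled list (what the site's database would store)
    and the index of the real password (what the honeychecker stores).
    """
    words = list(decoys) + [real_password]
    secrets.SystemRandom().shuffle(words)
    return words, words.index(real_password)

def login(user, attempt, sweetwords, checker):
    """Check a login attempt against the stored sweetwords."""
    for i, word in enumerate(sweetwords):
        if hmac.compare_digest(word, attempt):
            if checker.check(user, i):
                return "ok"
            # A decoy matched: someone is replaying a leaked database entry.
            return "ALARM: honeyword used; credential database likely leaked"
    return "wrong password"
```

A correct password authenticates normally; submitting any decoy from the stored list raises the breach alarm, which is the detection signal the abstract describes.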
Machine-learning models are known to be vulnerable to evasion attacks that perturb model inputs to induce misclassifications. In this work, we identify real-world scenarios where the true threat cannot be assessed accurately by existing attacks. …
External link:
http://arxiv.org/abs/2306.16614
Author:
Bhat, Adithya, Bandarupalli, Akhil, Nagaraj, Manish, Bagchi, Saurabh, Kate, Aniket, Reiter, Michael K.
Modern Byzantine Fault-Tolerant State Machine Replication (BFT-SMR) solutions focus on reducing communication complexity, improving throughput, or lowering latency. This work explores the energy efficiency of BFT-SMR protocols. First, we propose a …
External link:
http://arxiv.org/abs/2304.04998