Showing 1 - 10 of 3,669 results for search: '"Asokan, N."'
Machine learning (ML) defenses protect against various risks to security, privacy, and fairness. Real-life models need simultaneous protection against multiple different risks, which necessitates combining multiple defenses. But combining defenses …
External link:
http://arxiv.org/abs/2411.09776
Regulations increasingly call for various assurances from machine learning (ML) model providers about their training data, training process, and the behavior of resulting models during inference. For better transparency, companies (e.g., Huggingface …
External link:
http://arxiv.org/abs/2406.17548
Outsourced computation presents a risk to the confidentiality of clients' sensitive data, since they have to trust that the service providers will not mishandle this data. Blinded Memory (BliMe) is a set of hardware extensions that addresses this …
External link:
http://arxiv.org/abs/2406.15302
Author: ElAtali, Hossam; Asokan, N.
Speculation is fundamental to achieving high CPU performance, yet it enables vulnerabilities such as Spectre attacks, which remain a significant challenge to mitigate without incurring substantial performance overheads. These attacks typically unfold …
External link:
http://arxiv.org/abs/2406.12110
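The snippet above is cut off by the results page, but the attack class it names is well documented. As a purely illustrative sketch (not taken from the paper), a Spectre-v1-style bounds-check-bypass gadget in C looks roughly like this; the array names and sizes are assumptions for the example only:

/* Illustrative Spectre-v1 (bounds-check bypass) gadget, not from the paper.
 * If the branch predictor is mistrained with in-bounds calls, the CPU may
 * speculatively execute the body with an out-of-bounds idx, reading adjacent
 * secret memory and leaving a cache footprint in probe_array that a later
 * timing phase can observe. */
#include <stddef.h>
#include <stdint.h>

uint8_t array1[16];
uint8_t probe_array[256 * 4096]; /* attacker-observable via cache timing */
size_t  array1_size = 16;

void victim(size_t idx) {
    if (idx < array1_size) {                 /* bounds check */
        uint8_t value = array1[idx];         /* speculative out-of-bounds read */
        (void)probe_array[value * 4096];     /* leaks value into the cache */
    }
}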
Diffusion-based text-to-image models are trained on large datasets scraped from the Internet, potentially containing unacceptable concepts (e.g., copyright-infringing or unsafe). We need concept removal techniques (CRTs) which are effective in …
External link:
http://arxiv.org/abs/2404.19227
Use-after-free (UAF) is a critical and prevalent problem in memory-unsafe languages. While many solutions have been proposed, balancing security, run-time cost, and memory overhead (an impossible trinity) is hard. In this paper, we show one way to …
External link:
http://arxiv.org/abs/2402.03373
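Since the entry above is truncated, here is a minimal, self-contained illustration (not from the paper) of the use-after-free pattern it refers to; the variable names and sizes are hypothetical:

/* Minimal use-after-free example in C (illustrative only, not from the paper).
 * A pointer is dereferenced after the allocation backing it has been freed;
 * if the allocator reuses that chunk, an attacker-controlled object can sit
 * at the same address, which is what heap-reuse exploits rely on. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    char *session = malloc(16);
    if (!session) return 1;
    strcpy(session, "alice");

    free(session);                 /* allocation released ...            */

    char *other = malloc(16);      /* ... and possibly reused here       */
    if (other) strcpy(other, "evil");

    printf("%s\n", session);       /* use after free: undefined behavior */
    free(other);
    return 0;
}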
Attacks on heap memory, encompassing memory overflow, double and invalid free, use-after-free (UAF), and various heap spraying techniques, are ever-increasing. Existing entropy-based secure memory allocators provide statistical defenses against …
External link:
http://arxiv.org/abs/2402.01894
Published in:
IEEE International Symposium on Hardware Oriented Security and Trust (HOST), 2024, pp. 373-377
Outsourced computation can put client data confidentiality at risk. Existing solutions are either inefficient or insufficiently secure: cryptographic techniques like fully homomorphic encryption incur significant overheads, even with hardware …
External link:
http://arxiv.org/abs/2401.16583
Recent initiatives known as Future Internet Architectures (FIAs) seek to redesign the Internet to improve performance, scalability, and security. However, some governments perceive Internet access as a threat to their political standing and engage in …
External link:
http://arxiv.org/abs/2401.15828
Machine learning (ML) models cannot neglect risks to security, privacy, and fairness. Several defenses have been proposed to mitigate such risks. When a defense is effective in mitigating one risk, it may correspond to increased or decreased …
External link:
http://arxiv.org/abs/2312.04542