Showing 1 - 6 of 6 for search: '"Banerjee, Sarbartha"'
Author:
Banerjee, Sarbartha, Sahu, Prateek, Luo, Mulong, Vahldiek-Oberwagner, Anjo, Yadwadkar, Neeraja J., Tiwari, Mohit
Large language models (LLMs) used across enterprises often use proprietary models and operate on sensitive inputs and data. The wide range of attack vectors identified in prior research - targeting various software and hardware components - …
External link:
http://arxiv.org/abs/2411.13459
Trusted execution environments (TEEs) for machine learning accelerators are indispensable in secure and efficient ML inference. Optimizing workloads through state-space exploration for the accelerator architectures improves performance and energy consumption. …
External link:
http://arxiv.org/abs/2409.02817
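
To make the idea of state-space exploration over accelerator mappings concrete, below is a minimal Python sketch that enumerates a small, hypothetical space of matmul tile sizes and loop orders and keeps the configuration with the lowest score under a toy cost model. The search space, cost weights, and parameter names are illustrative assumptions, not the exploration tool described in the paper above.

from itertools import product

# Hypothetical search space: tile sizes and loop orders for a matmul mapping.
TILE_M = [16, 32, 64]
TILE_N = [16, 32, 64]
TILE_K = [8, 16, 32]
LOOP_ORDERS = ["mnk", "mkn", "nmk"]

def cost(tm, tn, tk, order, M=1024, N=1024, K=1024):
    """Toy cost model: weighted sum of tile count and off-chip traffic."""
    tiles = (M // tm) * (N // tn) * (K // tk)
    # Assume on-chip reuse depends on which dimension is innermost (illustrative only).
    reuse = {"mnk": tk, "mkn": tn, "nmk": tm}[order]
    traffic = tiles * (tm * tk + tk * tn + tm * tn) / reuse
    return tiles + 0.001 * traffic  # arbitrary weighting of latency vs. energy proxy

def explore():
    """Exhaustively enumerate the (small) state space and return the best mapping."""
    best = min(
        product(TILE_M, TILE_N, TILE_K, LOOP_ORDERS),
        key=lambda cfg: cost(*cfg),
    )
    return best, cost(*best)

if __name__ == "__main__":
    mapping, score = explore()
    print("best mapping:", mapping, "estimated cost:", score)

A real explorer would use a calibrated analytical or simulated cost model and prune the space rather than enumerate it exhaustively; the sketch only shows the shape of the search loop.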
Retrieval augmented generation (RAG) is a process where a large language model (LLM) retrieves useful information from a database and then generates the responses. It is becoming popular in enterprise settings for daily business operations. …
External link:
http://arxiv.org/abs/2408.04870
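
The RAG pipeline sketched in this entry can be illustrated in a few lines of Python: retrieve the documents most similar to the query, splice them into the prompt, and hand the prompt to a generator. The toy document store, bag-of-words similarity, and the stub generate() below are illustrative assumptions, not the enterprise setup the paper studies.

from collections import Counter
import math

# Toy document store standing in for an enterprise knowledge base (made-up data).
DOCS = [
    "Expense reports must be filed within 30 days of travel.",
    "The VPN gateway address was rotated last quarter.",
    "Quarterly revenue figures are confidential until the earnings call.",
]

def bow(text):
    """Bag-of-words term counts, a stand-in for a learned embedding."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=2):
    """Rank documents by similarity to the query and return the top k."""
    q = bow(query)
    return sorted(DOCS, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]

def generate(prompt):
    """Stand-in for an LLM call; a real system would invoke a model here."""
    return f"[model response conditioned on]\n{prompt}"

def rag_answer(query):
    context = "\n".join(retrieve(query))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return generate(prompt)

print(rag_answer("When do expense reports have to be filed?"))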
Accelerators used for machine learning (ML) inference provide great performance benefits over CPUs. Securing a confidential model during inference against off-chip side-channel attacks is critical to harnessing the performance advantage in practice. …
External link:
http://arxiv.org/abs/2110.07157
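
A common way to blunt the off-chip side channels this entry refers to is to make the externally visible memory traffic independent of the data being processed. The sketch below pads each layer's off-chip read trace to a fixed worst-case budget with dummy fetches; the budget, addresses, and trace format are assumptions made up for illustration, not the paper's actual defense.

# Each inference layer issues some number of real off-chip reads that may depend
# on secret data (e.g., sparsity). Padding to a fixed budget with dummy reads
# makes the observable request count data-independent.
FIXED_BUDGET = 128          # assumed worst-case reads per layer
DUMMY_ADDR = 0xDEAD_0000    # address of a reserved scratch region (illustrative)

def shape_traffic(real_addrs):
    """Return a fixed-length trace: real reads first, then dummy reads."""
    if len(real_addrs) > FIXED_BUDGET:
        raise ValueError("budget must cover the worst case")
    return list(real_addrs) + [DUMMY_ADDR] * (FIXED_BUDGET - len(real_addrs))

# Two inputs with different data-dependent access counts produce identical
# trace lengths once shaped, so an off-chip observer learns nothing from volume.
trace_a = shape_traffic([0x1000 + 64 * i for i in range(40)])
trace_b = shape_traffic([0x2000 + 64 * i for i in range(97)])
assert len(trace_a) == len(trace_b) == FIXED_BUDGET

Real defenses also have to fix or randomize the address sequence and timing, not just the request volume, but the padding step conveys the core idea.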
Hardware-enclaves that target complex CPU designs compromise both security and performance. Programs have little control over micro-architecture, which leads to side-channel leaks, and then have to be transformed to have worst-case control- and data-flow …
External link:
http://arxiv.org/abs/2007.06751
Published in:
Digital Threats: Research & Practice; Jun2024, Vol. 5 Issue 2, p1-27, 27p