Showing 1 - 10 of 43 for the search: '"Rawat, Ambrish"'
Author:
Cornacchia, Giandomenico, Zizzo, Giulio, Fraser, Kieran, Hameed, Muhammad Zaid, Rawat, Ambrish, Purcell, Mark
The proliferation of Large Language Models (LLMs) in diverse applications underscores the pressing need for robust security measures to thwart potential jailbreak attacks. These attacks exploit vulnerabilities within LLMs, endanger data integrity and …
External link:
http://arxiv.org/abs/2409.17699
Author:
Rawat, Ambrish, Schoepf, Stefan, Zizzo, Giulio, Cornacchia, Giandomenico, Hameed, Muhammad Zaid, Fraser, Kieran, Miehling, Erik, Buesser, Beat, Daly, Elizabeth M., Purcell, Mark, Sattigeri, Prasanna, Chen, Pin-Yu, Varshney, Kush R.
As generative AI, particularly large language models (LLMs), becomes increasingly integrated into production applications, new attack surfaces and vulnerabilities emerge and put a focus on adversarial threats in natural language and multi-modal systems …
External link:
http://arxiv.org/abs/2409.15398
Author:
Achintalwar, Swapnaja, Garcia, Adriana Alvarado, Anaby-Tavor, Ateret, Baldini, Ioana, Berger, Sara E., Bhattacharjee, Bishwaranjan, Bouneffouf, Djallel, Chaudhury, Subhajit, Chen, Pin-Yu, Chiazor, Lamogha, Daly, Elizabeth M., DB, Kirushikesh, de Paula, Rogério Abreu, Dognin, Pierre, Farchi, Eitan, Ghosh, Soumya, Hind, Michael, Horesh, Raya, Kour, George, Lee, Ja Young, Madaan, Nishtha, Mehta, Sameep, Miehling, Erik, Murugesan, Keerthiram, Nagireddy, Manish, Padhi, Inkit, Piorkowski, David, Rawat, Ambrish, Raz, Orna, Sattigeri, Prasanna, Strobelt, Hendrik, Swaminathan, Sarathkrishna, Tillmann, Christoph, Trivedi, Aashka, Varshney, Kush R., Wei, Dennis, Witherspoon, Shalisha, Zalmanovici, Marcel
Large language models (LLMs) are susceptible to a variety of risks, from non-faithful output to biased and toxic generations. Due to several limiting factors surrounding LLMs (training cost, API access, data availability, etc.), it may not always be …
External link:
http://arxiv.org/abs/2403.06009
The recent breakthrough of Transformers in deep learning has drawn significant attention from the time series community due to their ability to capture long-range dependencies. However, like other deep learning models, Transformers face limitations in …
External link:
http://arxiv.org/abs/2401.06524
Training large language models (LLMs) is a costly endeavour in terms of time and computational resources. The large amount of training data used during the unsupervised pre-training phase makes it difficult to verify all data and, unfortunately, …
External link:
http://arxiv.org/abs/2312.07420
Author:
Kadhe, Swanand Ravindra, Ludwig, Heiko, Baracaldo, Nathalie, King, Alan, Zhou, Yi, Houck, Keith, Rawat, Ambrish, Purcell, Mark, Holohan, Naoise, Takeuchi, Mikio, Kawahara, Ryo, Drucker, Nir, Shaul, Hayim, Kushnir, Eyal, Soceanu, Omri
The effective detection of evidence of financial anomalies requires collaboration among multiple entities who own a diverse set of data, such as a payment network system (PNS) and its partner banks. Trust among these financial institutions is limited …
External link:
http://arxiv.org/abs/2310.19304
The wide applicability and adaptability of generative large language models (LLMs) has enabled their rapid adoption. While the pre-trained models can perform many tasks, such models are often fine-tuned to improve their performance on various downstream …
External link:
http://arxiv.org/abs/2306.09308
In this work, we devise robust and efficient learning protocols for orchestrating a Federated Learning (FL) process for the Federated Tumor Segmentation Challenge (FeTS 2022). Enabling FL for the FeTS setup is challenging mainly due to data heterogeneity …
External link:
http://arxiv.org/abs/2212.08290
With privacy legislation empowering users with the right to be forgotten, it has become essential to make a model amenable to forgetting some of its training data. However, existing unlearning methods in the machine learning context cannot be …
External link:
http://arxiv.org/abs/2207.05521
Machine unlearning refers to the task of removing a subset of training data, thereby removing its contributions to a trained model. Approximate unlearning methods are one class of approaches for this task which avoid the need to retrain the model from scratch …
External link:
http://arxiv.org/abs/2207.03227