Showing 1 - 10 of 36 results for search: '"Sitawarin, Chawin"'
Author:
Ding, Yangruibo, Fu, Yanjun, Ibrahim, Omniyyah, Sitawarin, Chawin, Chen, Xinyun, Alomair, Basel, Wagner, David, Ray, Baishakhi, Chen, Yizheng
In the context of the rising interest in code language models (code LMs) and vulnerability detection, we study the effectiveness of code LMs for detecting vulnerabilities. Our analysis reveals significant shortcomings in existing vulnerability datasets…
External link:
http://arxiv.org/abs/2403.18624
Large Language Models (LLMs) have surged in popularity in recent months, but they have demonstrated concerning capabilities to generate harmful content when manipulated. While techniques like safety fine-tuning aim to minimize harmful use, recent work…
External link:
http://arxiv.org/abs/2402.09674
Recent advances in Large Language Models (LLMs) enable exciting LLM-integrated applications, which perform text-based tasks by utilizing their advanced language understanding capabilities. However, as LLMs have improved, so have the attacks against them…
External link:
http://arxiv.org/abs/2402.06363
Author:
Piet, Julien, Alrashed, Maha, Sitawarin, Chawin, Chen, Sizhe, Wei, Zeming, Sun, Elizabeth, Alomair, Basel, Wagner, David
Large Language Models (LLMs) are attracting significant research attention due to their instruction-following abilities, allowing users and developers to leverage LLMs for a variety of tasks. However, LLMs are vulnerable to prompt-injection attacks…
External link:
http://arxiv.org/abs/2312.17673
The capabilities of large language models have grown significantly in recent years, and so too have concerns about their misuse. It is important to be able to distinguish machine-generated text from human-authored content. Prior works have proposed numerous…
External link:
http://arxiv.org/abs/2312.00273
Adversarial attacks have been a looming and unaddressed threat in the industry. However, through a decade-long history of the robustness evaluation literature, we have learned that mounting a strong or optimal attack is challenging. It requires both…
External link:
http://arxiv.org/abs/2310.17645
Existing works have made great progress in improving adversarial robustness, but typically test their method only on data from the same distribution as the training data, i.e., in-distribution (ID) testing. As a result, it is unclear how such robustness…
External link:
http://arxiv.org/abs/2310.12793
Author:
Shah, Kathan, Sitawarin, Chawin
We present a neural network architecture designed to naturally learn a positional embedding and overcome the spectral bias towards lower frequencies faced by conventional implicit neural representation networks. Our proposed architecture, SPDER, is a…
External link:
http://arxiv.org/abs/2306.15242
Machine learning models are known to be susceptible to adversarial perturbation. One famous attack is the adversarial patch, a sticker with a particularly crafted pattern that makes the model incorrectly predict the object it is placed on. This attack…
External link:
http://arxiv.org/abs/2212.05680
Decision-based attacks construct adversarial examples against a machine learning (ML) model by making only hard-label queries. These attacks have mainly been applied directly to standalone neural networks. However, in practice, ML models are just one…
External link:
http://arxiv.org/abs/2210.03297