Showing 1 - 10 of 491 for search: "PRAKASH, ATUL"
Author:
Zheng, Haizhong, Tsai, Elisa, Lu, Yifu, Sun, Jiachen, Bartoldson, Brian R., Kailkhura, Bhavya, Prakash, Atul
High-quality human-annotated data is crucial for modern deep learning pipelines, yet the human annotation process is both costly and time-consuming. Given a constrained human labeling budget, selecting an informative and representative data subset for …
External link:
http://arxiv.org/abs/2406.04273
Author:
Mangaokar, Neal, Hooda, Ashish, Choi, Jihye, Chandrashekaran, Shreyas, Fawaz, Kassem, Jha, Somesh, Prakash, Atul
Large language models (LLMs) are typically aligned to be harmless to humans. Unfortunately, recent work has shown that such models are susceptible to automated jailbreak attacks that induce them to generate harmful content. More recent LLMs often incorporate …
External link:
http://arxiv.org/abs/2402.15911
Author:
Jin, Shuowei, Wu, Yongji, Zheng, Haizhong, Zhang, Qingzhao, Lentz, Matthew, Mao, Z. Morley, Prakash, Atul, Qian, Feng, Zhuo, Danyang
Large language models (LLMs) have seen significant adoption for natural language tasks, owing their success to massive numbers of model parameters (e.g., 70B+); however, LLM inference incurs significant computation and memory costs. Recent approaches …
External link:
http://arxiv.org/abs/2402.12280
Author:
Zheng, Haizhong, Bai, Xiaoyan, Liu, Xueshen, Mao, Z. Morley, Chen, Beidi, Lai, Fan, Prakash, Atul
Published in:
Advances in Neural Information Processing Systems (NeurIPS) 2024
Large Language Models (LLMs) have achieved remarkable success with their billion-level parameters, yet they incur high inference overheads. The emergence of activation sparsity in LLMs provides a natural approach to reduce this cost by involving only …
External link:
http://arxiv.org/abs/2402.06126
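The snippet above hinges on skipping computation for neurons whose activations are zero. A minimal NumPy sketch of that general idea (an illustration, not this paper's method): in a ReLU feed-forward block, only the rows of the second weight matrix that correspond to nonzero activations contribute to the output.

import numpy as np

def dense_ffn(x, W1, W2):
    # Standard two-layer feed-forward block: ReLU(x @ W1) @ W2.
    return np.maximum(x @ W1, 0.0) @ W2

def sparse_ffn(x, W1, W2):
    # Same result, but the second matmul runs only over the rows of W2
    # whose hidden activations are nonzero; with highly sparse
    # activations, most rows are skipped entirely.
    h = np.maximum(x @ W1, 0.0)    # hidden activations (zero where pre-activation < 0)
    active = np.nonzero(h)[0]      # indices of activated neurons
    return h[active] @ W2[active]  # reduced matmul over active rows only

rng = np.random.default_rng(0)
x = rng.standard_normal(512)
W1 = rng.standard_normal((512, 2048))
W2 = rng.standard_normal((2048, 512))
assert np.allclose(dense_ffn(x, W1, W2), sparse_ffn(x, W1, W2))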
Author:
Zheng, Haizhong, Sun, Jiachen, Wu, Shutong, Kailkhura, Bhavya, Mao, Zhuoqing, Xiao, Chaowei, Prakash, Atul
Published in:
ECCV 2024
Given a real-world dataset, data condensation (DC) aims to synthesize a small synthetic dataset that captures the knowledge of a natural dataset while being usable for training models with comparable accuracy. Recent works propose to enhance DC with …
External link:
http://arxiv.org/abs/2310.07506
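To make the DC objective above concrete, here is a toy sketch assuming a simple mean-matching criterion (not this paper's algorithm): learn a few synthetic points per class whose mean matches the real class mean.

import numpy as np

def condense(X, y, per_class=10, steps=500, lr=0.1, seed=0):
    # Toy data condensation: learn `per_class` synthetic points per class
    # whose mean matches the real class mean (a crude distribution match;
    # practical DC methods match richer statistics, e.g. gradients or embeddings).
    rng = np.random.default_rng(seed)
    X_syn, y_syn = [], []
    for c in np.unique(y):
        real_mean = X[y == c].mean(axis=0)
        S = rng.standard_normal((per_class, X.shape[1]))  # synthetic points
        for _ in range(steps):
            # gradient of ||mean(S) - real_mean||^2 with respect to S
            grad = 2.0 * (S.mean(axis=0) - real_mean) / per_class
            S -= lr * grad
        X_syn.append(S)
        y_syn.append(np.full(per_class, c))
    return np.vstack(X_syn), np.concatenate(y_syn)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (500, 8)), rng.normal(3, 1, (500, 8))])
y = np.array([0] * 500 + [1] * 500)
X_syn, y_syn = condense(X, y)  # 20 synthetic points standing in for 1000 real ones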
Adversarial examples threaten the integrity of machine learning systems with alarming success rates even under constrained black-box conditions. Stateful defenses have emerged as an effective countermeasure, detecting potential attacks by maintaining …
External link:
http://arxiv.org/abs/2307.16331
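A stateful defense of this general kind keeps a record of past queries and flags near-duplicates, since query-based black-box attacks tend to issue many similar inputs. A generic sketch of that pattern (not this paper's specific detector; the threshold and buffer size are illustrative):

import numpy as np
from collections import deque

class StatefulDetector:
    # Keep a buffer of recent queries and flag any new query that lies
    # within `threshold` (L2 distance) of a past one.
    def __init__(self, threshold=0.5, buffer_size=1000):
        self.threshold = threshold
        self.buffer = deque(maxlen=buffer_size)

    def check(self, query):
        q = np.asarray(query, dtype=float).ravel()
        flagged = any(np.linalg.norm(q - past) < self.threshold
                      for past in self.buffer)
        self.buffer.append(q)        # record the query either way
        return flagged               # True = likely attack traffic

detector = StatefulDetector()
x = np.random.default_rng(1).standard_normal(64)
print(detector.check(x))           # False: nothing similar seen yet
print(detector.check(x + 1e-3))    # True: near-duplicate of a past query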
Perception is crucial in the realm of autonomous driving systems, where bird's eye view (BEV)-based architectures have recently reached state-of-the-art performance. The desirability of self-supervised representation learning stems from the expensive …
External link:
http://arxiv.org/abs/2306.00349
Recent work has proposed stateful defense models (SDMs) as a compelling strategy to defend against a black-box attacker who only has query access to the model, as is common for online machine learning platforms. Such stateful defenses aim to defend against …
External link:
http://arxiv.org/abs/2303.06280
One-shot coreset selection aims to select a representative subset of the training data, given a pruning rate, that can later be used to train future models while retaining high accuracy. State-of-the-art coreset selection methods pick the highest importance …
External link:
http://arxiv.org/abs/2210.15809
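The baseline referenced above amounts to ranking examples by a precomputed importance score and keeping the top fraction allowed by the pruning rate. A minimal sketch under that assumption (the scoring metric is left abstract; EL2N or forgetting counts are common choices):

import numpy as np

def one_shot_coreset(scores, pruning_rate):
    # Keep the (1 - pruning_rate) fraction of examples with the highest
    # importance scores; return their indices into the training set.
    scores = np.asarray(scores)
    keep = int(round(len(scores) * (1.0 - pruning_rate)))
    return np.argsort(scores)[::-1][:keep]

scores = np.random.default_rng(2).random(10_000)   # stand-in importance scores
subset = one_shot_coreset(scores, pruning_rate=0.9)
print(len(subset))  # 1000 examples retained at a 90% pruning rate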