Showing 1 - 10 of 18 for query: '"Pei, Hengzhi"'
Backdoor attacks have become a major security threat for machine learning models deployed in security-critical applications. Existing research has proposed many defenses against such attacks. Despite demonstrating certain empirical defense…
External link:
http://arxiv.org/abs/2311.11225
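To make the threat model concrete, below is a minimal, hypothetical sketch of the classic data-poisoning backdoor (illustrative background only, not the specific attack or defense studied in this paper): a fixed trigger is stamped onto a small fraction of training images, whose labels are flipped to an attacker-chosen target class.

import numpy as np

def poison(images, labels, target_class=0, rate=0.05, seed=0):
    # images: (N, H, W) array in [0, 1]; labels: (N,) integer classes.
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx, -3:, -3:] = 1.0   # fixed 3x3 corner trigger (assumed pattern)
    labels[idx] = target_class    # flip labels toward the target class
    return images, labels

A model trained on the poisoned set typically behaves normally on clean inputs but predicts target_class whenever the trigger is present, which is exactly the behavior backdoor defenses try to detect or remove.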
Authors:
Wang, Boxin, Chen, Weixin, Pei, Hengzhi, Xie, Chulin, Kang, Mintong, Zhang, Chenhui, Xu, Chejian, Xiong, Zidi, Dutta, Ritik, Schaeffer, Rylan, Truong, Sang T., Arora, Simran, Mazeika, Mantas, Hendrycks, Dan, Lin, Zinan, Cheng, Yu, Koyejo, Sanmi, Song, Dawn, Li, Bo
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners…
External link:
http://arxiv.org/abs/2306.11698
Pretrained code language models have enabled great progress towards program synthesis. However, common approaches consider only the in-file local context and thus miss information and constraints imposed by other parts of the codebase and its external dependencies…
External link:
http://arxiv.org/abs/2306.00381
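As a hypothetical illustration of the general idea, cross-file context can be as simple as retrieving snippets from sibling files and prepending them to the in-file prefix before querying a code language model; the sketch below uses naive lexical overlap for retrieval and is not this paper's method.

def build_prompt(infile_prefix, repo_files, query, top_k=2):
    # repo_files: dict mapping file path -> source text.
    # Rank other files by lexical overlap with the completion site.
    def overlap(text):
        return len(set(text.split()) & set(query.split()))
    ranked = sorted(repo_files.items(), key=lambda kv: overlap(kv[1]), reverse=True)
    cross_file = "\n".join(f"# From {path}\n{src}" for path, src in ranked[:top_k])
    return cross_file + "\n" + infile_prefix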
Humans can classify data of an unseen category by reasoning over its language explanations. This ability stems from the compositional nature of language: we can combine previously seen attributes to describe the new category. For example, we might describe…
External link:
http://arxiv.org/abs/2211.03252
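A toy sketch of the compositional idea (purely illustrative, not the paper's model): score an unseen category by combining previously learned per-attribute detectors that are mentioned in its language description.

def score_unseen(x, description, attribute_scorers):
    # attribute_scorers: dict mapping attribute name -> callable(x) -> score.
    mentioned = [a for a in attribute_scorers if a in description]
    if not mentioned:
        return 0.0
    return sum(attribute_scorers[a](x) for a in mentioned) / len(mentioned)

# e.g. score_unseen(img, "a striped four-legged animal", {"striped": f1, "four-legged": f2})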
Autoregressive generative models are commonly used, especially for tasks involving sequential data. They have, however, been plagued by a slew of inherent flaws due to the intrinsic characteristics of chain-style conditional modeling (e.g., exposure bias…
External link:
http://arxiv.org/abs/2206.12840
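For reference, chain-style conditional modeling refers to the standard autoregressive factorization

    p_\theta(x_{1:T}) = \prod_{t=1}^{T} p_\theta(x_t \mid x_{<t}),

under which generation must proceed step by step, and training (conditioned on ground-truth prefixes) diverges from inference (conditioned on the model's own samples), giving rise to exposure bias.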
Time series data generation has drawn increasing attention in recent years. Several generative adversarial network (GAN) based methods have been proposed to tackle the problem, usually under the assumption that the targeted time series data are well-formatted…
External link:
http://arxiv.org/abs/2111.08386
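For background, GAN-based time series generation follows the standard adversarial recipe; the sketch below shows a single training step with a small RNN generator (illustrative only, not the method proposed here).

import torch
import torch.nn as nn

G = nn.GRU(input_size=8, hidden_size=16, batch_first=True)  # noise -> sequence features
head = nn.Linear(16, 1)                                     # features -> series values
D = nn.Sequential(nn.Flatten(), nn.Linear(24, 1))           # scores a length-24 series

opt_g = torch.optim.Adam(list(G.parameters()) + list(head.parameters()), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(32, 24, 1)   # stand-in for a real minibatch of series
z = torch.randn(32, 24, 8)      # per-step noise
fake = head(G(z)[0])            # generated series, shape (32, 24, 1)

# Discriminator step: push real toward 1, fake toward 0.
d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: try to make the discriminator call fakes real.
g_loss = bce(D(fake), torch.ones(32, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()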
Authors:
Yang, Zhuolin, Zhao, Zhikuan, Wang, Boxin, Zhang, Jiawei, Li, Linyi, Pei, Hengzhi, Karlas, Bojan, Liu, Ji, Guo, Heng, Zhang, Ce, Li, Bo
Intensive algorithmic efforts have recently been made to enable rapid improvements in certified robustness for complex ML models. However, current robustness certification methods can only certify within a limited perturbation radius…
External link:
http://arxiv.org/abs/2003.00120
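For context, the best-known certificate of this kind comes from randomized smoothing (cited here only as standard background, not as this paper's method): if a classifier under Gaussian noise of level \sigma predicts the top class with probability at least p_A and any other class with probability at most p_B, the smoothed classifier is provably robust within the \ell_2 radius

    R = \frac{\sigma}{2}\left(\Phi^{-1}(p_A) - \Phi^{-1}(p_B)\right),

where \Phi^{-1} is the inverse standard normal CDF; the certifiable radius is thus limited by how confidently the top class can be estimated under noise.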
Portfolio management (PM) is a fundamental financial planning task that aims to achieve investment goals such as maximal profit or minimal risk. Its decision process involves the continuous derivation of valuable information from various data sources and…
External link:
http://arxiv.org/abs/2002.05780
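As a minimal illustration of the core bookkeeping in PM (generic, not this paper's algorithm): the decision variable is a per-period weight vector over assets, and the portfolio value compounds the weighted asset returns.

import numpy as np

def portfolio_value(weights, returns, v0=1.0):
    # weights: (T, n) allocations per period, each row summing to 1.
    # returns: (T, n) per-asset simple returns over the same periods.
    assert np.allclose(weights.sum(axis=1), 1.0)
    period_returns = (weights * returns).sum(axis=1)  # portfolio return per period
    return v0 * np.prod(1.0 + period_returns)         # compounded final value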
Adversarial attacks against natural language processing systems, which apply seemingly innocuous modifications to inputs, can induce arbitrary mistakes in the target models. Though they have raised great concerns, such adversarial attacks can be leveraged to…
External link:
http://arxiv.org/abs/1912.10375
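A toy sketch of the generic word-substitution attack family (illustrative only, not this paper's construction): greedily swap words for candidate synonyms whenever the swap lowers the target classifier's confidence in the true label.

def greedy_substitute(words, classify, synonyms):
    # classify(words) -> model's probability of the true label.
    # synonyms: dict mapping a word -> list of candidate replacements.
    words = list(words)
    for i, w in enumerate(words):
        best, best_p = w, classify(words)
        for s in synonyms.get(w, []):
            words[i] = s
            p = classify(words)
            if p < best_p:
                best, best_p = s, p
        words[i] = best
    return words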
This paper studies model-inversion attacks, in which access to a model is abused to infer information about the training data. Since their first introduction, such attacks have raised serious concerns given that training data usually contain privacy-sensitive information…
External link:
http://arxiv.org/abs/1911.07135
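A bare-bones sketch of the basic gradient-based inversion recipe (the classic white-box idea, not the specific attack studied in this paper): treat the input as a free variable and ascend the target class score.

import torch

def invert(model, target_class, shape=(1, 1, 28, 28), steps=200, lr=0.1):
    # Reconstruct a representative input for target_class by maximizing
    # its logit with gradient ascent on the input itself.
    x = torch.zeros(shape, requires_grad=True)
    opt = torch.optim.SGD([x], lr=lr)
    for _ in range(steps):
        loss = -model(x)[0, target_class]   # negate to maximize the logit
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)              # keep pixels in a valid range
    return x.detach()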