Showing 1 - 10 of 92 for search: '"Xu, Yuancheng"'
Author:
Xu, Yuancheng, Sehwag, Udari Madhushani, Koppel, Alec, Zhu, Sicheng, An, Bang, Huang, Furong, Ganesh, Sumitra
Large Language Models (LLMs) exhibit impressive capabilities but require careful alignment with human preferences. Traditional training-time methods finetune LLMs using human preference datasets but incur significant training costs and require repeat…
External link:
http://arxiv.org/abs/2410.08193
Data augmentation, a cornerstone technique in deep learning, is crucial in enhancing model performance, especially with scarce labeled data. While traditional techniques are effective, their reliance on hand-crafted methods limits their applicability…
External link:
http://arxiv.org/abs/2410.02512
Author:
An, Bang, Zhu, Sicheng, Zhang, Ruiyi, Panaitescu-Liess, Michael-Andrei, Xu, Yuancheng, Huang, Furong
Safety-aligned large language models (LLMs) sometimes falsely refuse pseudo-harmful prompts, like "how to kill a mosquito," which are actually harmless. Frequent false refusals not only frustrate users but also provoke a public backlash against the v…
External link:
http://arxiv.org/abs/2409.00598
Author:
Panaitescu-Liess, Michael-Andrei, Che, Zora, An, Bang, Xu, Yuancheng, Pathmanathan, Pankayaraj, Chakraborty, Souradip, Zhu, Sicheng, Goldstein, Tom, Huang, Furong
Large Language Models (LLMs) have demonstrated impressive capabilities in generating diverse and contextually rich text. However, concerns regarding copyright infringement arise as LLMs may inadvertently produce copyrighted material. In this paper, w…
External link:
http://arxiv.org/abs/2407.17417
Author:
Ding, Mucong, Xu, Yuancheng, Rabbani, Tahseen, Liu, Xiaoyu, Gravelle, Brian, Ranadive, Teresa, Tuan, Tai-Ching, Huang, Furong
Dataset condensation can be used to reduce the computational cost of training multiple models on a large dataset by condensing the training dataset into a small synthetic set. State-of-the-art approaches rely on matching the model gradients between t…
External link:
http://arxiv.org/abs/2405.17535
Author:
Xu, Yuancheng, Yao, Jiarui, Shu, Manli, Sun, Yanchao, Wu, Zichu, Yu, Ning, Goldstein, Tom, Huang, Furong
Vision-Language Models (VLMs) excel in generating textual responses from visual inputs, but their versatility raises security concerns. This study takes the first step in exposing VLMs' susceptibility to data poisoning attacks that can manipulate res…
External link:
http://arxiv.org/abs/2402.06659
Author:
Wang, Xiyao, Zhou, Yuhang, Liu, Xiaoyu, Lu, Hongjin, Xu, Yuancheng, He, Feihong, Yoon, Jaehong, Lu, Taixi, Bertasius, Gedas, Bansal, Mohit, Yao, Huaxiu, Huang, Furong
Multimodal Large Language Models (MLLMs) have demonstrated proficiency in handling a variety of visual-language tasks. However, current MLLM benchmarks are predominantly designed to evaluate reasoning based on static information about a single image…
External link:
http://arxiv.org/abs/2401.10529
Author:
An, Bang, Ding, Mucong, Rabbani, Tahseen, Agrawal, Aakriti, Xu, Yuancheng, Deng, Chenghao, Zhu, Sicheng, Mohamed, Abdirisak, Wen, Yuxin, Goldstein, Tom, Huang, Furong
In the burgeoning age of generative AI, watermarks act as identifiers of provenance and artificial content. We present WAVES (Watermark Analysis Via Enhanced Stress-testing), a benchmark for assessing image watermark robustness, overcoming the limita…
External link:
http://arxiv.org/abs/2401.08573
Representation learning assumes that real-world data is generated by a few semantically meaningful generative factors (i.e., sources of variation) and aims to discover them in the latent space. These factors are expected to be causally disentangled…
External link:
http://arxiv.org/abs/2310.17325
Author:
Xu, Yuancheng, Deng, Chenghao, Sun, Yanchao, Zheng, Ruijie, Wang, Xiyao, Zhao, Jieyu, Huang, Furong
Decisions made by machine learning models can have lasting impacts, making long-term fairness a critical consideration. It has been observed that ignoring the long-term effect and directly applying fairness criterion in static settings can actually w…
External link:
http://arxiv.org/abs/2309.03426