Showing 1 - 10 of 243 for search: '"Weijie J"'
Author:
He, Hangfeng, Su, Weijie J.
Large language models (LLMs) have been widely employed across various application domains, yet their black-box nature poses significant challenges to understanding how these models process input data internally to make predictions. In this paper, we…
External link:
http://arxiv.org/abs/2408.13442
Author:
Su, Buxin, Zhang, Jiayao, Collina, Natalie, Yan, Yuling, Li, Didong, Cho, Kyunghyun, Fan, Jianqing, Roth, Aaron, Su, Weijie J.
We conducted an experiment during the review process of the 2023 International Conference on Machine Learning (ICML) that requested authors with multiple submissions to rank their own papers based on perceived quality. We received 1,342 rankings, each…
External link:
http://arxiv.org/abs/2408.13430
Published in:
Quantum, Vol 7, p 1030 (2023)
Classical algorithms are often not effective for solving nonconvex optimization problems where local minima are separated by high barriers. In this paper, we explore possible quantum speedups for nonconvex optimization by leveraging the $global$ effect…
External link:
https://doaj.org/article/0eeef9851ed040caa877ffb388650a86
Author:
Jiang, Bowen, Xie, Yangxinyu, Hao, Zhuoqun, Wang, Xiaomeng, Mallick, Tanwi, Su, Weijie J., Taylor, Camillo J., Roth, Dan
This study introduces a hypothesis-testing framework to assess whether large language models (LLMs) possess genuine reasoning abilities or primarily depend on token bias. We go beyond evaluating LLMs on accuracy; rather, we aim to investigate their…
External link:
http://arxiv.org/abs/2406.11050
Training Deep Neural Networks (DNNs) with adversarial examples often results in poor generalization to test-time adversarial data. This paper investigates this issue, known as adversarially robust generalization, through the lens of Rademacher complexity…
External link:
http://arxiv.org/abs/2406.05372
Author:
Chiba-Okabe, Hiroaki, Su, Weijie J.
The rapid progress of generative AI technology has sparked significant copyright concerns, leading to numerous lawsuits filed against AI developers. While various techniques for mitigating copyright issues have been studied, significant risks remain.
External link:
http://arxiv.org/abs/2406.03341
Author:
Jiang, Bowen, Xie, Yangxinyu, Wang, Xiaomeng, Su, Weijie J., Taylor, Camillo J., Mallick, Tanwi
Rationality is the quality of being guided by reason, characterized by logical thinking and decision-making that align with evidence and logical rules. This quality is essential for effective problem-solving, as it ensures that solutions are well-founded…
External link:
http://arxiv.org/abs/2406.00252
Author:
Qi, Xiangyu, Huang, Yangsibo, Zeng, Yi, Debenedetti, Edoardo, Geiping, Jonas, He, Luxi, Huang, Kaixuan, Madhushani, Udari, Sehwag, Vikash, Shi, Weijia, Wei, Boyi, Xie, Tinghao, Chen, Danqi, Chen, Pin-Yu, Ding, Jeffrey, Jia, Ruoxi, Ma, Jiaqi, Narayanan, Arvind, Su, Weijie J, Wang, Mengdi, Xiao, Chaowei, Li, Bo, Song, Dawn, Henderson, Peter, Mittal, Prateek
The exposure of security vulnerabilities in safety-aligned language models, e.g., susceptibility to adversarial attacks, has shed light on the intricate interplay between AI safety and AI security. Although the two disciplines now come together under…
External link:
http://arxiv.org/abs/2405.19524
Accurately aligning large language models (LLMs) with human preferences is crucial for informing fair, economically sound, and statistically efficient decision-making processes. However, we argue that reinforcement learning from human feedback (RLHF)…
External link:
http://arxiv.org/abs/2405.16455
A recent study by De et al. (2022) has reported that large-scale representation learning through pre-training on a public dataset significantly enhances differentially private (DP) learning in downstream tasks, despite the high dimensionality of the…
External link:
http://arxiv.org/abs/2405.08920