Showing 1 - 10 of 487 for search: '"Wang, Junlin"'
In this paper, we study second-order algorithms for the convex-concave minimax problem, which has attracted much attention in many fields such as machine learning in recent years. We propose a Lipschitz-free cubic regularization (LF-CR) algorithm for …
External link:
http://arxiv.org/abs/2407.03571
Author:
Xie, Roy, Wang, Junlin, Huang, Ruomin, Zhang, Minxing, Ge, Rong, Pei, Jian, Gong, Neil Zhenqiang, Dhingra, Bhuwan
The rapid scaling of large language models (LLMs) has raised concerns about the transparency and fair use of the pretraining data used for training them. Detecting such content is challenging due to the scale of the data and limited exposure of each …
External link:
http://arxiv.org/abs/2406.15968
Generation and control of entanglement are fundamental tasks in quantum information processing. In this paper, we propose a novel approach to generate controllable frequency-entangled photons by using the concept of synthetic frequency dimension in a …
External link:
http://arxiv.org/abs/2406.07346
With the proliferation of LLM-integrated applications such as GPTs, millions are deployed, offering valuable services through proprietary instruction prompts. These systems, however, are prone to prompt extraction attacks through meticulously designed …
External link:
http://arxiv.org/abs/2406.06737
Author:
Wang, Junlin, Jain, Siddhartha, Zhang, Dejiao, Ray, Baishakhi, Kumar, Varun, Athiwaratkun, Ben
A diverse array of reasoning strategies has been proposed to elicit the capabilities of large language models. However, in this paper, we point out that traditional evaluations which focus solely on performance metrics miss a key factor: the increase …
External link:
http://arxiv.org/abs/2406.06461
Recent advances in large language models (LLMs) demonstrate substantial capabilities in natural language understanding and generation tasks. With the growing number of LLMs, how to harness the collective expertise of multiple LLMs is an exciting open …
External link:
http://arxiv.org/abs/2406.04692
Author:
Yang, Hongyang, Zhang, Boyu, Wang, Neng, Guo, Cheng, Zhang, Xiaoli, Lin, Likun, Wang, Junlin, Zhou, Tianyu, Guan, Mao, Zhang, Runjia, Wang, Christina Dan
As financial institutions and professionals increasingly incorporate Large Language Models (LLMs) into their workflows, substantial barriers, including proprietary data and specialized knowledge, persist between the finance sector and the AI community …
External link:
http://arxiv.org/abs/2405.14767
Large language models (LLMs) have significantly transformed the educational landscape. As current plagiarism detection tools struggle to keep pace with LLMs' rapid advancements, the educational community faces the challenge of assessing students' true …
External link:
http://arxiv.org/abs/2402.17916
Quantum many-body scars are a recently discovered phenomenon that weakly violates the eigenstate thermalization hypothesis, and they have been extensively studied across various models. However, experimental realizations are mainly based on constrained models such as …
External link:
http://arxiv.org/abs/2307.13297
Author:
Geleta, Margarita, Xu, Jiacen, Loya, Manikanta, Wang, Junlin, Singh, Sameer, Li, Zhou, Gago-Masague, Sergio
Although the prevention of AI vulnerabilities is critical to preserve the safety and privacy of users and businesses, educational tools for robust AI are still underdeveloped worldwide. We present the design, implementation, and assessment of Maestro …
External link:
http://arxiv.org/abs/2306.08238