Showing 1 - 10 of 779 for search: '"Wang Jindong"'
Published in:
Jixie qiangdu, pp. 527-533 (2023)
To address the abrupt change ("mutation") that arises when demodulating the envelope estimation function, a local mean decomposition method based on singular-interval envelope reconstruction is proposed. This method determines that the reason for the sudden change in the demodulation …
External link:
https://doaj.org/article/79ddc30e4f7341efafa8af9768052de3
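The first record describes local mean decomposition (LMD) for envelope estimation. As a rough, illustrative sketch only (the function name, the moving-average window, and the piecewise extrema scheme are assumptions here, not the paper's singular-interval reconstruction), one LMD-style local mean/envelope estimate could look like:

```python
import numpy as np

def local_mean_and_envelope(x, window=5):
    """One smoothing step in the spirit of local mean decomposition (LMD):
    estimate a piecewise local mean and envelope between successive extrema,
    then smooth both with a moving average."""
    # indices of local maxima and minima via sign changes of the derivative
    d = np.diff(np.sign(np.diff(x)))
    maxima = np.where(d < 0)[0] + 1
    minima = np.where(d > 0)[0] + 1
    extrema = np.sort(np.concatenate(([0], maxima, minima, [len(x) - 1])))
    # piecewise means and half-ranges between neighbouring extrema
    mean = np.zeros_like(x, dtype=float)
    env = np.zeros_like(x, dtype=float)
    for i, j in zip(extrema[:-1], extrema[1:]):
        mean[i:j + 1] = (x[i] + x[j]) / 2.0
        env[i:j + 1] = abs(x[i] - x[j]) / 2.0
    # moving-average smoothing of both curves
    kernel = np.ones(window) / window
    mean = np.convolve(mean, kernel, mode="same")
    env = np.convolve(env, kernel, mode="same")
    return mean, env
```

In full LMD this step would be iterated, subtracting the smoothed mean and dividing by the envelope until a purely frequency-modulated signal remains; the sketch shows only the single estimation pass.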
Author:
Zhu, Tingyuan, Liu, Shudong, Wang, Yidong, Wong, Derek F., Yu, Han, Shinozaki, Takahiro, Wang, Jindong
Constructing high-quality Supervised Fine-Tuning (SFT) datasets is critical for training large language models (LLMs). Recent studies have shown that using data from a specific source, Ruozhiba, a Chinese website where users ask "silly" questions …
External link:
http://arxiv.org/abs/2411.14121
Author:
Karinshak, Elise, Hu, Amanda, Kong, Kewen, Rao, Vishwanatha, Wang, Jingren, Wang, Jindong, Zeng, Yi
Immense effort has been dedicated to minimizing the presence of harmful or biased generative content and to better aligning AI output with human intention; however, research investigating the cultural values of LLMs is still in very early stages. Cultural …
External link:
http://arxiv.org/abs/2411.06032
Author:
Huang, Yue, Yuan, Zhengqing, Zhou, Yujun, Guo, Kehan, Wang, Xiangqi, Zhuang, Haomin, Sun, Weixiang, Sun, Lichao, Wang, Jindong, Ye, Yanfang, Zhang, Xiangliang
Large Language Models (LLMs) are increasingly employed for simulations, enabling applications in role-playing agents and Computational Social Science (CSS). However, the reliability of these simulations is under-explored, which raises concerns about …
External link:
http://arxiv.org/abs/2410.23426
Author:
Weng, Yixuan, Zhu, Minjun, Bao, Guangsheng, Zhang, Hongbo, Wang, Jindong, Zhang, Yue, Yang, Linyi
The automation of scientific discovery has been a long-standing goal within the research community, driven by the potential to accelerate knowledge creation. While significant progress has been made using commercial large language models (LLMs) as re…
External link:
http://arxiv.org/abs/2411.00816
Author:
Chen, Hao, Waheed, Abdul, Li, Xiang, Wang, Yidong, Wang, Jindong, Raj, Bhiksha, Abdin, Marah I.
The rise of Large Language Models (LLMs) has accentuated the need for diverse, high-quality pre-training data. Synthetic data emerges as a viable solution to the challenges of data scarcity and inaccessibility. While previous literature has focused primarily …
External link:
http://arxiv.org/abs/2410.15226
Understanding the creation, evolution, and dissemination of scientific knowledge is crucial for bridging diverse subject areas and addressing complex global challenges such as pandemics, climate change, and ethical AI. Scientometrics, the quantitative …
External link:
http://arxiv.org/abs/2410.09510
Mental health disorders are among the most serious diseases in the world. Most people with such a disease lack access to adequate care, which highlights the importance of training models for the diagnosis and treatment of mental health disorders. However, …
External link:
http://arxiv.org/abs/2410.06845
String processing, which mainly involves the analysis and manipulation of strings, is a fundamental component of modern computing. Despite the significant advancements of large language models (LLMs) in various natural language processing (NLP) tasks, …
External link:
http://arxiv.org/abs/2410.01208
Author:
Xu, Yijiang, Jia, Hongrui, Chen, Liguo, Wang, Xin, Zeng, Zhengran, Wang, Yidong, Gao, Qing, Wang, Jindong, Ye, Wei, Zhang, Shikun, Wu, Zhonghai
Fuzz testing is crucial for identifying software vulnerabilities, with coverage-guided grey-box fuzzers like AFL and Angora excelling in broad detection. However, as the need for targeted detection grows, directed grey-box fuzzing (DGF) has become essential …
External link:
http://arxiv.org/abs/2409.14329
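The last record contrasts coverage-guided grey-box fuzzers such as AFL and Angora with directed grey-box fuzzing (DGF). A minimal sketch of the coverage-guided feedback loop, with a toy byte-level mutator and a stand-in coverage interface (every name and interface here is illustrative, not AFL's or the paper's actual design):

```python
import random

def mutate(data: bytes, rng: random.Random) -> bytes:
    """Apply one random byte-level mutation: bit flip, insert, or delete."""
    buf = bytearray(data)
    choice = rng.randrange(3)
    pos = rng.randrange(len(buf)) if buf else 0
    if choice == 0 and buf:            # flip one random bit
        buf[pos] ^= 1 << rng.randrange(8)
    elif choice == 1:                  # insert a random byte
        buf.insert(pos, rng.randrange(256))
    elif buf:                          # delete one byte
        del buf[pos]
    return bytes(buf)

def fuzz(target, seed: bytes, iterations=1000, rng=None):
    """Coverage-guided loop: keep any input that reaches new coverage.
    `target(data)` is assumed to return a set of coverage identifiers."""
    rng = rng or random.Random(0)
    corpus = [seed]
    seen = set(target(seed))
    for _ in range(iterations):
        candidate = mutate(rng.choice(corpus), rng)
        cov = set(target(candidate))
        if cov - seen:                 # new branch reached: keep this input
            seen |= cov
            corpus.append(candidate)
    return corpus
```

Directed grey-box fuzzing replaces this "any new coverage is good" criterion with a distance metric toward chosen target sites, prioritizing seeds that get closer to them.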