Showing 1 - 10 of 816 for search: '"WU Yuhan"'
Author:
Chen, Shimao; Liu, Zirui; Wu, Zhiying; Zheng, Ce; Cong, Peizhuang; Jiang, Zihan; Wu, Yuhan; Su, Lei; Yang, Tong
As the foundation of large language models (LLMs), the self-attention module faces the challenge of quadratic time and memory complexity with respect to sequence length. FlashAttention accelerates attention computation and reduces its memory usage by leveraging …
External link:
http://arxiv.org/abs/2409.16997
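The quadratic cost this snippet refers to comes from the attention score matrix, which has one entry per query-key pair and therefore O(n^2) time and memory in the sequence length n; FlashAttention avoids materializing it by computing attention in tiles. Below is a minimal NumPy sketch of the naive computation only, not the kernel from the linked paper:

import numpy as np

def naive_attention(Q, K, V):
    # Scaled dot-product attention that materializes the full (n x n)
    # score matrix, hence quadratic time and memory in sequence length n.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                  # (n, n) -- the quadratic term
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                             # (n, d)

n, d = 1024, 64
Q, K, V = (np.random.randn(n, d) for _ in range(3))
out = naive_attention(Q, K, V)
print(out.shape)  # (1024, 64); the intermediate score matrix was (1024, 1024)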
Large language models (LLMs) have achieved impressive performance on code generation. Although prior studies enhanced LLMs with prompting techniques and code refinement, they still struggle with complex programming problems due to rigid solution plan…
External link:
http://arxiv.org/abs/2409.05001
We study the continuous-time pre-commitment mean-variance portfolio selection in a time-varying financial market. By introducing two indexes which respectively express the average profitability of the risky asset (AP) and the current profitability of …
External link:
http://arxiv.org/abs/2408.07969
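For context, the snippet describes a pre-commitment mean-variance problem; its generic textbook formulation over terminal wealth X_T (the paper's precise market model and its AP/CP indexes are not visible here) can be written in LaTeX as:

\min_{\pi}\ \operatorname{Var}\!\left(X_T^{\pi}\right)
\quad \text{s.t.} \quad \mathbb{E}\!\left[X_T^{\pi}\right] = z,
\qquad\text{or equivalently}\qquad
\max_{\pi}\ \mathbb{E}\!\left[X_T^{\pi}\right] - \frac{\gamma}{2}\operatorname{Var}\!\left(X_T^{\pi}\right),

where \pi is the admissible trading strategy, z the target expected terminal wealth, and \gamma > 0 the risk-aversion weight.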
Existing domain generalization (DG) methods for cross-person generalization tasks often face challenges in capturing intra- and inter-domain style diversity, resulting in domain gaps with the target domain. In this study, we explore a novel perspective …
External link:
http://arxiv.org/abs/2406.04609
Author:
Wu, Yuhan; Wu, Hanbo; Liu, Xilai; Zhao, Yikai; Yang, Tong; Yang, Kaicheng; Wang, Sha; Miao, Lihua; Xie, Gaogang
To approximate sums of values in key-value data streams, sketches are widely used in databases and networking systems. They offer high-confidence approximations for any given key while ensuring low time and space overhead. While existing sketches are …
External link:
http://arxiv.org/abs/2406.00376
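As a reference point for the kind of structure the snippet describes, a Count-Min sketch approximates per-key sums in a stream with a small 2-D counter array: collisions can only inflate counters, so the minimum over rows never underestimates the true sum. This is the generic textbook sketch, not the data structure proposed in the linked paper:

import random

class CountMinSketch:
    # Approximate per-key sums over a stream using depth rows of width counters.
    def __init__(self, width=2048, depth=4, seed=42):
        rng = random.Random(seed)
        self.width, self.depth = width, depth
        self.seeds = [rng.randrange(1 << 30) for _ in range(depth)]
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, row, key):
        return hash((self.seeds[row], key)) % self.width

    def update(self, key, value=1):
        for r in range(self.depth):
            self.table[r][self._index(r, key)] += value

    def query(self, key):
        # Minimum over rows: hash collisions only add to counters,
        # so the estimate is always >= the true sum.
        return min(self.table[r][self._index(r, key)] for r in range(self.depth))

cms = CountMinSketch()
for flow, nbytes in [("a", 10), ("b", 3), ("a", 7)]:
    cms.update(flow, nbytes)
print(cms.query("a"))  # >= 17; equal to 17 unless "a" collides in every row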
Recent years have witnessed the deployment of code language models (LMs) in various code intelligence tasks such as code completion. Yet, it is challenging for pre-trained LMs to generate correct completions in private repositories. Previous studies …
External link:
http://arxiv.org/abs/2405.19782
The challenge of estimating similarity between sets has been a significant concern in data science, finding diverse applications across various domains. However, previous approaches, such as MinHash, have predominantly centered around hashing techniques …
External link:
http://arxiv.org/abs/2405.19711
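For orientation, MinHash, the baseline named in the snippet, estimates the Jaccard similarity of two sets as the fraction of hash functions whose minimum value over the set agrees. A short sketch of that classic scheme, not the method proposed in the linked paper:

import random

def minhash_signature(items, num_hashes=128, seed=7):
    # One minimum per hash function; agreeing minima across two signatures
    # occur with probability equal to the Jaccard similarity of the sets.
    rng = random.Random(seed)
    seeds = [rng.randrange(1 << 30) for _ in range(num_hashes)]
    return [min(hash((s, x)) for x in items) for s in seeds]

def estimated_jaccard(sig_a, sig_b):
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

A = set(range(0, 100))
B = set(range(50, 150))
print(estimated_jaccard(minhash_signature(A), minhash_signature(B)))  # near 1/3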
Author:
Li, Hanlong; Wang, Pei; Wu, Yuhan; Ren, Jing; Gao, Yuhang; Zhang, Lingyun; Zhang, Mingtai; Chen, Wenxin
Wood-leaf classification is an essential and fundamental prerequisite in the analysis and estimation of forest attributes from terrestrial laser scanning (TLS) point clouds, including critical measurements such as diameter at breast height (DBH), above-…
External link:
http://arxiv.org/abs/2405.18737
While one-dimensional convolutional neural networks (1D-CNNs) have been empirically proven effective in time series classification tasks, we find that there remain undesirable outcomes that could arise in their application, motivating us to further i…
External link:
http://arxiv.org/abs/2310.05467
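To make the object of study concrete: a 1D-CNN for time series classification slides a small kernel along the time axis, so each output feature responds to a local temporal pattern. Below is a minimal NumPy sketch of one valid-padding convolution plus ReLU, the basic building block; the paper's architecture and the behaviors it analyzes are not shown in this snippet:

import numpy as np

def conv1d_valid(x, kernel, bias=0.0):
    # Single-channel 1-D convolution with 'valid' padding followed by ReLU.
    k = len(kernel)
    out = np.array([np.dot(x[i:i + k], kernel) + bias
                    for i in range(len(x) - k + 1)])
    return np.maximum(out, 0.0)

series = np.sin(np.linspace(0, 6 * np.pi, 100))  # toy univariate time series
edge_detector = np.array([-1.0, 0.0, 1.0])       # responds to local increases
features = conv1d_valid(series, edge_detector)
print(features.shape)  # (98,)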
Author:
Wang, Zekun Moore; Peng, Zhongyuan; Que, Haoran; Liu, Jiaheng; Zhou, Wangchunshu; Wu, Yuhan; Guo, Hongcheng; Gan, Ruitong; Ni, Zehao; Yang, Jian; Zhang, Man; Zhang, Zhaoxiang; Ouyang, Wanli; Xu, Ke; Huang, Stephen W.; Fu, Jie; Peng, Junran
The advent of Large Language Models (LLMs) has paved the way for complex tasks such as role-playing, which enhances user interactions by enabling models to imitate various characters. However, the closed-source nature of state-of-the-art LLMs and the…
External link:
http://arxiv.org/abs/2310.00746