Showing 1 - 10 of 76,889 for search: '"YANG, LIU"'
Author:
Yang, Liu, Paischer, Fabian, Hassani, Kaveh, Li, Jiacheng, Shao, Shuai, Li, Zhang Gabriel, He, Yun, Feng, Xue, Noorshams, Nima, Park, Sem, Long, Bo, Nowak, Robert D, Gao, Xiaoli, Eghbalzadeh, Hamid
Sequential dense retrieval models utilize advanced sequence learning techniques to compute item and user representations, which are then used to rank relevant items for a user through inner product computation between the user and all item representations…
External link:
http://arxiv.org/abs/2411.18814
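As a rough illustration of the inner-product ranking described in the abstract above, a minimal sketch in Python (not the authors' implementation; the embedding dimension, catalog size, and random vectors are placeholders standing in for encoder outputs):

import numpy as np

# Hypothetical setup: one user vector and a catalog of item vectors, both assumed
# to come from some sequence encoder (not shown here).
d = 64                                  # embedding dimension (assumed)
user_vec = np.random.randn(d)           # user representation
item_vecs = np.random.randn(10_000, d)  # all item representations

# Rank items by the inner product between the user vector and every item vector.
scores = item_vecs @ user_vec           # shape: (10_000,)
top_k = np.argsort(-scores)[:10]        # indices of the 10 highest-scoring items
print(top_k)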
In-Context Operator Networks (ICONs) are models that learn operators across different types of PDEs using a few-shot, in-context approach. Although they show successful generalization to various PDEs, existing methods treat each data point as a single…
External link:
http://arxiv.org/abs/2411.16063
We introduce Xmodel-1.5, a 1-billion-parameter multilingual large language model pretrained on 2 trillion tokens, designed for balanced performance and scalability. Unlike most large models that use the BPE tokenizer, Xmodel-1.5 employs a custom unigram…
External link:
http://arxiv.org/abs/2411.10083
Topological phases in frustrated quantum magnetic systems have captivated researchers for decades, with the chiral spin liquid (CSL) standing out as one of the most compelling examples. Featuring long-range entanglement, topological order, and exotic…
External link:
http://arxiv.org/abs/2411.08121
Radio Frequency Fingerprint Identification (RFFI) technology uniquely identifies emitters by analyzing unique distortions in the transmitted signal caused by non-ideal hardware. Recently, RFFI based on deep learning methods has gained popularity and…
External link:
http://arxiv.org/abs/2411.03636
Author:
He, Yun, Chen, Xuxing, Xu, Jiayi, Cai, Renqin, You, Yiling, Cao, Jennifer, Huang, Minhui, Yang, Liu, Liu, Yiqun, Liu, Xiaoyi, Jin, Rong, Park, Sem, Long, Bo, Feng, Xue
In industrial recommendation systems, multi-task learning (learning multiple tasks simultaneously on a single model) is a predominant approach to save training/serving resources and improve recommendation performance via knowledge transfer between the…
External link:
http://arxiv.org/abs/2411.11871
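As a generic illustration of the "single model, multiple tasks" setup mentioned in the abstract above, a minimal sketch (this is not the paper's architecture; the layer sizes and task names are assumed):

import torch
import torch.nn as nn

class SharedBackboneMultiTask(nn.Module):
    """Toy multi-task model: one shared encoder plus one small head per task."""
    def __init__(self, in_dim=128, hidden=64, tasks=("click", "purchase")):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleDict({t: nn.Linear(hidden, 1) for t in tasks})

    def forward(self, x):
        shared = self.backbone(x)  # representation shared by all tasks
        return {t: head(shared) for t, head in self.heads.items()}

model = SharedBackboneMultiTask()
out = model(torch.randn(4, 128))   # per-task logits for a batch of 4 examples
print({task: logits.shape for task, logits in out.items()})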
Author:
Tian, Felix, Byadgi, Ajay, Kim, Daniel, Zha, Daochen, White, Matt, Xiao, Kairong, Yanglet, Xiao-Yang Liu
Published in:
5th ACM International Conference on AI in Finance, 2024
Current large language models (LLMs) have proven useful for analyzing financial data, but most existing models, such as BloombergGPT and FinGPT, lack customization for specific user needs. In this paper, we address this gap by developing FinGPT Search…
External link:
http://arxiv.org/abs/2410.15284
Author:
Xiong, Zheyang, Cai, Ziyang, Cooper, John, Ge, Albert, Papageorgiou, Vasilis, Sifakis, Zack, Giannou, Angeliki, Lin, Ziqian, Yang, Liu, Agarwal, Saurabh, Chrysos, Grigorios G, Oymak, Samet, Lee, Kangwook, Papailiopoulos, Dimitris
Large Language Models (LLMs) have demonstrated remarkable in-context learning (ICL) capabilities. In this study, we explore a surprising phenomenon related to ICL: LLMs can perform multiple, computationally distinct ICL tasks simultaneously, during a…
External link:
http://arxiv.org/abs/2410.05603
In the present paper, a multi-frequency optical non-reciprocal transmission is first realized by using a non-Hermitian multi-mode resonator array. We find that the non-reciprocity can be used to route optical signals, to prevent the reverse flow of noise…
External link:
http://arxiv.org/abs/2409.02000
The pursuit of a "ferroelectric metal", which combines seemingly incompatible spontaneous electric polarization and metallicity, has been assiduously ongoing but remains elusive. Unlike traditional ferroelectrics with a wide band gap, ferroelectric (FE)…
External link:
http://arxiv.org/abs/2408.01982