Showing 1 - 10 of 5,026 for search: '"Shen, Tao"'
Author:
Jiang, Zhonghua, Xu, Jimin, Zhang, Shengyu, Shen, Tao, Li, Jiwei, Kuang, Kun, Cai, Haibin, Wu, Fei
Federated learning (FL) is a promising technology for data privacy and distributed optimization, but it suffers from data imbalance and heterogeneity among clients. Existing FL methods try to solve these problems by aligning the client model with the server model or …
External link:
http://arxiv.org/abs/2412.18904
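The alignment this snippet starts to describe is often implemented as a proximal penalty that keeps each client's local model close to the broadcast server model (as in FedProx); this is a minimal sketch of that general idea, not the paper's method, and the function name, mu, and training objects are illustrative:

import torch

def local_step(model, global_params, batch, loss_fn, optimizer, mu=0.01):
    # One local client update; global_params are detached copies of the
    # server model's parameters broadcast at the start of the round.
    x, y = batch
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    # Proximal term (mu/2) * ||w_client - w_server||^2 pulls the client
    # model toward the server model, limiting drift under heterogeneity.
    prox = sum((p - g).pow(2).sum() for p, g in zip(model.parameters(), global_params))
    (loss + 0.5 * mu * prox).backward()
    optimizer.step()

Each round, clients run several such steps from the server weights and the server fuses the returned models.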
Remote-sensing mineral exploration is critical for identifying economically viable mineral deposits, yet it poses significant challenges for multimodal large language models (MLLMs). These include limitations in domain-specific geological knowledge and …
External link:
http://arxiv.org/abs/2412.17339
Large language models (LLMs) have revolutionized natural language processing by achieving state-of-the-art performance across various tasks. Recently, their effectiveness as embedding models has gained attention, marking a paradigm shift from traditional …
External link:
http://arxiv.org/abs/2412.12591
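As background for the embedding-model shift the snippet mentions: a decoder-only LLM can be repurposed as an embedding model by masked mean-pooling of its last hidden states. A sketch using Hugging Face transformers, where the "gpt2" backbone is an arbitrary illustrative choice:

import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token  # gpt2 ships without a pad token
model = AutoModel.from_pretrained("gpt2")

@torch.no_grad()
def embed(texts):
    # Mean-pool the last hidden states over non-padding tokens,
    # yielding one fixed-size, L2-normalized vector per input text.
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = model(**batch).last_hidden_state        # (B, T, d)
    mask = batch["attention_mask"].unsqueeze(-1)     # (B, T, 1)
    emb = (hidden * mask).sum(1) / mask.sum(1)
    return torch.nn.functional.normalize(emb, dim=-1)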
In psychotherapy, therapeutic outcome assessment, or treatment outcome evaluation, is essential for enhancing mental health care by systematically evaluating therapeutic processes and outcomes. Existing large language model approaches often focus on …
External link:
http://arxiv.org/abs/2410.05824
Low-Rank Adaptation (LoRA) has emerged as a popular technique for fine-tuning large language models (LLMs) for various domains due to its modular design and widespread availability on platforms like Huggingface. This modularity has sparked interest in …
External link:
http://arxiv.org/abs/2409.16167
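The modular design the snippet credits to LoRA comes from freezing the base weight W and learning a low-rank update BA beside it, so the adapter can be shipped and swapped independently. A minimal sketch of that idea (the rank and scaling values are illustrative defaults):

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    # y = W x + (alpha/r) * B A x, with W frozen and only A, B trainable.
    def __init__(self, base: nn.Linear, r=8, alpha=16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no update at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

Wrapping, say, the attention projections of a frozen LLM with such layers is what makes a LoRA a small, plug-and-play artifact.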
There is a fast-growing literature on estimating optimal treatment rules directly by maximizing the expected outcome. In biomedical studies and operations applications, a censored survival outcome is frequently observed, in which case the restricted mean …
External link:
http://arxiv.org/abs/2408.09155
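For context, the restricted mean survival time the snippet begins to name is the expected survival time truncated at a pre-specified horizon, and treatment rules are then valued by the RMST they induce. These are the standard definitions, not quoted from the paper:

% RMST at horizon tau, value of a rule d, and the optimal rule
\mu(\tau) = \mathbb{E}\,[\min(T, \tau)] = \int_0^{\tau} S(t)\, dt,
\qquad
V(d) = \mathbb{E}\big[\min\{T(d(X)), \tau\}\big],
\qquad
d^{*} = \arg\max_{d} V(d),

where T is the survival time with survival function S, \tau the truncation horizon, and T(a) the potential survival time under treatment a.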
Author:
Zhao, Ziyu, Gan, Leilei, Wang, Guoyin, Hu, Yuwei, Shen, Tao, Yang, Hongxia, Kuang, Kun, Wu, Fei
Low-Rank Adaptation (LoRA) offers an efficient way to fine-tune large language models (LLMs). Its modular and plug-and-play nature allows the integration of various domain-specific LoRAs, enhancing LLM capabilities. Open-source platforms like Huggingface …
External link:
http://arxiv.org/abs/2406.16989
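The integration of multiple domain-specific LoRAs that this snippet describes is commonly realized by linearly combining their low-rank updates on a shared base weight; a sketch under that assumption, with illustrative mixing weights and shapes:

import torch

def compose_loras(adapters, weights):
    # Each adapter contributes Delta_i = B_i @ A_i; the composed update
    # is the weighted sum of the Delta_i over the same base matrix.
    return sum(w * (B @ A) for w, (B, A) in zip(weights, adapters))

# Usage: mix two rank-8 adapters for a 512x512 layer, 70/30.
d = 512
adapters = [(torch.randn(d, 8), torch.randn(8, d)) for _ in range(2)]
merged_weight = torch.randn(d, d) + compose_loras(adapters, [0.7, 0.3])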
Author:
Chen, Qi, Geng, Xiubo, Rosset, Corby, Buractaon, Carolyn, Lu, Jingwen, Shen, Tao, Zhou, Kun, Xiong, Chenyan, Gong, Yeyun, Bennett, Paul, Craswell, Nick, Xie, Xing, Yang, Fan, Tower, Bryan, Rao, Nikhil, Dong, Anlei, Jiang, Wenqi, Liu, Zheng, Li, Mingqin, Liu, Chuanjie, Li, Zengzhong, Majumder, Rangan, Neville, Jennifer, Oakley, Andy, Risvik, Knut Magne, Simhadri, Harsha Vardhan, Varma, Manik, Wang, Yujing, Yang, Linjun, Yang, Mao, Zhang, Ce
Recent breakthroughs in large models have highlighted the critical significance of data scale, labels, and modalities. In this paper, we introduce MS MARCO Web Search, the first large-scale information-rich web dataset, featuring millions of real clicked …
External link:
http://arxiv.org/abs/2405.07526
Recently, foundation models, particularly large language models (LLMs), have demonstrated an impressive ability to adapt to various tasks by fine-tuning on diverse instruction data. Notably, federated foundation models (FedFM) have emerged as a privacy-preserving …
External link:
http://arxiv.org/abs/2403.19211
Federated learning (FL) involves multiple heterogeneous clients collaboratively training a global model via iterative local updates and model fusion. The generalization of FL's global model shows a large gap compared with centralized training, which is …
External link:
http://arxiv.org/abs/2402.18949
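The "model fusion" step in the loop this snippet describes is, in its simplest form, FedAvg-style averaging of client parameters weighted by local data size. A sketch assuming clients return plain state_dicts of floating-point tensors:

def fuse(client_states, client_sizes):
    # Weighted average of parameters: client k gets weight n_k / sum(n).
    total = sum(client_sizes)
    return {
        key: sum(state[key] * (n / total)
                 for state, n in zip(client_states, client_sizes))
        for key in client_states[0]
    }

The generalization gap the snippet points to arises because this average is taken over heterogeneous local optima rather than over pooled data.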