Showing 1 - 10 of 2,628 for search: '"Yan, Ran"'
Training large language models (LLMs) is a computationally intensive task that is typically conducted in data centers with homogeneous high-performance GPUs. This paper explores an alternative approach by deploying the training computation across het
External link:
http://arxiv.org/abs/2409.01143
Author:
Zhang, Shufan, Ma, Minda, Zhou, Nan, Yan, Jinyue, Feng, Wei, Yan, Ran, You, Kairui, Zhang, Jingjing, Ke, Jing
Buildings produce one-third of global carbon emissions; however, the absence of data on global floorspace poses challenges in advancing building carbon neutrality. We compile measured building stocks for 14 major economies and apply our global
External link:
http://arxiv.org/abs/2406.04074
Federated learning (FL) allows multiple devices to train a model collaboratively without sharing their data. Despite its benefits, FL is vulnerable to privacy leakage and poisoning attacks. To address the privacy concern, secure aggregation (SecAgg)
External link:
http://arxiv.org/abs/2405.15182
This study explores the historical emission patterns and decarbonization efforts of China and India, the largest emerging emitters in residential building operations. Using a novel carbon intensity model and a structural decomposition approach, it asse
External link:
http://arxiv.org/abs/2407.01564
Assessing the emissions of plug-in hybrid electric vehicle (PHEV) operations is crucial for accelerating the carbon-neutral transition in the passenger car sector. This study is the first to adopt a bottom-up model to measure the real-world energy us
External link:
http://arxiv.org/abs/2405.07308
State-of-the-art large language models (LLMs) are commonly deployed as online services, requiring users to transmit informative prompts to cloud servers and thus raising substantial privacy concerns. In response, we present ConfusionPrompt, a no
External link:
http://arxiv.org/abs/2401.00870
Serving generative inference for large language models is a crucial component of contemporary AI applications. This paper focuses on deploying such services in a heterogeneous and cross-datacenter setting to mitigate the substantial inference costs
External link:
http://arxiv.org/abs/2311.11514
Large Language Models (LLMs) excel in natural language understanding by capturing hidden semantics in vector space. This process enriches the value of text embeddings for various downstream tasks, thereby fostering the Embedding-as-a-Service (EaaS) b
External link:
http://arxiv.org/abs/2310.09130
Author:
Song, Yanjie, Wu, Yutong, Guo, Yangyang, Yan, Ran, Suganthan, P. N., Zhang, Yue, Pedrycz, Witold, Das, Swagatam, Mallipeddi, Rammohan, Ajani, Oladayo Solomon, Feng, Qiang
Evolutionary algorithms (EAs), a class of stochastic search methods based on the principles of natural evolution, have received widespread acclaim for their exceptional performance on various real-world optimization problems. While researchers worldwi
External link:
http://arxiv.org/abs/2308.13420