Showing 1 - 10 of 147 for search: '"Wang, Jixuan"'
Author:
Feng, Tiantian, Ramakrishna, Anil, Majmudar, Jimit, Peris, Charith, Wang, Jixuan, Chung, Clement, Zemel, Richard, Ziyadi, Morteza, Gupta, Rahul
Federated Learning (FL) is a popular algorithm to train machine learning models on user data constrained to edge devices (for example, mobile phones) due to privacy concerns. Typically, FL is trained with the assumption that no part of the user data …
External link:
http://arxiv.org/abs/2403.01615
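The snippet above describes the standard FL setup: raw data stays on each edge device and only model parameters leave it. A minimal FedAvg-style sketch of that setup in Python (synthetic least-squares data; a generic illustration, not this paper's method, and all names and hyperparameters are made up):

import numpy as np

def local_update(weights, client_data, lr=0.1):
    # One SGD step on a client's private (X, y) shard; the data never leaves.
    X, y = client_data
    grad = X.T @ (X @ weights - y) / len(y)  # gradient of 0.5*||Xw - y||^2 / n
    return weights - lr * grad

def fedavg_round(weights, clients):
    # Server-side step: average the locally updated weights (unweighted FedAvg).
    return np.mean([local_update(weights.copy(), d) for d in clients], axis=0)

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []  # three clients, each holding its own private shard
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ w_true + 0.01 * rng.normal(size=50)))

w = np.zeros(2)
for _ in range(100):
    w = fedavg_round(w, clients)
print(w)  # approaches w_true although no client ever shared raw data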
Author:
Good, Jack, Majmudar, Jimit, Dupuy, Christophe, Wang, Jixuan, Peris, Charith, Chung, Clement, Zemel, Richard, Gupta, Rahul
Continual Federated Learning (CFL) combines Federated Learning (FL), the decentralized learning of a central model on a number of client devices that may not communicate their data, and Continual Learning (CL), the learning of a model from a continual stream of data …
External link:
http://arxiv.org/abs/2310.15054
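To make the FL + CL combination concrete, here is a minimal sketch (again synthetic and generic, not the algorithm of this paper): tasks arrive one at a time, and within each task the clients run ordinary federated-averaging rounds on private shards of that task's data:

import numpy as np

def local_step(w, X, y, lr=0.1):
    # One local least-squares SGD step on a client's private shard.
    return w - lr * X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(1)
w = np.zeros(2)
tasks = [np.array([1.0, 1.0]), np.array([2.0, -1.0])]  # two sequential tasks

for w_task in tasks:          # CL: a new task replaces the old one
    shards = []               # FL: each client keeps its shard local
    for _ in range(3):
        X = rng.normal(size=(40, 2))
        shards.append((X, X @ w_task + 0.01 * rng.normal(size=40)))
    for _ in range(50):       # federated rounds within the current task
        w = np.mean([local_step(w.copy(), X, y) for X, y in shards], axis=0)
    print(w)                  # tracks the current task only

Run naively like this, the model converges to each new task and forgets the previous one, which is the failure mode CFL methods are designed to mitigate.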
It is challenging to extract semantic meanings directly from audio signals in spoken language understanding (SLU), due to the lack of textual information. Popular end-to-end (E2E) SLU models utilize sequence-to-sequence automatic speech recognition (ASR) …
External link:
http://arxiv.org/abs/2305.02937
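A toy sketch of the E2E setup the snippet describes (PyTorch; the dimensions and heads are invented for illustration and are not the paper's architecture): one acoustic encoder consumes audio features directly, an auxiliary seq2seq/ASR-style head produces token logits, and an intent head predicts the semantic label without a separate text pipeline:

import torch
import torch.nn as nn

class ToyE2ESLU(nn.Module):
    def __init__(self, feat_dim=80, hidden=256, vocab=1000, n_intents=10):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.asr_head = nn.Linear(hidden, vocab)         # auxiliary ASR logits
        self.intent_head = nn.Linear(hidden, n_intents)  # semantic prediction

    def forward(self, audio_feats):                      # (batch, time, feat_dim)
        states, _ = self.encoder(audio_feats)
        asr_logits = self.asr_head(states)               # per-frame token logits
        intent_logits = self.intent_head(states.mean(dim=1))  # utterance level
        return asr_logits, intent_logits

model = ToyE2ESLU()
feats = torch.randn(4, 120, 80)  # e.g. a batch of log-mel filterbank features
asr_logits, intent_logits = model(feats)
print(asr_logits.shape, intent_logits.shape)  # (4, 120, 1000) and (4, 10)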
Author:
Wang, Jixuan, Qiao, Deli
In this paper, the minimization of the weighted sum average age of information (AoI) in a multi-source status update communication system is studied. Multiple independent sources send update packets to a common destination node in a time-slotted manner …
External link:
http://arxiv.org/abs/2205.03143
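In the standard formulation (reconstructed from the usual AoI literature; the paper's exact notation may differ), the age of source m grows by one each slot and resets when one of its updates is delivered:

\Delta_m(t+1) =
  \begin{cases}
    t + 1 - g_m(t), & \text{if an update from source } m \text{ is delivered in slot } t,\\
    \Delta_m(t) + 1, & \text{otherwise,}
  \end{cases}

where g_m(t) is the generation time of the delivered packet. The weighted sum average AoI objective is then

\min \; \sum_{m=1}^{M} w_m \, \limsup_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} \mathbb{E}\left[ \Delta_m(t) \right],

with w_m the weight of source m among the M sources.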
Published in:
In Communications in Nonlinear Science and Numerical Simulation, October 2024, 137
Author:
Li, Ying, Xu, Haokai, Lan, Xiaozhen, Wang, Jixuan, Su, Xiaoming, Bai, Xiaoping, Via, Brian K., Pei, Zhiyong
Published in:
In Renewable Energy, September 2024, 230
As large and powerful neural language models are developed, researchers have been increasingly interested in developing diagnostic tools to probe them. There are many papers with conclusions of the form "observation X is found in model Y", using their …
External link:
http://arxiv.org/abs/2202.12801
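A minimal example of the kind of diagnostic probe the snippet refers to (a generic sketch with synthetic stand-ins for frozen model representations; not this paper's experimental setup): train a small classifier on top of fixed hidden states, and treat high probe accuracy as evidence that the probed property is encoded:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-ins for frozen hidden states of "model Y" and labels for "property X".
reprs = rng.normal(size=(1000, 768))  # e.g. one vector per sentence
labels = (reprs[:, 0] + 0.1 * rng.normal(size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(reprs, labels, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"probe accuracy: {probe.score(X_te, y_te):.2f}")  # high => linearly decodable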
Large pretrained language models (LMs) like BERT have improved performance in many disparate natural language processing (NLP) tasks. However, fine-tuning such models requires a large number of training examples for each target task. Simultaneously, …
External link:
http://arxiv.org/abs/2201.11576
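For reference, the standard single-task fine-tuning whose data requirements the snippet highlights looks roughly like this (a sketch against the Hugging Face transformers API; the checkpoint, the two-example "dataset", and the hyperparameters are placeholders, and real use needs many labeled examples per task):

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

texts = ["great movie", "terrible plot"]  # in practice: thousands of examples
labels = torch.tensor([1, 0])
batch = tok(texts, padding=True, return_tensors="pt")

opt = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):                       # a few gradient steps on the target task
    out = model(**batch, labels=labels)  # cross-entropy on the classification head
    out.loss.backward()
    opt.step()
    opt.zero_grad()
print(float(out.loss))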
Published in:
In Journal of Water Process Engineering, August 2024, 65
Author:
Wang, Jixuan, Wen, Yujing, Wu, Kegui, Ding, Shuai, Liu, Yang, Tian, Hao, Zhang, Jihua, Wang, Liying, Cao, Qingjiao, Zhang, Yunxin
Published in:
In Journal of Energy Storage, 15 July 2024, 93