Showing 1 - 10 of 241 for search: '"Motta, Giovanni"'
Author:
Zhou, Lillian, Ding, Yuxin, Chen, Mingqing, Zhang, Harry, Prabhavalkar, Rohit, Guliani, Dhruv, Motta, Giovanni, Mathews, Rajiv
Automatic speech recognition (ASR) models are typically trained on large datasets of transcribed speech. As language evolves and new terms come into use, these models can become outdated and stale. In the context of models trained on the server but …
External link:
http://arxiv.org/abs/2310.00141
Author:
Lin, Rongmei, Xiao, Yonghui, Yang, Tien-Ju, Zhao, Ding, Xiong, Li, Motta, Giovanni, Beaufays, Françoise
Automatic Speech Recognition models require large amounts of speech data for training, and the collection of such data often leads to privacy concerns. Federated learning has been widely used and is considered to be an effective decentralized technique …
External link:
http://arxiv.org/abs/2209.06359
Author:
Yang, Tien-Ju, Xiao, Yonghui, Motta, Giovanni, Beaufays, Françoise, Mathews, Rajiv, Chen, Mingqing
This paper addresses the challenges of training large neural network models under federated learning settings: high on-device memory usage and communication cost. The proposed Online Model Compression (OMC) provides a framework that stores model parameters …
External link:
http://arxiv.org/abs/2205.03494
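The compressed-parameter-storage idea behind the OMC entry above can be illustrated with simple linear quantization. This is a generic sketch, not the paper's exact scheme; the function names and the signed 8-bit format are assumptions:

```python
def quantize(values, num_bits=8):
    # Store parameters as small integer codes plus one per-tensor scale,
    # so the full-precision floats never need to be kept in memory.
    max_abs = max(abs(v) for v in values) or 1.0
    levels = 2 ** (num_bits - 1) - 1  # e.g. 127 for signed 8-bit codes
    scale = max_abs / levels
    codes = [round(v / scale) for v in values]
    return codes, scale

def dequantize(codes, scale):
    # Recover an approximation of the original parameters on demand.
    return [c * scale for c in codes]

values = [0.5, -1.0, 0.25]
codes, scale = quantize(values)
recovered = dequantize(codes, scale)
```

The round trip loses at most half a quantization step per parameter, which is the usual memory-for-precision trade-off such schemes accept.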
Many astrophysical phenomena are time-varying, in the sense that their brightness changes over time. In the case of periodic stars, previous approaches assumed that changes in period, amplitude, and phase are well described by either parametric or piecewise …
External link:
http://arxiv.org/abs/2111.10264
This paper aims to address the major challenges of Federated Learning (FL) on edge devices: limited memory and expensive communication. We propose a novel method, called Partial Variable Training (PVT), that only trains a small subset of variables on …
External link:
http://arxiv.org/abs/2110.05607
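The core idea in the PVT entry above, updating only a chosen subset of variables while the rest stay frozen, can be sketched as a single SGD step. The variable names and the plain-SGD update are illustrative assumptions, not the paper's implementation:

```python
def pvt_update(params, grads, trainable, lr=0.1):
    """One SGD step that touches only the trainable subset of variables.

    Frozen variables are returned unchanged, so they need no optimizer
    state on-device and no communication back to the server.
    """
    updated = {}
    for name, values in params.items():
        if name in trainable:
            updated[name] = [v - lr * g for v, g in zip(values, grads[name])]
        else:
            updated[name] = list(values)  # frozen: left untouched
    return updated

# Hypothetical two-layer model; only "encoder" is selected for training.
params = {"encoder": [1.0, 1.0], "decoder": [1.0, 1.0]}
grads = {"encoder": [0.5, 0.5], "decoder": [0.5, 0.5]}
new = pvt_update(params, grads, trainable={"encoder"})
```

Memory and communication then scale with the trainable subset rather than the full model, which is the saving the abstract refers to.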
Transformer-based architectures have been the subject of research aimed at understanding their overparameterization and the non-uniform importance of their layers. Applying these approaches to Automatic Speech Recognition, we demonstrate that the …
External link:
http://arxiv.org/abs/2110.04267
Author:
Guliani, Dhruv, Zhou, Lillian, Ryu, Changwan, Yang, Tien-Ju, Zhang, Harry, Xiao, Yonghui, Beaufays, Françoise, Motta, Giovanni
Federated learning can be used to train machine learning models on the edge on local data that never leave devices, providing privacy by default. This presents a challenge pertaining to the communication and computation costs associated with clients' …
External link:
http://arxiv.org/abs/2110.03634
Author:
Motta, Giovanni
During the last two decades, locally stationary processes have been widely studied in the time series literature. In this paper we consider the locally-stationary vector-auto-regression model of order one, or LS-VAR(1), and estimate its parameters by …
External link:
http://arxiv.org/abs/2104.11358
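For orientation, an LS-VAR(1) process of the kind referenced above is commonly written with a coefficient matrix that varies smoothly in rescaled time; the notation here is the generic one from the locally-stationary literature, not necessarily the paper's:

```latex
X_{t,T} = A\!\left(\tfrac{t}{T}\right) X_{t-1,T} + \varepsilon_t,
\qquad t = 1, \dots, T,
```

where $A(\cdot)$ is a smooth matrix-valued function on $[0,1]$ and $\varepsilon_t$ is vector white noise. The rescaling $t/T$ is what makes the process locally, rather than globally, stationary.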
Published in:
ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2021, pp. 3080-3084
We propose using federated learning, a decentralized on-device learning paradigm, to train speech recognition models. By performing epochs of training on a per-user basis, federated learning must incur the cost of dealing with non-IID data distributions …
External link:
http://arxiv.org/abs/2010.15965
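The per-user training scheme described in the ICASSP entry above follows the federated-averaging pattern: each client runs local epochs on its own data, then the server combines the results. A minimal sketch with a toy one-parameter model (the scalar model, learning rate, and function names are assumptions for illustration, not the paper's ASR setup):

```python
def local_epochs(w, user_data, lr=0.1, epochs=2):
    # On-device training: SGD on one user's (non-IID) local samples,
    # minimizing squared error to each sample.
    for _ in range(epochs):
        for x in user_data:
            w -= lr * 2 * (w - x)
    return w

def federated_round(global_w, user_datasets):
    # One round of federated averaging: every client trains locally,
    # then the server averages the results weighted by local data size.
    client_ws = [local_epochs(global_w, d) for d in user_datasets]
    sizes = [len(d) for d in user_datasets]
    return sum(w * n for w, n in zip(client_ws, sizes)) / sum(sizes)

# Two users whose local data pull the model toward different values,
# a toy stand-in for the non-IID distributions the abstract mentions.
w = federated_round(0.0, [[1.0], [3.0]])
```

Raw samples never leave the device; only the locally trained parameters are sent for averaging.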
Author:
Gooneratne, Mary, Sim, Khe Chai, Zadrazil, Petr, Kabel, Andreas, Beaufays, Françoise, Motta, Giovanni
Training machine learning models on mobile devices has the potential of improving both privacy and accuracy of the models. However, one of the major obstacles to achieving this goal is the memory limitation of mobile devices. Reducing training memory …
External link:
http://arxiv.org/abs/2001.08885