Showing 1 - 10 of 30 results for search: '"Wang, Quandong"'
Author:
Wang, Quandong, Yuan, Yuxuan, Yang, Xiaoyu, Zhang, Ruike, Zhao, Kang, Liu, Wei, Luan, Jian, Povey, Daniel, Wang, Bin
While Large Language Models (LLMs) have achieved remarkable success in various fields, the efficiency of training and inference remains a major challenge. To address this issue, we propose SUBLLM, short for Subsampling-Upsampling-Bypass Large Language Model…
External link:
http://arxiv.org/abs/2406.06571
In this paper, we investigate representation learning for low-resource keyword spotting (KWS). The main challenges of KWS are limited labeled data and limited available device resources. To address those challenges, we explore representation learning…
External link:
http://arxiv.org/abs/2303.10912
Author:
Guo, Liyong, Yang, Xiaoyu, Wang, Quandong, Kong, Yuxiang, Yao, Zengwei, Cui, Fan, Kuang, Fangjun, Kang, Wei, Lin, Long, Luo, Mingshuang, Zelasko, Piotr, Povey, Daniel
Knowledge distillation (KD) is a common approach to improving model performance in automatic speech recognition (ASR), where a student model is trained to imitate the output behaviour of a teacher model. However, traditional KD methods suffer from teach…
External link:
http://arxiv.org/abs/2211.00508
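The abstract above describes the core KD idea: the student is trained to imitate the teacher's output distribution. As a generic illustration only (this is not the paper's specific method, whose details are cut off in the truncated abstract), a standard distillation loss is the KL divergence between temperature-softened teacher and student outputs; the temperature of 2.0 below is an arbitrary example value:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax with max subtraction for numerical stability.
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    # KL(teacher || student) on temperature-softened distributions;
    # minimizing this trains the student to imitate the teacher's outputs.
    p = softmax(teacher_logits, temperature)  # teacher soft targets
    q = softmax(student_logits, temperature)  # student predictions
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1)))
```

The loss is zero exactly when the student reproduces the teacher's distribution and positive otherwise; in practice it is usually mixed with the ordinary cross-entropy against ground-truth labels.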
Author:
Wang, Quandong, Wu, Junnan, Yan, Zhao, Qian, Sichong, Guo, Liyong, Fan, Lichun, Zhuang, Weiji, Gao, Peng, Wang, Yujun
We propose a multi-channel speech enhancement approach with a novel two-stage feature fusion method and a pre-trained acoustic model in a multi-task learning paradigm. In the first fusion stage, the time-domain and frequency-domain features are extra…
External link:
http://arxiv.org/abs/2107.11222
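The abstract mentions extracting time-domain and frequency-domain features in the first fusion stage but is truncated before the details. As a rough sketch of what concatenative time/frequency feature fusion can look like (the paper's actual architecture is not shown here; the frame and FFT sizes are arbitrary example values):

```python
import numpy as np

def stft_mag(x, n_fft=256, hop=128):
    # Magnitude spectrogram via framed FFT with a Hann window.
    win = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop : i * hop + n_fft] * win for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=-1))  # (n_frames, n_fft // 2 + 1)

def fuse_features(x, n_fft=256, hop=128):
    # Concatenate raw time-domain frames with their magnitude spectra,
    # giving each frame both waveform and spectral views.
    n_frames = 1 + (len(x) - n_fft) // hop
    time_feats = np.stack([x[i * hop : i * hop + n_fft] for i in range(n_frames)])
    freq_feats = stft_mag(x, n_fft, hop)
    return np.concatenate([time_feats, freq_feats], axis=-1)
```

For a 1024-sample input with these settings, this yields 7 frames of 256 time-domain samples plus 129 magnitude bins each.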
The front-end module in multi-channel automatic speech recognition (ASR) systems mainly uses microphone array techniques to produce enhanced signals in noisy conditions with reverberation and echoes. Recently, the neural network (NN) based front-end has sh…
External link:
http://arxiv.org/abs/2011.09081
Published in:
Kongzhi Yu Xinxi Jishu, Iss 1, Pp 64-70 (2022)
Insufficient data, a lack of expertise in AI application development, and weak device computing capabilities have severely restricted the rapid engineering implementation of rail transit intelligent products. In order to solve these problems, this pape…
External link:
https://doaj.org/article/b3d78e6369ae41f9845e4d2373ad05df
Academic article
Published in:
In Applied Soft Computing Journal June 2018 67:350-369
Published in:
In Information Sciences May 2018 442-443:54-71
Author:
Guo, Liyong, Yang, Xiaoyu, Wang, Quandong, Kong, Yuxiang, Yao, Zengwei, Cui, Fan, Kuang, Fangjun, Kang, Wei, Lin, Long, Luo, Mingshuang, Zelasko, Piotr, Povey, Daniel
Published in:
ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
Knowledge distillation (KD) is a common approach to improving model performance in automatic speech recognition (ASR), where a student model is trained to imitate the output behaviour of a teacher model. However, traditional KD methods suffer from teach…