Showing 1 - 10 of 69 for search: '"Wu, Peihao"'
Speech and text representations generated by pre-trained models contain modality-specific information that can be combined to benefit spoken language understanding (SLU) tasks. In this work, we propose a novel pre-training paradigm termed Continuou…
External link:
http://arxiv.org/abs/2305.17499
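The snippet above is truncated and does not describe the proposed pre-training paradigm itself, so the sketch below only illustrates its premise: representations from a pre-trained speech encoder and a pre-trained text encoder carry complementary information that can be fused for an SLU task. The class name, dimensions, and plain-concatenation fusion are illustrative assumptions, not the paper's method.

# Hypothetical sketch: fusing pooled speech and text embeddings for intent
# classification. In practice the embeddings would come from pre-trained
# encoders; here they are random placeholders so the snippet runs stand-alone.
import torch
import torch.nn as nn

class FusionSLUClassifier(nn.Module):
    def __init__(self, speech_dim=768, text_dim=768, hidden=256, num_intents=10):
        super().__init__()
        self.proj = nn.Linear(speech_dim + text_dim, hidden)
        self.head = nn.Linear(hidden, num_intents)

    def forward(self, speech_repr, text_repr):
        # speech_repr: (batch, speech_dim) pooled speech embedding
        # text_repr:   (batch, text_dim)   pooled text embedding
        fused = torch.cat([speech_repr, text_repr], dim=-1)
        return self.head(torch.relu(self.proj(fused)))

model = FusionSLUClassifier()
speech = torch.randn(4, 768)  # placeholder pooled speech features
text = torch.randn(4, 768)    # placeholder pooled text features
print(model(speech, text).shape)  # torch.Size([4, 10])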
Authors:
Wang, Bing; Liang, Xinnian; Yang, Jian; Huang, Hui; Wu, Shuangzhi; Wu, Peihao; Lu, Lu; Ma, Zejun; Li, Zhoujun
Large Language Models (LLMs) are constrained by their inability to process lengthy inputs, resulting in the loss of critical historical information. To address this limitation, in this paper we propose the Self-Controlled Memory (SCM) framework to e…
External link:
http://arxiv.org/abs/2304.13343
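The abstract is cut off before it explains how SCM works, so the following is only a generic sketch of the idea it gestures at: keep past turns in a memory stream, retrieve the most relevant ones for the current query, and assemble a length-bounded prompt. The class name, the word-overlap relevance score, and the character budget are all hypothetical.

# Hypothetical sketch of a self-controlled memory loop for a long conversation.
# It is not the paper's actual design, only the general store/retrieve pattern.
from collections import deque

def score(query, memory_item):
    """Toy relevance score: word overlap between the query and a stored turn."""
    q, m = set(query.lower().split()), set(memory_item.lower().split())
    return len(q & m)

class SelfControlledMemory:
    def __init__(self, max_items=100, top_k=3):
        self.stream = deque(maxlen=max_items)  # long-term memory stream
        self.top_k = top_k

    def add(self, turn):
        self.stream.append(turn)

    def build_prompt(self, query, budget_chars=500):
        # Retrieve the top-k most relevant past turns, then trim to the budget.
        ranked = sorted(self.stream, key=lambda t: score(query, t), reverse=True)
        context = " ".join(ranked[: self.top_k])[:budget_chars]
        return f"Relevant history: {context}\nUser: {query}"

memory = SelfControlledMemory()
memory.add("User asked about the project deadline in March.")
memory.add("User mentioned their favourite editor is Vim.")
print(memory.build_prompt("When is the project deadline?"))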
The ASR model deployment environment is ever-changing, and incoming speech can switch across different domains during a session. This poses a challenge for effective domain adaptation when only target-domain text data is available, and our obje…
External link:
http://arxiv.org/abs/2211.00968
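The snippet does not reveal the paper's adaptation method. As background, one standard way to adapt an ASR system when only target-domain text is available is shallow fusion: train an external language model on that text and interpolate its scores with the ASR scores during decoding. The function name, weight, and toy numbers below are illustrative assumptions.

# Hypothetical sketch of shallow fusion for one decoding step.
import math

def shallow_fusion_score(asr_log_probs, lm_log_probs, lm_weight=0.3):
    """Combine per-token ASR and external-LM log-probabilities."""
    return {tok: asr_log_probs[tok] + lm_weight * lm_log_probs.get(tok, math.log(1e-6))
            for tok in asr_log_probs}

# Toy next-token distributions for one decoding step.
asr = {"patient": math.log(0.35), "patience": math.log(0.40), "station": math.log(0.25)}
lm  = {"patient": math.log(0.7), "patience": math.log(0.1)}  # LM trained on medical-domain text

fused = shallow_fusion_score(asr, lm)
print(max(fused, key=fused.get))  # the domain LM flips the choice from "patience" to "patient"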
Though they achieve impressive results on many NLP tasks, BERT-like masked language models (MLMs) encounter a discrepancy between pre-training and inference. In light of this gap, we investigate the contextual representations of pre-training and inference…
External link:
http://arxiv.org/abs/2205.06603
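The truncated abstract refers to the well-known mismatch that MLMs see [MASK] tokens during pre-training but never at inference. The tiny sketch below only demonstrates that mismatch on a toy sentence; it is not the paper's analysis of contextual representations, and the masking routine is simplified.

# Hypothetical illustration of the MLM pre-training/inference discrepancy.
import random

def mlm_corrupt(tokens, mask_prob=0.15, mask_token="[MASK]"):
    """Replace each token with [MASK] with probability mask_prob (simplified)."""
    return [mask_token if random.random() < mask_prob else t for t in tokens]

sentence = "the model encounters a gap between pre-training and inference".split()
random.seed(1)
print("pre-training input:", mlm_corrupt(sentence))  # with this seed, some tokens come out as [MASK]
print("inference input:   ", sentence)               # downstream inference never sees [MASK]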
Authors:
Song, Yuhang; Hou, Jidong; Lyu, Nawei; Luo, Xinyuan; Ma, Jingxuan; Chen, Shuwen; Wu, Peihao; Jiang, Xin; Jin, Yang
Published in:
In Journal of Energy Chemistry, March 2024, 90:98-109
Academic article
This result is not available to unauthenticated users; sign in to view it.
Published in:
In Journal of Building Engineering, 1 December 2023, 80
Published in:
In Tuberculosis, September 2022, 136
Published in:
In Tuberculosis, July 2022, 135
Authors:
Tian, Xu; Zhang, Jun; Ma, Zejun; He, Yi; Wei, Juan; Wu, Peihao; Situ, Wenchang; Li, Shuai; Zhang, Yang
Recurrent neural networks (RNNs), especially long short-term memory (LSTM) RNNs, are effective networks for sequential tasks like speech recognition. Deeper LSTM models perform well on large-vocabulary continuous speech recognition because of their im…
External link:
http://arxiv.org/abs/1703.07090
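For readers unfamiliar with the setup, a stacked LSTM acoustic model of the kind the abstract alludes to can be sketched as below. The layer count, hidden size, and output dimensionality are placeholders, not the configuration studied in the paper.

# Hypothetical deep (stacked) LSTM acoustic model over acoustic frames.
import torch
import torch.nn as nn

class DeepLSTMAcousticModel(nn.Module):
    def __init__(self, feat_dim=40, hidden=512, layers=5, num_targets=3000):
        super().__init__()
        # Stacking several LSTM layers deepens the temporal model.
        self.lstm = nn.LSTM(feat_dim, hidden, num_layers=layers, batch_first=True)
        self.out = nn.Linear(hidden, num_targets)  # e.g., per-frame label posteriors

    def forward(self, features):
        # features: (batch, time, feat_dim) acoustic frames
        states, _ = self.lstm(features)
        return self.out(states)  # (batch, time, num_targets)

model = DeepLSTMAcousticModel()
frames = torch.randn(2, 100, 40)  # two utterances of 100 frames each
print(model(frames).shape)        # torch.Size([2, 100, 3000])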