TouchASP: Elastic Automatic Speech Perception that Everyone Can Touch
Author: Song, Xingchen; Liang, Chengdong; Zhang, Binbin; Zhang, Pengshen; Wang, ZiYu; Ma, Youcheng; Xu, Menglong; Wang, Lin; Wu, Di; Pan, Fuping; Zhou, Dinghao; Peng, Zhendong
Year of Publication: 2024
Document Type: Working Paper
Description: Large Automatic Speech Recognition (ASR) models require vast numbers of parameters, copious amounts of data, and significant computational resources to train. However, such models can only be deployed on high-compute cloud platforms and can perform only speech recognition, which leads to high costs and restricted capabilities. In this report, we first propose the elastic mixture of experts (eMoE) model. This model is trained just once and can then be elastically scaled to match deployment requirements. Second, we devise an unsupervised data creation and validation procedure and gather millions of hours of audio data from diverse domains for training. Using these two techniques, our system achieves elastic deployment capabilities while reducing the Character Error Rate (CER) on the SpeechIO testsets from 4.98% to 2.45%. Third, our model is not only competent in Mandarin speech recognition but also proficient in multilingual, multi-dialect, emotion, gender, and sound event perception. We refer to this as Automatic Speech Perception (ASP), and the perception results are presented in the experimental section. Comment: Technical Report
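The abstract does not explain how a single trained model can be elastically scaled at deployment time. As a minimal sketch of one way such an elastic mixture-of-experts layer could work, the PyTorch module below routes each input to a small top-k subset of experts and lets the caller cap how many experts are active at inference; all names, the top-k routing scheme, and the prefix-subsetting trick are assumptions for illustration, not the authors' eMoE implementation.

```python
# Sketch of an "elastic" mixture-of-experts (MoE) layer: train once with the
# full expert set, then deploy with any prefix of the experts. This is an
# illustrative assumption, not the eMoE design from the report.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ElasticMoE(nn.Module):
    def __init__(self, dim: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)  # scores every expert
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        self.top_k = top_k

    def forward(self, x: torch.Tensor, active_experts: int | None = None) -> torch.Tensor:
        # At deployment, restrict routing to the first `active_experts`
        # experts; the unused ones need not even be loaded into memory.
        # (For this to work well, training would have to vary the active
        # count per step so every prefix stays usable -- an assumption here.)
        n = active_experts or len(self.experts)
        logits = self.router(x)[..., :n]                        # (batch, n)
        weights = F.softmax(logits, dim=-1)
        topw, topi = weights.topk(min(self.top_k, n), dim=-1)   # sparse routing
        topw = topw / topw.sum(dim=-1, keepdim=True)            # renormalize
        out = torch.zeros_like(x)
        for slot in range(topi.shape[-1]):
            for e in range(n):
                mask = topi[..., slot] == e
                if mask.any():
                    out[mask] += topw[..., slot][mask, None] * self.experts[e](x[mask])
        return out

# A cloud deployment might run the full layer, while an edge device uses
# only a fraction of the experts that were trained jointly with the rest:
layer = ElasticMoE(dim=256)
x = torch.randn(4, 256)
full = layer(x)                      # all 8 experts available
small = layer(x, active_experts=2)   # elastically scaled-down deployment
```

The point of the sketch is that elasticity comes for free once routing is sparse: shrinking the expert pool only narrows the router's softmax, so no retraining is needed to produce the smaller deployment.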
Database: arXiv