Showing 1 - 10 of 17 for search: '"Liu, Dancheng"'
Author:
Qin, Ruiyang, Ren, Pengyu, Yan, Zheyu, Liu, Liu, Liu, Dancheng, Nassereldine, Amir, Xiong, Jinjun, Ni, Kai, Hu, Sharon, Shi, Yiyu
Large Language Models (LLMs) deployed on edge devices, known as edge LLMs, need to continuously fine-tune their model parameters from user-generated data under limited resource constraints. However, most existing learning methods are not applicable f…
External link:
http://arxiv.org/abs/2411.08244
Author:
Liu, Dancheng, Yang, Jason, Albrecht-Buehler, Ishan, Qin, Helen, Li, Sophie, Hu, Yuting, Nassereldine, Amir, Xiong, Jinjun
Speech is a fundamental aspect of human life, crucial not only for communication but also for cognitive, social, and academic development. Children with speech disorders (SD) face significant challenges that, if unaddressed, can result in lasting neg…
External link:
http://arxiv.org/abs/2410.11865
Author:
Ward, Nigel G., Segura, Andres, Bugarini, Georgina, Lehnert-LeHouillier, Heike, Liu, Dancheng, Xiong, Jinjun, Fuentes, Olac
The diagnosis and treatment of individuals with communication disorders offers many opportunities for the application of speech technology, but research so far has not adequately considered: the diversity of conditions, the role of pragmatic deficits…
External link:
http://arxiv.org/abs/2409.09170
To address the challenge of automating knowledge discovery from a vast volume of literature, in this paper, we introduce a novel framework based on large language models (LLMs) that combines a progressive ontology prompting (POP) algorithm with a dua…
External link:
http://arxiv.org/abs/2409.00054
Author:
Liu, Dancheng, Xiong, Jinjun
Automatic Speech Recognition (ASR) for adults' speech has recently made significant progress by employing deep neural network (DNN) models, but improvement for children's speech is still unsatisfactory due to children's speech's distinct characteris…
External link:
http://arxiv.org/abs/2406.17926
Author:
Liu, Dancheng, Nassereldine, Amir, Yang, Ziming, Xu, Chenhui, Hu, Yuting, Li, Jiajie, Kumar, Utkarsh, Lee, Changjae, Xiong, Jinjun
Large language models (LLMs) have attracted significant attention for their remarkable abilities in various natural language processing tasks, but they suffer from hallucinations that will cause performance degradation. One promising solution to impr…
External link:
http://arxiv.org/abs/2406.15673
As edge-based automatic speech recognition (ASR) technologies become increasingly prevalent for the development of intelligent and personalized assistants, three important challenges must be addressed for these resource-constrained ASR models, i.e., …
External link:
http://arxiv.org/abs/2406.15668
Author:
Qin, Ruiyang, Liu, Dancheng, Xu, Chenhui, Yan, Zheyu, Tan, Zhaoxuan, Jia, Zhenge, Nassereldine, Amir, Li, Jiajie, Jiang, Meng, Abbasi, Ahmed, Xiong, Jinjun, Shi, Yiyu
The scaling laws have become the de facto guidelines for designing large language models (LLMs), but they were studied under the assumption of unlimited computing resources for both training and inference. As LLMs are increasingly used as personalize…
External link:
http://arxiv.org/abs/2406.03777
Author:
Qin, Ruiyang, Yan, Zheyu, Zeng, Dewen, Jia, Zhenge, Liu, Dancheng, Liu, Jianbo, Zheng, Zhi, Cao, Ningyuan, Ni, Kai, Xiong, Jinjun, Shi, Yiyu
Large Language Models (LLMs) deployed on edge devices learn through fine-tuning and updating a certain portion of their parameters. Although such learning methods can be optimized to reduce resource utilization, the overall required resources remain…
External link:
http://arxiv.org/abs/2405.04700
Author:
Liu, Dancheng, Xiong, Jinjun
Deep learning models have exhibited remarkable performance across various domains. Nevertheless, the burgeoning model sizes compel edge devices to offload a significant portion of the inference process to the cloud. While this practice offers numerou…
External link:
http://arxiv.org/abs/2401.10859