Showing 1 - 10 of 466 for search: '"Helen Meng"'
Author:
Thomas K.F. Chiu, Yifan Chen, King Woon Yau, Ching-sing Chai, Helen Meng, Irwin King, Savio Wong, Yeung Yam
Published in:
Computers and Education: Artificial Intelligence, Vol 7, 100282 (2024)
The majority of AI literacy studies have designed and developed self-reported questionnaires to assess AI learning and understanding. These studies assessed students' perceived AI capability rather than AI literacy because self-perceptions are seldom…
External link:
https://doaj.org/article/17f01103fd554fb4ad45da29958eb812
Published in:
IEEE Access, Vol 12, Pp 93761-93770 (2024)
Dialogue State Tracking (DST) models often employ intricate neural network architectures, necessitating substantial training data, and their inference process lacks transparency. This paper proposes a method that extracts linguistic knowledge via an…
External link:
https://doaj.org/article/db24aafed8a749aeb4ebaef73f81fdad
Published in:
APSIPA Transactions on Signal and Information Processing, Vol 13, Iss 2 (2024)
External link:
https://doaj.org/article/d1df4d6a860b49c0a4785877ecad4d69
Author:
Sean Shensheng Xu, Xiaoquan Ke, Man-Wai Mak, Ka Ho Wong, Helen Meng, Timothy C. Y. Kwok, Jason Gu, Jian Zhang, Wei Tao, Chunqi Chang
Published in:
Frontiers in Neuroscience, Vol 17 (2024)
Introduction: Speaker diarization is an essential preprocessing step for diagnosing cognitive impairments from speech-based Montreal Cognitive Assessments (MoCA). Methods: This paper proposes three enhancements to the conventional speaker diarization meth…
External link:
https://doaj.org/article/89297ecfd0b64ef393a6c2384fba0a71
Published in:
IEEE/ACM Transactions on Audio, Speech, and Language Processing. 31:1811-1824
Published in:
IEEE/ACM Transactions on Audio, Speech, and Language Processing. 31:1024-1036
Author:
Jiuxin Lin, Xinyu Cai, Heinrich Dinkel, Jun Chen, Zhiyong Yan, Yongqing Wang, Junbo Zhang, Zhiyong Wu, Yujun Wang, Helen Meng
Visual information can serve as an effective cue for target speaker extraction (TSE) and is vital to improving extraction performance. In this paper, we propose AV-SepFormer, a SepFormer-based attention dual-scale model that utilizes cross- and self-…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::2dec513510dda6a436e9c30b2e2e4b7e
http://arxiv.org/abs/2306.14170
Published in:
ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
Published in:
ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
Published in:
ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).