Showing 1 - 10
of 3,869
for search: '"Fu An Wei"'
NeKo: Toward Post Recognition Generative Correction Large Language Models with Task-Oriented Experts
Author:
Lin, Yen-Ting, Yang, Chao-Han Huck, Chen, Zhehuai, Zelasko, Piotr, Yang, Xuesong, Chen, Zih-Ching, Puvvada, Krishna C, Fu, Szu-Wei, Hu, Ke, Chiu, Jun Wei, Balam, Jagadeesh, Ginsburg, Boris, Wang, Yu-Chiang Frank
Construction of a general-purpose post-recognition error corrector poses a crucial question: how can we most effectively train a model on a large mixture of domain datasets? The answer would lie in learning dataset-specific features and digesting the…
External link:
http://arxiv.org/abs/2411.05945
State-of-the-art (SOTA) semi-supervised learning techniques, such as FixMatch and its variants, have demonstrated impressive performance in classification tasks. However, these methods are not directly applicable to regression tasks. In this paper,…
External link:
http://arxiv.org/abs/2410.22124
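The FixMatch recipe named in the snippet above selects unlabeled samples by thresholding the model's confidence on weakly augmented inputs and treating the argmax class as a pseudo-label; that hard argmax/threshold step is exactly what has no direct analogue in regression. A minimal sketch of the selection rule (function name, toy data, and the 0.95 threshold are illustrative, not taken from the paper):

```python
import numpy as np

def fixmatch_pseudo_labels(probs_weak, threshold=0.95):
    """FixMatch-style selection: take the argmax class of the
    weak-augmentation prediction as a pseudo-label, but keep a sample
    only if its predicted confidence clears the threshold."""
    confidence = probs_weak.max(axis=1)   # max class probability per sample
    pseudo = probs_weak.argmax(axis=1)    # hard pseudo-label (undefined for regression)
    mask = confidence >= threshold        # which samples enter the unsupervised loss
    return pseudo, mask

# toy batch of 3 unlabeled samples over 4 classes
p = np.array([[0.97, 0.01, 0.01, 0.01],
              [0.40, 0.30, 0.20, 0.10],
              [0.02, 0.02, 0.96, 0.00]])
labels, mask = fixmatch_pseudo_labels(p)
# only the first and third samples clear the 0.95 threshold
```

The argmax/threshold pair presupposes a discrete label space, which is why extending the idea to regression requires a different notion of prediction confidence.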
Published in:
Food Technology and Biotechnology, Vol 59, Iss 3, Pp 360-375 (2021)
Research background. Cardiovascular diseases and diabetes are the biggest causes of death globally. Bioactive peptides derived from many food proteins using enzymatic proteolysis and food processing have a positive impact on the prevention of these d…
External link:
https://doaj.org/article/c3eed9e01b8240eaaed5bfa8f8e2554c
Published in:
CyTA - Journal of Food, Vol 19, Iss 1, Pp 304-315 (2021)
Nutritional composition, functional and antioxidant properties of different parts from grass turtle were investigated. Muscle contained high protein, essential amino acids, and vitamin B6, which were 74.97%, 28363.84, and 14.47 mg/100 g, respectively…
External link:
https://doaj.org/article/1b018636d34946dcb3e8c5d5f3951a85
Author:
Lu, Ke-Han, Chen, Zhehuai, Fu, Szu-Wei, Yang, Chao-Han Huck, Balam, Jagadeesh, Ginsburg, Boris, Wang, Yu-Chiang Frank, Lee, Hung-yi
Recent end-to-end speech language models (SLMs) have expanded upon the capabilities of large language models (LLMs) by incorporating pre-trained speech models. However, these SLMs often undergo extensive speech instruction-tuning to bridge the gap be…
External link:
http://arxiv.org/abs/2409.20007
This paper proposes a generative pretraining foundation model for high-quality speech restoration tasks. By directly operating on complex-valued short-time Fourier transform coefficients, our model does not rely on any vocoders for time-domain signal…
External link:
http://arxiv.org/abs/2409.16117
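The point of operating on complex-valued STFT coefficients, as the abstract above notes, is that both magnitude and phase are retained, so the waveform comes back through a plain inverse STFT with no learned vocoder. A round-trip sketch with SciPy (signal and parameters are illustrative; the identity step stands in for a restoration model):

```python
import numpy as np
from scipy.signal import stft, istft

sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 440 * t)      # 1 s of a 440 Hz tone

# forward STFT: complex coefficients keep magnitude AND phase
_, _, Z = stft(x, fs=sr, nperseg=512)

# a restoration model would modify Z here; identity stands in for it

# inverse STFT recovers the waveform directly -- no vocoder step
_, x_rec = istft(Z, fs=sr, nperseg=512)
n = min(len(x), len(x_rec))
err = np.max(np.abs(x[:n] - x_rec[:n]))   # near-zero reconstruction error
```

The default Hann window with 50% overlap satisfies the constant-overlap-add condition, so `istft` inverts `stft` up to floating-point error; a magnitude-only pipeline would instead need phase reconstruction or a vocoder at this point.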
Author:
Huang, Wen-Chin, Fu, Szu-Wei, Cooper, Erica, Zezario, Ryandhimas E., Toda, Tomoki, Wang, Hsin-Min, Yamagishi, Junichi, Tsao, Yu
We present the third edition of the VoiceMOS Challenge, a scientific initiative designed to advance research into automatic prediction of human speech ratings. There were three tracks. The first track was on predicting the quality of "zoomed-in" hi…
External link:
http://arxiv.org/abs/2409.07001
Author:
Khan, Muhammad Salman, La Quatra, Moreno, Hung, Kuo-Hsuan, Fu, Szu-Wei, Siniscalchi, Sabato Marco, Tsao, Yu
Self-supervised representation learning (SSL) has attained SOTA results on several downstream speech tasks, but SSL-based speech enhancement (SE) solutions still lag behind. To address this issue, we exploit three main ideas: (i) Transformer-based ma…
External link:
http://arxiv.org/abs/2408.04773
Author:
Lu, Ke-Han, Chen, Zhehuai, Fu, Szu-Wei, Huang, He, Ginsburg, Boris, Wang, Yu-Chiang Frank, Lee, Hung-yi
Recent speech language models (SLMs) typically incorporate pre-trained speech models to extend the capabilities from large language models (LLMs). In this paper, we propose a Descriptive Speech-Text Alignment approach that leverages speech captioning…
External link:
http://arxiv.org/abs/2406.18871
Author:
Chao, Rong, Cheng, Wen-Huang, La Quatra, Moreno, Siniscalchi, Sabato Marco, Yang, Chao-Han Huck, Fu, Szu-Wei, Tsao, Yu
This work aims to study a scalable state-space model (SSM), Mamba, for the speech enhancement (SE) task. We exploit a Mamba-based regression model to characterize speech signals and build an SE system upon Mamba, termed SEMamba. We explore the proper…
External link:
http://arxiv.org/abs/2405.06573
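Mamba, named in the snippet above, is built on the discrete state-space recurrence h_t = A h_{t-1} + B x_t, y_t = C h_t; Mamba additionally makes the parameters input-dependent (selective), which this sketch omits. A plain time-invariant scan with made-up dimensions, to show the recurrence itself:

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Sequential scan of a discrete linear state-space model:
    h_t = A @ h_{t-1} + B * x_t,  y_t = C @ h_t."""
    h = np.zeros(A.shape[0])
    ys = []
    for xt in x:                  # one step per input sample
        h = A @ h + B * xt        # state update
        ys.append(float(C @ h))   # read-out
    return np.array(ys)

# tiny example: 2-dim state, scalar input/output
A = np.array([[0.9, 0.0], [0.0, 0.5]])   # state transition (two decay rates)
B = np.array([1.0, 1.0])                  # input projection
C = np.array([0.5, 0.5])                  # output projection
y = ssm_scan(np.array([1.0, 0.0, 0.0]), A, B, C)   # impulse response
```

Because the recurrence is linear in the state, the same computation can be expressed as a parallel scan or a long convolution, which is what makes SSMs attractive for long audio sequences.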