Showing 1 - 10 of 416 for search: '"Hsu, Tsu"'
Dense retrieval methods have demonstrated promising performance in multilingual information retrieval, where queries and documents can be in different languages. However, dense retrievers typically require a substantial amount of paired data, which p…
External link:
http://arxiv.org/abs/2403.03516
Author:
Kuan, Chun-Yi, Li, Chen An, Hsu, Tsu-Yuan, Lin, Tse-Yang, Chung, Ho-Lam, Chang, Kai-Wei, Chang, Shuo-yiin, Lee, Hung-yi
This paper introduces a novel voice conversion (VC) model, guided by text instructions such as "articulate slowly with a deep tone" or "speak in a cheerful boyish voice". Unlike traditional methods that rely on reference utterances to determine the a…
External link:
http://arxiv.org/abs/2309.14324
Conversational search provides a natural interface for information retrieval (IR). Recent approaches have demonstrated promising results in applying dense retrieval to conversational IR. However, training dense retrievers requires large amounts of in…
External link:
http://arxiv.org/abs/2309.06748
Author:
Fu, Yu-Kuan, Tseng, Liang-Hsuan, Shi, Jiatong, Li, Chen-An, Hsu, Tsu-Yuan, Watanabe, Shinji, Lee, Hung-yi
Most speech translation models rely heavily on parallel data, which is hard to collect, especially for low-resource languages. To tackle this issue, we propose to build a cascaded speech translation system without leveraging any kind of paired…
External link:
http://arxiv.org/abs/2305.07455
Author:
Huang, Kuan-Po, Feng, Tzu-hsun, Fu, Yu-Kuan, Hsu, Tsu-Yuan, Yen, Po-Chieh, Tseng, Wei-Cheng, Chang, Kai-Wei, Lee, Hung-yi
Distilled self-supervised models have shown competitive performance and efficiency in recent years. However, there is a lack of experience in jointly distilling multiple self-supervised speech models. In our work, we performed Ensemble Knowledge Dist…
External link:
http://arxiv.org/abs/2302.12757
Self-supervised learning (SSL) speech models generate meaningful representations of given clips and achieve incredible performance across various downstream tasks. Model extraction attack (MEA) often refers to an adversary stealing the functionality…
External link:
http://arxiv.org/abs/2211.16044
Author:
Huang, Kuan-Po, Fu, Yu-Kuan, Hsu, Tsu-Yuan, Gutierrez, Fabian Ritter, Wang, Fan-Lin, Tseng, Liang-Hsuan, Zhang, Yu, Lee, Hung-yi
Self-supervised learned (SSL) speech pre-trained models perform well across various speech processing tasks. Distilled versions of SSL models have been developed to match the needs of on-device speech applications. Though having similar performance a…
External link:
http://arxiv.org/abs/2210.07978
Self-supervised learning (SSL) speech models, which can serve as powerful upstream models to extract meaningful speech representations, have achieved unprecedented success in speech representation learning. However, their effectiveness on non-speech…
External link:
http://arxiv.org/abs/2209.12900
Author:
Manjunatha, K., Zhang, Hao, Chiu, Hsin-Hao, Ho, Ming-Kang, Hsu, Tsu-En, Yu, Shih-Lung, Chougala, Nilesh, Maruthi, N.S., Kulkarni, Sameer, Cheng, Chia-Liang, Wu, Sheng Yun, Matteppanavar, Shidaling
Published in:
In Journal of Energy Storage, 20 September 2024, 98 Part B
Author:
Manjunatha, K., Hsu, Tsu-En, Chiu, Hsin-Hao, Ho, Ming-Kang, Chethan, B., Oliveira, Marisa C., Longo, Elson, Ribeiro, Renan A.P., Yu, Shih-Lung, Cheng, Chia-Liang, Nagabhushana, H., Chen, Meng-Chu, Wu, Sheng Yun
Published in:
In Sensors and Actuators: B. Chemical, 1 January 2025, 422