Showing 1 - 10 of 996 for search: '"Lin, Yu Chen"'
Prompt Tuning has been a popular Parameter-Efficient Fine-Tuning method, attributed to its remarkable performance with few updated parameters across various large-scale pretrained Language Models (PLMs). Traditionally, each prompt has been considered indi…
External link:
http://arxiv.org/abs/2410.12847
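The snippet above describes prompt tuning only at a high level. As a hedged illustration of the general technique (not this paper's specific method), the core mechanism is to prepend a small matrix of learnable "soft prompt" vectors to the frozen input embeddings, so that only those few parameters are updated during fine-tuning. All names and sizes below are illustrative assumptions:

```python
import numpy as np

# Minimal sketch of soft prompt tuning (illustrative, not the paper's method):
# a small matrix of learnable "soft prompt" embeddings is prepended to the
# frozen token embeddings; only the prompt matrix would receive gradients.

def prepend_soft_prompt(token_embeddings, prompt_embeddings):
    """Concatenate learnable prompt vectors before the input embeddings."""
    return np.concatenate([prompt_embeddings, token_embeddings], axis=0)

rng = np.random.default_rng(0)
d_model = 8       # embedding dimension (toy size)
prompt_len = 4    # number of soft prompt tokens
seq_len = 10      # input sequence length

frozen_input = rng.normal(size=(seq_len, d_model))  # from a frozen PLM
soft_prompt = np.zeros((prompt_len, d_model))       # the only trained params

augmented = prepend_soft_prompt(frozen_input, soft_prompt)
print(augmented.shape)  # (14, 8): prompt tokens + input tokens
```

The appeal of the approach is the parameter count: here only `prompt_len * d_model` values would be trained, regardless of the size of the underlying model.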
Author:
Hong, Zong-Wei, Lin, Yu-Chen
The domain of computer vision has seen significant advances in facial-landmark detection, which is becoming increasingly essential across applications such as augmented reality, facial recognition, and emotion analysis. Unlike object detect…
External link:
http://arxiv.org/abs/2404.06029
Author:
Lin, Yu Chen
Adults with autism spectrum disorder (ASD) and intellectual disability (ID) make up the majority of the population in residential settings. Many clients in residential settings engage in problem behavior that interferes with their daily routine and work requ…
External link:
https://digital.library.unt.edu/ark:/67531/metadc2257671/
Author:
Lin, Yu-Chen, Kumar, Akhilesh, Chang, Norman, Zhang, Wenliang, Zakir, Muhammad, Apte, Rucha, He, Haiyang, Wang, Chao, Jang, Jyh-Shing Roger
We present four main contributions to enhance the performance of Large Language Models (LLMs) in generating domain-specific code: (i) utilizing LLM-based data splitting and data renovation techniques to improve the semantic representation of embeddin…
External link:
http://arxiv.org/abs/2311.16267
The tasks of automatic lyrics transcription and lyrics alignment have seen significant performance improvements in the past few years. However, most previous work focuses only on English, for which large-scale datasets are available. In this…
External link:
http://arxiv.org/abs/2311.12488
Although face anti-spoofing (FAS) methods have achieved remarkable performance on specific domains or attack types, few studies have focused on the simultaneous presence of domain changes and unknown attacks, which is closer to real application scena…
External link:
http://arxiv.org/abs/2310.11758
Author:
Lin-Yu Chen, Yu-Ting Chou, Phui-Ly Liew, Ling-Hui Chu, Kuo-Chang Wen, Shiou-Fu Lin, Yu-Chun Weng, Hui-Chen Wang, Po-Hsuan Su, Hung-Cheng Lai
Published in:
Journal of Ovarian Research, Vol 17, Iss 1, Pp 1-6 (2024)
External link:
https://doaj.org/article/60f9ff7b82144ef9bb50abed68080213
Author:
Lin-Yu Chen, Yu-Ting Chou, Phui-Ly Liew, Ling-Hui Chu, Kuo-Chang Wen, Shiou-Fu Lin, Yu-Chun Weng, Hui-Chen Wang, Po-Hsuan Su, Hung-Cheng Lai
Published in:
Journal of Ovarian Research, Vol 17, Iss 1, Pp 1-12 (2024)
Abstract: Background: Ovarian cancer is the most lethal gynecological cancer. As the primary treatment, chemotherapy has a response rate of only 60–70% in advanced stages, and even lower as a second-line treatment. Despite guideline recommendations, …
External link:
https://doaj.org/article/ad9976421da742c6a2aa45c571630064
Author:
Lin, Kuan-Ting, Hsu, Ting, Aziz, Fahad, Lin, Yu-Chen, Wen, Ping-Yi, Hoi, Io-Chun, Lin, Guin-Dar
This study conducts a theoretical investigation of the signal amplification arising from multiple Rabi sideband coherence within a one-dimensional waveguide quantum electrodynamics system. We utilize a semi-infinite waveguide to drive an anharmonic…
External link:
http://arxiv.org/abs/2307.11174
Large-scale pre-trained language models such as BERT are popular solutions for text classification. Due to the superior performance of these advanced methods, people nowadays often train them directly for a few epochs and deploy the resulting model.
External link:
http://arxiv.org/abs/2306.07111