Showing 1 - 10 of 3,232
for search: '"Chuang, Yu"'
Author:
Chuang, Yu-Neng, Zhou, Helen, Sarma, Prathusha Kameswara, Gopalan, Parikshit, Boccio, John, Bolouki, Sara, Hu, Xia
Large language models (LLMs) have demonstrated impressive performance on several tasks and are increasingly deployed in real-world applications. However, especially in high-stakes settings, it becomes vital to know when the output of an LLM may be un…
External link:
http://arxiv.org/abs/2410.13284
Author:
Wang, Guanchu, Chuang, Yu-Neng, Tang, Ruixiang, Zhong, Shaochen, Yuan, Jiayi, Jin, Hongye, Liu, Zirui, Chaudhary, Vipin, Xu, Shuai, Caverlee, James, Hu, Xia
Ensuring the security of released large language models (LLMs) poses a significant dilemma, as existing mechanisms either compromise ownership rights or raise data privacy concerns. To address this dilemma, we introduce TaylorMLP to protect the owner…
External link:
http://arxiv.org/abs/2410.05331
In Standard Chinese, Tone 3 (the dipping tone) becomes Tone 2 (rising tone) when followed by another Tone 3. Previous studies have noted that this sandhi process may be incomplete, in the sense that the assimilated Tone 3 is still distinct from a tru…
External link:
http://arxiv.org/abs/2408.15747
Author:
Wang, Yicheng, Yuan, Jiayi, Chuang, Yu-Neng, Wang, Zhuoer, Liu, Yingchi, Cusick, Mark, Kulkarni, Param, Ji, Zhengping, Ibrahim, Yasser, Hu, Xia
Large Language Models (LLMs) are increasingly serving as evaluators in Natural Language Generation (NLG) tasks. However, the capabilities of LLMs in scoring NLG quality remain inadequately explored. Current studies depend on human assessments and sim…
External link:
http://arxiv.org/abs/2408.13704
Author:
Wang, Guanchu, Ran, Junhao, Tang, Ruixiang, Chang, Chia-Yuan, Chuang, Yu-Neng, Liu, Zirui, Braverman, Vladimir, Liu, Zhandong, Hu, Xia
Despite the impressive capabilities of Large Language Models (LLMs) in general medical domains, questions remain about their performance in diagnosing rare diseases. To answer this question, we aim to assess the diagnostic performance of LLMs in rare…
External link:
http://arxiv.org/abs/2408.08422
Author:
Yuan, Jiayi, Liu, Hongyi, Zhong, Shaochen, Chuang, Yu-Neng, Li, Songchen, Wang, Guanchu, Le, Duy, Jin, Hongye, Chaudhary, Vipin, Xu, Zhaozhuo, Liu, Zirui, Hu, Xia
Long context capability is a crucial competency for large language models (LLMs) as it mitigates the human struggle to digest long-form texts. This capability enables complex task-solving scenarios such as book summarization, code assistance, and man…
External link:
http://arxiv.org/abs/2407.01527
Author:
Chuang, Yu-Neng, Li, Songchen, Yuan, Jiayi, Wang, Guanchu, Lai, Kwei-Herng, Yu, Leisheng, Ding, Sirui, Chang, Chia-Yuan, Tan, Qiaoyu, Zha, Daochen, Hu, Xia
Inspired by Large Language Models (LLMs), Time Series Forecasting (TSF), a long-standing task in time series analysis, is undergoing a transition towards Large Time Series Models (LTSMs), aiming to train universal transformer-based models for TSF. Ho…
External link:
http://arxiv.org/abs/2406.14045
Foundation Models (FMs) serve as a general class for the development of artificial intelligence systems, offering broad potential for generalization across a spectrum of downstream tasks. Despite extensive research into self-supervised learning as th…
External link:
http://arxiv.org/abs/2406.08310
The pitch contours of Mandarin two-character words are generally understood as being shaped by the underlying tones of the constituent single-character words, in interaction with articulatory constraints imposed by factors such as speech rate, co-art…
External link:
http://arxiv.org/abs/2405.07006
Investigating differences in lab-quality and remote recording methods with dynamic acoustic measures
Published in:
Laboratory Phonology 2024 15(1)
Increasingly, phonetic research utilizes data collected from participants who record themselves on readily available devices. Though such recordings are convenient, their suitability for acoustic analysis remains an open question, especially regardin…
External link:
http://arxiv.org/abs/2404.17022