Showing 1 - 10 of 870 for search: '"Yu Neng"'
Author:
Wang, Yu-Neng, Achour, Sara
As the demand for efficient data processing escalates, reconfigurable analog hardware, which implements novel analog compute paradigms, is promising for energy-efficient computing at the sensing and actuation boundaries. These analog computing platforms…
External link:
http://arxiv.org/abs/2411.03557
Author:
Chuang, Yu-Neng, Zhou, Helen, Sarma, Prathusha Kameswara, Gopalan, Parikshit, Boccio, John, Bolouki, Sara, Hu, Xia
Large language models (LLMs) have demonstrated impressive performance on several tasks and are increasingly deployed in real-world applications. However, especially in high-stakes settings, it becomes vital to know when the output of an LLM may be un…
External link:
http://arxiv.org/abs/2410.13284
Author:
Wang, Guanchu, Chuang, Yu-Neng, Tang, Ruixiang, Zhong, Shaochen, Yuan, Jiayi, Jin, Hongye, Liu, Zirui, Chaudhary, Vipin, Xu, Shuai, Caverlee, James, Hu, Xia
Ensuring the security of released large language models (LLMs) poses a significant dilemma, as existing mechanisms either compromise ownership rights or raise data privacy concerns. To address this dilemma, we introduce TaylorMLP to protect the owner…
External link:
http://arxiv.org/abs/2410.05331
Author:
Wang, Yicheng, Yuan, Jiayi, Chuang, Yu-Neng, Wang, Zhuoer, Liu, Yingchi, Cusick, Mark, Kulkarni, Param, Ji, Zhengping, Ibrahim, Yasser, Hu, Xia
Large Language Models (LLMs) are increasingly serving as evaluators in Natural Language Generation (NLG) tasks. However, the capabilities of LLMs in scoring NLG quality remain inadequately explored. Current studies depend on human assessments and sim…
External link:
http://arxiv.org/abs/2408.13704
Author:
Wang, Guanchu, Ran, Junhao, Tang, Ruixiang, Chang, Chia-Yuan, Chuang, Yu-Neng, Liu, Zirui, Braverman, Vladimir, Liu, Zhandong, Hu, Xia
Despite the impressive capabilities of Large Language Models (LLMs) in general medical domains, questions remain about their performance in diagnosing rare diseases. To answer this question, we aim to assess the diagnostic performance of LLMs in rare…
External link:
http://arxiv.org/abs/2408.08422
Author:
Yuan, Jiayi, Liu, Hongyi, Zhong, Shaochen, Chuang, Yu-Neng, Li, Songchen, Wang, Guanchu, Le, Duy, Jin, Hongye, Chaudhary, Vipin, Xu, Zhaozhuo, Liu, Zirui, Hu, Xia
Long-context capability is a crucial competency for large language models (LLMs), as it mitigates the human struggle to digest long-form texts. This capability enables complex task-solving scenarios such as book summarization, code assistance, and man…
External link:
http://arxiv.org/abs/2407.01527
Author:
Chuang, Yu-Neng, Li, Songchen, Yuan, Jiayi, Wang, Guanchu, Lai, Kwei-Herng, Yu, Leisheng, Ding, Sirui, Chang, Chia-Yuan, Tan, Qiaoyu, Zha, Daochen, Hu, Xia
Inspired by Large Language Models (LLMs), Time Series Forecasting (TSF), a long-standing task in time series analysis, is undergoing a transition towards Large Time Series Models (LTSMs), aiming to train universal transformer-based models for TSF. Ho…
External link:
http://arxiv.org/abs/2406.14045
Foundation Models (FMs) serve as a general class for the development of artificial intelligence systems, offering broad potential for generalization across a spectrum of downstream tasks. Despite extensive research into self-supervised learning as the…
External link:
http://arxiv.org/abs/2406.08310
Author:
Liu, Hongyi, Liu, Zirui, Tang, Ruixiang, Yuan, Jiayi, Zhong, Shaochen, Chuang, Yu-Neng, Li, Li, Chen, Rui, Hu, Xia
Fine-tuning LLMs is crucial to enhancing their task-specific performance and ensuring model behaviors are aligned with human preferences. Among various fine-tuning methods, LoRA is popular for its efficiency and ease of use, allowing end-users to easily…
External link:
http://arxiv.org/abs/2403.00108
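The entry above names LoRA as an efficient fine-tuning method. As a rough illustration of why it is cheap, here is a minimal, hypothetical sketch of a LoRA-style layer in PyTorch: the frozen base weight W is augmented with a trainable low-rank update scaled by alpha/r, so only r * (d_in + d_out) parameters are trained. The class name, rank, and scaling defaults are illustrative assumptions, not the paper's method or any library's exact interface.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Hypothetical minimal LoRA-style layer: a frozen base linear layer
    plus a trainable low-rank update (alpha / r) * B @ A."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base weights stay frozen
        # A: (r, d_in) small random init; B: (d_out, r) zero init,
        # so the update starts at zero and the model is unchanged at step 0.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # W x + (alpha / r) * B A x; only A and B receive gradients.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Usage sketch: wrap one projection of a model and fine-tune only A and B.
layer = LoRALinear(nn.Linear(768, 768), rank=8)
out = layer(torch.randn(4, 768))
```

Because only the two small matrices are optimized, adapters for many tasks can be stored and swapped against a single shared base model, which is the ease-of-use property the abstract alludes to.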
Large language models (LLMs) are great at processing multiple natural language processing tasks, but their abilities are constrained by inferior performance on long contexts, slow inference speed, and the high cost of computing the results. Deploying…
External link:
http://arxiv.org/abs/2402.18700