Showing 1 - 10 of 16 for search: '"Yu, Zhuohao"'
Author:
Huang, Xinmei, Li, Haoyang, Zhang, Jing, Zhao, Xinxin, Yao, Zhiming, Li, Yiyan, Yu, Zhuohao, Zhang, Tieying, Chen, Hong, Li, Cuiping
Database knob tuning is a critical challenge in the database community, aiming to optimize knob values to enhance database performance for specific workloads. DBMSs often feature hundreds of tunable knobs, posing a significant challenge for DBAs to…
External link:
http://arxiv.org/abs/2404.11581
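As a concrete illustration of the tuning loop such work addresses (a minimal sketch, not this paper's method): search a knob space for a configuration that maximizes measured workload performance. The knob names and `run_workload` below are hypothetical stand-ins for benchmarking a live DBMS.

```python
import random

# Hypothetical knob space; a real DBMS exposes hundreds of such knobs.
KNOB_SPACE = {
    "shared_buffers_mb": (128, 8192),
    "work_mem_mb": (4, 512),
    "max_connections": (50, 500),
}

def run_workload(config):
    """Stand-in for benchmarking the database under `config`; a real
    tuner would replay the target workload and measure throughput or
    latency here. Dummy objective, used only to make the sketch run."""
    return -abs(config["work_mem_mb"] - 64) - abs(config["shared_buffers_mb"] - 4096) / 100

def tune(n_trials=50, seed=0):
    """Random search: sample knob configurations, keep the best one."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_trials):
        config = {k: rng.randint(lo, hi) for k, (lo, hi) in KNOB_SPACE.items()}
        score = run_workload(config)
        if score > best_score:
            best, best_score = config, score
    return best, best_score

print(tune())
```

Real tuners replace the random sampler with smarter search (e.g., Bayesian optimization), but the evaluate-and-select loop stays the same.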
Author:
Yu, Zhuohao, Gao, Chang, Yao, Wenjin, Wang, Yidong, Zeng, Zhengran, Ye, Wei, Wang, Jindong, Zhang, Yue, Zhang, Shikun
The rapid development of large language model (LLM) evaluation methodologies and datasets has led to a profound challenge: integrating state-of-the-art evaluation techniques cost-effectively while ensuring reliability, reproducibility, and efficiency…
External link:
http://arxiv.org/abs/2404.06003
Code large language models mark a pivotal breakthrough in artificial intelligence. They are specifically crafted to understand and generate programming languages, significantly boosting the efficiency of coding development workflows. In this technical…
External link:
http://arxiv.org/abs/2403.15747
Author:
Yu, Zhuohao, Gao, Chang, Yao, Wenjin, Wang, Yidong, Ye, Wei, Wang, Jindong, Xie, Xing, Zhang, Yue, Zhang, Shikun
Automatic evaluation methods for large language models (LLMs) are hindered by data contamination, leading to inflated assessments of their effectiveness. Existing strategies, which aim to detect contaminated texts, focus on quantifying contamination…
External link:
http://arxiv.org/abs/2402.15043
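For background, the detection-based strategies this abstract critiques often start from n-gram overlap between a benchmark item and candidate training text; below is a minimal sketch with made-up data, illustrating the detection approach rather than this paper's method.

```python
def ngrams(text, n):
    """Word-level n-grams of a text, as a set."""
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def overlap_ratio(benchmark_item, training_text, n=3):
    """Fraction of the item's n-grams that also appear in the training
    text; a high ratio is taken as evidence of contamination."""
    item_grams = ngrams(benchmark_item, n)
    if not item_grams:
        return 0.0
    return len(item_grams & ngrams(training_text, n)) / len(item_grams)

# Toy usage: the benchmark item is a verbatim substring of the corpus.
corpus = "the quick brown fox jumps over the lazy dog"
item = "brown fox jumps over the lazy"
print(overlap_ratio(item, corpus))  # 1.0 -> flagged as contaminated
```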
Author:
Yang, Linyi, Zhang, Shuibai, Yu, Zhuohao, Bao, Guangsheng, Wang, Yidong, Wang, Jindong, Xu, Ruochen, Ye, Wei, Xie, Xing, Chen, Weizhu, Zhang, Yue
Large Language Models (LLMs) exhibit emerging in-context learning abilities through prompt engineering. The recent progress in large-scale generative models has further expanded their use in real-world language applications. However, the critical challenge…
External link:
http://arxiv.org/abs/2312.15918
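As background on the in-context learning mentioned here: the model is conditioned on labeled demonstrations placed directly in the prompt. A generic few-shot formatting sketch follows (hypothetical examples, not this paper's setup):

```python
def build_few_shot_prompt(demonstrations, query):
    """Pack (input, label) demonstrations plus a new query into one
    prompt so the model can infer the task pattern in context."""
    parts = [f"Input: {text}\nLabel: {label}" for text, label in demonstrations]
    parts.append(f"Input: {query}\nLabel:")
    return "\n\n".join(parts)

demos = [
    ("the movie was a delight", "positive"),
    ("a dull, plodding film", "negative"),
]
print(build_few_shot_prompt(demos, "an instant classic"))
```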
Author:
Mu, Fangwen, Shi, Lin, Wang, Song, Yu, Zhuohao, Zhang, Binquan, Wang, Chenxue, Liu, Shichao, Wang, Qing
We introduce a novel framework named ClarifyGPT, which aims to enhance code generation by empowering LLMs with the ability to identify ambiguous requirements and ask targeted clarifying questions. In particular, ClarifyGPT first detects whether a given…
External link:
http://arxiv.org/abs/2310.10996
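A rough sketch of the two-stage flow the abstract describes (detect ambiguity, then ask a targeted question) is below. The `llm` and `ask_user` callables are hypothetical placeholders, and the disagreement heuristic is only one plausible reading of "detects whether a given requirement is ambiguous", not ClarifyGPT's actual API.

```python
def is_ambiguous(llm, requirement, n_samples=5):
    """One plausible ambiguity test: sample several candidate solutions
    and treat disagreement among them as a sign the requirement is
    underspecified."""
    solutions = {llm(f"Implement: {requirement}") for _ in range(n_samples)}
    return len(solutions) > 1

def clarify_then_generate(llm, ask_user, requirement):
    """If the requirement looks ambiguous, ask one targeted clarifying
    question, fold the answer back in, then generate the code."""
    if is_ambiguous(llm, requirement):
        question = llm(f"Ask one clarifying question about: {requirement}")
        answer = ask_user(question)
        requirement += f"\nClarification: {answer}"
    return llm(f"Implement: {requirement}")
```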
Author:
Wang, Yidong, Yu, Zhuohao, Zeng, Zhengran, Yang, Linyi, Wang, Cunxiang, Chen, Hao, Jiang, Chaoya, Xie, Rui, Wang, Jindong, Xie, Xing, Ye, Wei, Zhang, Shikun, Zhang, Yue
Instruction tuning large language models (LLMs) remains a challenging task, owing to the complexity of hyperparameter selection and the difficulty involved in evaluating the tuned models. To determine the optimal hyperparameters, an automatic, robust…
External link:
http://arxiv.org/abs/2306.05087
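The automatic evaluation this entry calls for can be caricatured as a pairwise judge that compares two tuned candidates response-by-response; `judge` below is a hypothetical callable returning "A", "B", or "tie", not the paper's real interface.

```python
def compare_tuned_models(judge, prompts, model_a, model_b):
    """Tally which of two instruction-tuned models the judge prefers
    across a prompt set; the winner guides hyperparameter selection."""
    wins = {"A": 0, "B": 0, "tie": 0}
    for prompt in prompts:
        verdict = judge(prompt, model_a(prompt), model_b(prompt))
        wins[verdict] += 1
    best = max(wins, key=wins.get)
    return best, wins
```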
Author:
Wang, Yidong, Yu, Zhuohao, Wang, Jindong, Heng, Qiang, Chen, Hao, Ye, Wei, Xie, Rui, Xie, Xing, Zhang, Shikun
Vision-Language models (VLMs) that use contrastive language-image pre-training have shown promising zero-shot classification performance. However, their performance on imbalanced datasets is relatively poor, where the distribution of classes in the training…
External link:
http://arxiv.org/abs/2304.01457
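For context, CLIP-style zero-shot classification of the kind referenced here scores an image embedding against text embeddings of per-class prompts; a minimal numpy sketch with a hypothetical `encode_text` encoder:

```python
import numpy as np

def zero_shot_classify(image_emb, class_names, encode_text):
    """Return the class whose prompt embedding has the highest cosine
    similarity with the image embedding (standard contrastive recipe)."""
    prompts = [f"a photo of a {name}" for name in class_names]
    text_embs = np.stack([encode_text(p) for p in prompts])
    text_embs /= np.linalg.norm(text_embs, axis=1, keepdims=True)
    image_emb = image_emb / np.linalg.norm(image_emb)
    scores = text_embs @ image_emb  # one cosine score per class
    return class_names[int(np.argmax(scores))], scores
```

On an imbalanced dataset these raw similarity scores tend to favor head classes, which is the gap this entry's abstract points at.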
Author:
Tang, Tianyi, Li, Junyi, Chen, Zhipeng, Hu, Yiwen, Yu, Zhuohao, Dai, Wenxun, Dong, Zican, Cheng, Xiaoxue, Wang, Yuhao, Zhao, Wayne Xin, Nie, Jian-Yun, Wen, Ji-Rong
To facilitate research on text generation, this paper presents a comprehensive and unified library, TextBox 2.0, focusing on the use of pre-trained language models (PLMs). To be comprehensive, our library covers 13 common text generation tasks and…
External link:
http://arxiv.org/abs/2212.13005
Author:
Li, Junyi, Tang, Tianyi, Gong, Zheng, Yang, Lixin, Yu, Zhuohao, Chen, Zhipeng, Wang, Jingyuan, Zhao, Wayne Xin, Wen, Ji-Rong
Nowadays, pretrained language models (PLMs) have dominated the majority of NLP tasks. However, little research has been conducted on systematically evaluating the language abilities of PLMs. In this paper, we present a large-scale empirical study on general…
External link:
http://arxiv.org/abs/2205.01523