Showing 1 - 10 of 12 for the search: '"Hao, Hongkun"'
We present GSM-MC, a multiple-choice (MC) dataset constructed by collecting answers and incorrect predictions on GSM8K from 60 open-source models. Through extensive experiments, we show that LLMs' performance on the MC version of this popular benchmark…
External link:
http://arxiv.org/abs/2405.11966
Author:
Li, Chunyi, Wu, Haoning, Hao, Hongkun, Zhang, Zicheng, Kou, Tengchaun, Chen, Chaofeng, Bai, Lei, Liu, Xiaohong, Lin, Weisi, Zhai, Guangtao
With the evolution of Text-to-Image (T2I) models, the quality defects of AI-Generated Images (AIGIs) pose a significant barrier to their widespread adoption. In terms of both perception and alignment, existing models cannot always guarantee high-quality…
External link:
http://arxiv.org/abs/2404.18343
Author:
Hu, Shujie, Zhou, Long, Liu, Shujie, Chen, Sanyuan, Hao, Hongkun, Pan, Jing, Liu, Xunying, Li, Jinyu, Sivasankaran, Sunit, Liu, Linquan, Wei, Furu
The recent advancements in large language models (LLMs) have revolutionized the field of natural language processing, progressively broadening their scope to multimodal perception and generation. However, effectively integrating listening capabilities…
External link:
http://arxiv.org/abs/2404.00656
Current language models decode text token by token according to probabilistic distribution, and determining the appropriate candidates for the next token is crucial to ensure generation quality. This study introduces adaptive decoding, a mechanism that…
External link:
http://arxiv.org/abs/2402.18223
Author:
Ai, Yiming, He, Zhiwei, Zhang, Ziyin, Zhu, Wenhong, Hao, Hongkun, Yu, Kai, Chen, Lingjun, Wang, Rui
In this study, we investigate the reliability of Large Language Models (LLMs) in professing human-like personality traits through responses to personality questionnaires. Our goal is to evaluate the consistency between LLMs' professed personality…
External link:
http://arxiv.org/abs/2402.14679
Author:
He, Zhiwei, Zhou, Binglin, Hao, Hongkun, Liu, Aiwei, Wang, Xing, Tu, Zhaopeng, Zhang, Zhuosheng, Wang, Rui
Text watermarking technology aims to tag and identify content produced by large language models (LLMs) to prevent misuse. In this study, we introduce the concept of cross-lingual consistency in text watermarking, which assesses the ability of text watermarks…
External link:
http://arxiv.org/abs/2402.14007
Author:
Li, Chunyi, Wu, Haoning, Zhang, Zicheng, Hao, Hongkun, Zhang, Kaiwei, Bai, Lei, Liu, Xiaohong, Min, Xiongkuo, Lin, Weisi, Zhai, Guangtao
With the rapid evolution of the Text-to-Image (T2I) model in recent years, their unsatisfactory generation results have become a challenge. However, uniformly refining AI-Generated Images (AIGIs) of different qualities not only limited optimization…
External link:
http://arxiv.org/abs/2401.01117
Large language models (LLMs) have made significant advancements in natural language processing and are concurrently extending the language ability to other modalities, such as speech and vision. Nevertheless, most of the previous work focuses on…
External link:
http://arxiv.org/abs/2401.00246
Author:
Zhu, Wenhong, Hao, Hongkun, He, Zhiwei, Song, Yunze, Zhang, Yumeng, Hu, Hanxu, Wei, Yiran, Wang, Rui, Lu, Hongyuan
We are currently in an era of fierce competition among various large language models (LLMs) continuously pushing the boundaries of benchmark performance. However, genuinely assessing the capabilities of these LLMs has become a challenging and critical…
External link:
http://arxiv.org/abs/2311.09154
The decoding algorithm is critical for open-ended text generation, transforming latent representations into coherent and meaningful outputs. This paper investigates the self-reinforcement effect in text generation and the effectiveness of a repetition…
External link:
http://arxiv.org/abs/2310.14971