Showing 1 - 10 of 3,281 for search: '"Wang, Weiping"'
Author:
Xu, Ce; Wang, Weiping
Published in:
Comptes Rendus. Mathématique, Vol 361, Iss G6, Pp 979-1010 (2023)
In this paper, we study the alternating Euler $T$-sums and $\tilde{S}$-sums, which are infinite series involving (alternating) odd harmonic numbers, and have similar forms and close relations to the Dirichlet beta functions. By using the method of re…
External link:
https://doaj.org/article/99b16ab83b2c4ee58533f05c56bba5eb
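For orientation, a hedged sketch of the objects this abstract names, written in common conventions that may differ from the paper's own normalizations: the odd harmonic numbers $h_n^{(p)}$, a representative (non-alternating) linear Euler $T$-sum, and the Dirichlet beta function.

\[
  h_n^{(p)} = \sum_{k=1}^{n} \frac{1}{(2k-1)^{p}}, \qquad
  T(p;q) = \sum_{n=1}^{\infty} \frac{h_n^{(p)}}{(2n-1)^{q}}, \qquad
  \beta(s) = \sum_{n=0}^{\infty} \frac{(-1)^{n}}{(2n+1)^{s}}.
\]

The alternating variants insert a sign such as $(-1)^{n-1}$ into the outer sum, which is what produces the close relation to $\beta(s)$.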
The advent of large language models (LLMs) has significantly propelled the advancement of Role-Playing Agents (RPAs). However, current RPAs predominantly focus on mimicking a character's fundamental attributes while neglecting the repl…
External link:
http://arxiv.org/abs/2411.02457
Author:
Liu, Yufan; An, Jinyang; Zhang, Wanqian; Li, Ming; Wu, Dayan; Gu, Jingzi; Lin, Zheng; Wang, Weiping
The remarkable development of text-to-image generation models has raised notable security concerns, such as the infringement of portrait rights and the generation of inappropriate content. Concept erasure has been proposed to remove the model's knowl…
External link:
http://arxiv.org/abs/2410.09140
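As generic background on what concept erasure typically optimizes, a sketch in the negative-guidance style of fine-tuning (not necessarily this paper's method; eps_theta, eps_frozen, and the embedding arguments are illustrative stand-ins):

import torch
import torch.nn.functional as F

def erasure_loss(eps_theta, eps_frozen, x_t, t, concept_emb, null_emb, eta=1.0):
    # Illustrative concept-erasure objective: steer the trainable model's
    # noise prediction on the concept prompt toward the frozen model's
    # prediction guided *away* from that concept.
    with torch.no_grad():
        e_null = eps_frozen(x_t, t, null_emb)      # unconditional prediction
        e_conc = eps_frozen(x_t, t, concept_emb)   # concept-conditioned prediction
        target = e_null - eta * (e_conc - e_null)  # negatively guided target
    pred = eps_theta(x_t, t, concept_emb)          # trainable model's prediction
    return F.mse_loss(pred, target)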
Author:
Yang, Chenxu; Jia, Ruipeng; Gu, Naibin; Lin, Zheng; Chen, Siyuan; Pang, Chao; Yin, Weichong; Sun, Yu; Wu, Hua; Wang, Weiping
Direct Preference Optimization (DPO) is an effective preference optimization algorithm. However, DPO-tuned models tend to overfit on the dispreferred samples, manifested as overly long generations lacking diversity. While recent regularization approaches have endeavored to allev…
External link:
http://arxiv.org/abs/2409.14836
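For reference, the standard DPO objective (Rafailov et al., 2023) that such regularization methods build on; a minimal sketch computing it from summed token log-probabilities, with all tensor names illustrative:

import torch.nn.functional as F

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    # Implicit rewards: log-prob ratios of the policy against a frozen
    # reference, for the chosen (w) and rejected (l) responses.
    ratio_w = logp_w - ref_logp_w
    ratio_l = logp_l - ref_logp_l
    # Maximize the margin between chosen and rejected implicit rewards.
    return -F.logsigmoid(beta * (ratio_w - ratio_l)).mean()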
The storage and recall of factual associations in auto-regressive transformer language models (LMs) have drawn a great deal of attention, inspiring knowledge editing by directly modifying the located model weights. Most editing works achieve knowledg…
External link:
http://arxiv.org/abs/2408.15091
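As background on the locate-and-edit family this abstract refers to, a hedged sketch of a ROME-style rank-one weight update, simplified to an identity key covariance (not necessarily this paper's procedure):

import torch

def rank_one_edit(W, k, v_new):
    # W:     (d_out, d_in) located MLP projection storing the association
    # k:     (d_in,)  key activation for the edited fact's subject
    # v_new: (d_out,) value vector encoding the new fact
    residual = v_new - W @ k                     # what the edit must add at key k
    update = torch.outer(residual, k) / (k @ k)  # rank-one correction
    return W + update                            # edited weight: W' @ k == v_new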
Amid the rapid development of multimodal large language models, a crucial ingredient is fair and accurate evaluation of their multimodal comprehension abilities. Although Visual Question Answering (VQA) could serve as a developed test field, limi…
External link:
http://arxiv.org/abs/2408.00300
Large Language Models (LLMs) have demonstrated exceptional proficiency in mathematical reasoning tasks due to their extensive parameter counts and training on vast datasets. Despite these capabilities, deploying LLMs is hindered by their computationa…
External link:
http://arxiv.org/abs/2407.10167
A model inversion (MI) attack reconstructs the private training data of a target model given its output, posing a significant threat to deep learning models and data privacy. On the one hand, most existing MI methods focus on searching for latent codes…
External link:
http://arxiv.org/abs/2407.08127
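A hedged sketch of the latent-code search that GAN-based MI attacks typically perform (illustrative only; target_model, generator, and latent_dim are stand-ins, and the paper's actual method may differ):

import torch

def invert_class(target_model, generator, class_id, steps=500, lr=0.05):
    # Search the generator's latent space for an input that the target
    # classifier assigns to class_id with high confidence.
    z = torch.randn(1, generator.latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        x = generator(z)                        # candidate reconstruction
        loss = -target_model(x)[0, class_id]    # maximize the target-class logit
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator(z).detach()                # reconstructed, training-data-like sample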
Author:
Yang, Chenxu; Lin, Zheng; Tian, Chong; Pang, Liang; Wang, Lanrui; Tong, Zhengyang; Ho, Qirong; Cao, Yanan; Wang, Weiping
Grounding responses in external knowledge can enhance the factuality of dialogue generation. However, excessive emphasis on it might result in a lack of engaging and diverse expressions. Through the introduction of randomness in sampling, current a…
External link:
http://arxiv.org/abs/2407.05718
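For context on where the randomness mentioned above enters decoding, a generic temperature-plus-nucleus sampling step (a standard technique, not this paper's method; logits is a 1-D vocabulary tensor):

import torch

def sample_next_token(logits, temperature=0.9, top_p=0.95):
    # Temperature scaling and top-p (nucleus) filtering: the usual
    # sources of randomness in dialogue generation.
    probs = torch.softmax(logits / temperature, dim=-1)
    sorted_p, sorted_idx = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_p, dim=-1)
    keep = cumulative - sorted_p < top_p           # tokens inside the nucleus
    sorted_p = torch.where(keep, sorted_p, torch.zeros_like(sorted_p))
    sorted_p = sorted_p / sorted_p.sum()           # renormalize over the nucleus
    return sorted_idx[torch.multinomial(sorted_p, 1)]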
Structured pruning fundamentally reduces the computational and memory overheads of large language models (LLMs) and offers a feasible solution for end-side LLM deployment. Structurally pruned models remain dense and high-precision, highly compatible with…
External link:
http://arxiv.org/abs/2407.05690
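A hedged sketch of what structured pruning does mechanically, using generic magnitude-based channel pruning of one FFN block (this paper's criterion and granularity may differ); the result is a smaller but still-dense pair of matrices:

import torch

def prune_ffn_channels(W_in, W_out, keep_ratio=0.5):
    # W_in:  (d_ff, d_model) up-projection; W_out: (d_model, d_ff) down-projection.
    # Score each intermediate channel by the product of its weight norms.
    scores = W_in.norm(dim=1) * W_out.norm(dim=0)
    k = int(W_in.shape[0] * keep_ratio)
    kept = torch.topk(scores, k).indices.sort().values
    return W_in[kept, :], W_out[:, kept]  # whole channels removed, weights stay dense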