Showing 1 - 10 of 9,641 for search: '"An, Changlong"'
Generative models have shown impressive capabilities in synthesizing high-quality outputs across various domains. However, a persistent challenge is the occurrence of "hallucinations", where the model produces outputs that are plausible but invalid.
External link:
http://arxiv.org/abs/2410.19217
Diffusion models (DMs) have shown promising results on single-image super-resolution and other image-to-image translation tasks. Benefiting from more computational resources and longer inference times, they are able to yield more realistic images. Ex
External link:
http://arxiv.org/abs/2410.17752
Author:
Lin, Tianqianjin, Yan, Pengwei, Song, Kaisong, Jiang, Zhuoren, Kang, Yangyang, Lin, Jun, Yuan, Weikang, Cao, Junjie, Sun, Changlong, Liu, Xiaozhong
Graph foundation models (GFMs) have recently gained significant attention. However, the unique data processing and evaluation setups employed by different studies hinder a deeper understanding of their progress. Additionally, current research tends t
External link:
http://arxiv.org/abs/2410.14961
User Satisfaction Estimation is an important task that is increasingly applied in goal-oriented dialogue systems to estimate whether the user is satisfied with the service. It is observed that whether the user's needs are met often triggers various
External link:
http://arxiv.org/abs/2410.09556
Author:
Yuan, Weikang, Cao, Junjie, Jiang, Zhuoren, Kang, Yangyang, Lin, Jun, Song, Kaisong, Lin, Tianqianjin, Yan, Pengwei, Sun, Changlong, Liu, Xiaozhong
Large Language Models (LLMs) can struggle to fully understand legal theories and to perform complex legal reasoning tasks. In this study, we introduce a challenging task (confusing charge prediction) to better evaluate LLMs' understanding of legal the
External link:
http://arxiv.org/abs/2410.02507
Author:
Liu, Chengyuan, Wang, Shihang, Qing, Lizhi, Kuang, Kun, Kang, Yangyang, Sun, Changlong, Wu, Fei
While Large Language Models (LLMs) demonstrate impressive generation abilities, they frequently struggle when it comes to specialized domains due to their limited domain-specific knowledge. Studies on domain-specific LLMs resort to expanding the voca
External link:
http://arxiv.org/abs/2410.01188
Author:
Li, Qiongxiu, Luo, Lixia, Gini, Agnese, Ji, Changlong, Hu, Zhanhao, Li, Xiao, Fang, Chengfang, Shi, Jie, Hu, Xiaolin
Federated Learning (FL) has emerged as a popular paradigm for collaborative learning among multiple parties. It is considered privacy-friendly because local data remains on personal devices, and only intermediate parameters -- such as gradients or mo
External link:
http://arxiv.org/abs/2409.14260
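The snippet above names the core mechanism of FL: raw data stays on each device, and only intermediate parameters travel to an aggregator. Below is a minimal sketch of that exchange, assuming a plain least-squares model with one local gradient step per communication round; the names local_step and fed_avg are illustrative and not taken from the paper.

# Minimal sketch of the federated exchange described in the abstract:
# each client keeps its raw (X, y) and sends back only updated parameters,
# which the server averages.  Assumed toy setup, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

def local_step(w, X, y, lr=0.1):
    # One gradient step on a client's private data; only w leaves the client.
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def fed_avg(updates):
    # Server-side aggregation: average the received parameter vectors.
    return np.mean(updates, axis=0)

true_w = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(3):                       # three clients with private data
    X = rng.normal(size=(50, 3))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(3)                          # global model held by the server
for _ in range(100):                     # communication rounds
    updates = [local_step(w, X, y) for X, y in clients]   # parameters only
    w = fed_avg(updates)
print(w)                                 # approaches true_w; X and y are never shared

The point of the sketch is the data flow: per round the server sees only the three parameter vectors, which are exactly the "intermediate parameters" the abstract refers to.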
Decentralized Federated Learning (DFL) has garnered attention for its robustness and scalability compared to Centralized Federated Learning (CFL). While DFL is commonly believed to offer privacy advantages due to the decentralized control of sensitiv
External link:
http://arxiv.org/abs/2409.14261
Author:
Liu, Chengyuan, Wang, Shihang, Zhao, Fubang, Kuang, Kun, Kang, Yangyang, Lu, Weiming, Sun, Changlong, Wu, Fei
Information Extraction (IE) and Text Classification (CLS) serve as the fundamental pillars of NLU, with both disciplines relying on analyzing input sequences to categorize outputs into pre-established schemas. However, there is no existing encoder-ba
External link:
http://arxiv.org/abs/2409.05275
For a graph $G$, let $\mu_k(G):=\min~\{\max_{x\in S}d_G(x):~S\in \mathcal{S}_k\}$, where $\mathcal{S}_k$ is the set consisting of all independent sets $\{u_1,\ldots,u_k\}$ of $G$ such that some vertex, say $u_i$ ($1\leq i\leq k$), is at distance two
External link:
http://arxiv.org/abs/2407.19149
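The defining condition on $\mathcal{S}_k$ is cut off in the snippet, but the min-max structure of $\mu_k$ is already clear: over whichever independent $k$-sets qualify, record each set's largest degree and take the smallest such value. A purely structural illustration with a hypothetical family (not tied to any particular graph): suppose $\mathcal{S}_2=\{S_1,S_2,S_3\}$ with degree multisets $\{d_G(x):x\in S_1\}=\{1,4\}$, $\{d_G(x):x\in S_2\}=\{2,3\}$ and $\{d_G(x):x\in S_3\}=\{3,5\}$. Then
\[
\mu_2(G)=\min\bigl\{\max\{1,4\},\,\max\{2,3\},\,\max\{3,5\}\bigr\}=\min\{4,3,5\}=3,
\]
so the minimum is attained by $S_2$, the qualifying set whose largest degree is smallest.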