Showing 1 - 10 of 7,756 for search: '"Luo, Yong"'
Author:
Zhang, Ziyi, Shen, Li, Zhang, Sen, Ye, Deheng, Luo, Yong, Shi, Miaojing, Du, Bo, Tao, Dacheng
Aligning diffusion models with downstream objectives is essential for their practical applications. However, standard alignment methods often struggle with step generalization when directly applied to few-step diffusion models, leading to inconsistent…
External link:
http://arxiv.org/abs/2411.11727
We study the following one-dimensional cubic nonlinear Schrödinger system: \[ u_i'' + 2\Big(\sum_{k=1}^N u_k^2\Big)u_i = -\mu_i u_i \ \ \mbox{in} \ \mathbb{R}, \ \ i = 1, 2, \cdots, N, \] where $\mu_1 \leq \mu_2 \leq \cdots \leq \mu_N < 0$ and $N \ge 2$. In…
External link:
http://arxiv.org/abs/2411.10748
Author:
Shen, Li, Tang, Anke, Yang, Enneng, Guo, Guibing, Luo, Yong, Zhang, Lefei, Cao, Xiaochun, Du, Bo, Tao, Dacheng
Multi-task learning (MTL) leverages a shared model to accomplish multiple tasks and facilitate knowledge transfer. Recent research on task arithmetic-based MTL demonstrates that merging the parameters of independently fine-tuned models can effectively…
External link:
http://arxiv.org/abs/2410.21804
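The task-arithmetic merging this snippet refers to can be made concrete with a minimal sketch over PyTorch state dicts, assuming identically shaped models; the scaling coefficient `lam` and the plain unweighted sum over task vectors are illustrative defaults, not the paper's method.

```python
import torch

def task_arithmetic_merge(pretrained, finetuned, lam=0.3):
    """Task-arithmetic merge of fine-tuned models:
    theta_merged = theta_pre + lam * sum_i (theta_i - theta_pre).

    `pretrained` and each entry of `finetuned` are state dicts of the
    same architecture; `lam` is a hypothetical scaling coefficient.
    """
    merged = {k: v.clone().float() for k, v in pretrained.items()}
    for ft in finetuned:
        for k in merged:
            # Each (fine-tuned - pretrained) difference is a "task vector".
            merged[k] += lam * (ft[k].float() - pretrained[k].float())
    return merged

# Hypothetical usage: merge two task-specific fine-tunes of one backbone.
# base = torch.load("pretrained.pt"); fts = [torch.load(p) for p in paths]
# model.load_state_dict(task_arithmetic_merge(base, fts))
```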
Recent advancements in multimodal fusion have witnessed the remarkable success of vision-language (VL) models, which excel in various multimodal applications such as image captioning and visual question answering. However, building VL models requires…
External link:
http://arxiv.org/abs/2410.17779
Quantum Approximate Optimization Algorithm (QAOA) and its variants exhibit immense potential in tackling combinatorial optimization challenges. However, their practical realization confronts a dilemma: the requisite circuit depth for satisfactory performance…
External link:
http://arxiv.org/abs/2409.18692
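To make the depth/performance tension concrete, here is a minimal dense state-vector simulation of depth-p QAOA for MaxCut; the triangle graph, depth, and angles are arbitrary assumptions for illustration, not taken from the paper.

```python
import numpy as np

# MaxCut instance: a triangle graph on n = 3 qubits.
edges = [(0, 1), (1, 2), (0, 2)]
n = 3
dim = 2 ** n

def cut_value(z):
    # MaxCut value of basis state z (qubit q = bit q of z).
    bits = [(z >> q) & 1 for q in range(n)]
    return sum(bits[i] != bits[j] for i, j in edges)

# The cost Hamiltonian is diagonal in the computational basis.
cost = np.array([cut_value(z) for z in range(dim)], dtype=float)

def rx_layer(beta):
    # Mixer e^{-i beta X} applied to every qubit, via Kronecker products.
    rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                   [-1j * np.sin(beta), np.cos(beta)]])
    layer = np.array([[1.0]])
    for _ in range(n):
        layer = np.kron(layer, rx)
    return layer

def qaoa_expectation(gammas, betas):
    state = np.full(dim, 1 / np.sqrt(dim), dtype=complex)  # |+>^n
    for g, b in zip(gammas, betas):
        state = np.exp(-1j * g * cost) * state  # cost layer (diagonal phase)
        state = rx_layer(b) @ state             # mixer layer
    return float(np.real(np.sum(np.abs(state) ** 2 * cost)))

# Depth p is the number of alternating (cost, mixer) layers. With
# optimized angles, increasing p cannot decrease the best achievable
# expectation, which is the depth/performance trade-off in question.
print(qaoa_expectation([0.6], [0.4]))            # p = 1
print(qaoa_expectation([0.6, 0.5], [0.4, 0.3]))  # p = 2
```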
Incremental learning is nontrivial due to severe catastrophic forgetting. Although storing a small amount of data on old tasks during incremental learning is a feasible solution, current strategies still do not 1) adequately address the class bias problem…
External link:
http://arxiv.org/abs/2409.05620
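As a generic baseline for the "store a small amount of old-task data" strategy this snippet mentions, a reservoir-sampled rehearsal buffer keeps a roughly uniform sample of the stream; this is a common sketch, not the paper's strategy.

```python
import random

class ReservoirBuffer:
    """Fixed-size rehearsal memory for incremental learning.
    Reservoir sampling keeps each seen example with equal probability,
    one simple way to limit class bias in the stored memory."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []   # stored (example, label) pairs
        self.seen = 0    # total examples observed so far

    def add(self, example, label):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((example, label))
        else:
            # Keep the new example with probability capacity / seen.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = (example, label)

    def sample(self, batch_size):
        # Mix replayed old-task data into each new-task training batch.
        return random.sample(self.data, min(batch_size, len(self.data)))
```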
Large language models (LLMs) have shown remarkable capabilities in code generation. However, the effects of hallucinations (e.g., output noise) make it particularly challenging for LLMs to generate high-quality code in one pass. In this work, we propose…
External link:
http://arxiv.org/abs/2409.05923
Multimodal large language models (MLLMs) have experienced significant advancements recently, but still struggle to recognize and interpret intricate details in high-resolution (HR) images effectively. While state-of-the-art (SOTA) MLLMs claim to process…
External link:
http://arxiv.org/abs/2408.15556
Deep model training on extensive datasets is increasingly becoming cost-prohibitive, prompting the widespread adoption of deep model fusion techniques to leverage knowledge from pre-existing models. From simple weight averaging to more sophisticated…
External link:
http://arxiv.org/abs/2408.10174
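The "simple weight averaging" baseline this snippet starts from fits in a few lines; a minimal sketch over PyTorch state dicts of identically structured models, where the uniform default weighting is an assumption.

```python
import torch

def average_weights(state_dicts, weights=None):
    """Uniform (or weighted) parameter averaging across models with
    identical architectures, the simplest model-fusion baseline.
    `weights` defaults to a uniform average over all models."""
    n = len(state_dicts)
    if weights is None:
        weights = [1.0 / n] * n
    avg = {}
    for key in state_dicts[0]:
        # Weighted sum of the same parameter tensor across all models.
        avg[key] = sum(w * sd[key] for w, sd in zip(weights, state_dicts))
    return avg

# Hypothetical usage: fuse several fine-tuned checkpoints into one model.
# fused = average_weights([torch.load(p) for p in checkpoint_paths])
```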
In a real federated learning (FL) system, communication overhead for passing model parameters between the clients and the parameter server (PS) is often a bottleneck. Hierarchical federated learning (HFL) that poses multiple edge servers (ESs) between…
External link:
http://arxiv.org/abs/2408.09762
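A minimal sketch of the two-level aggregation HFL performs, assuming size-weighted FedAvg at both the edge-server and parameter-server levels; the group structure and weighting here are illustrative assumptions, not the paper's protocol.

```python
import torch

def fedavg(states, sizes):
    """Size-weighted FedAvg over a list of model state dicts."""
    total = sum(sizes)
    return {k: sum(s[k] * (n / total) for s, n in zip(states, sizes))
            for k in states[0]}

def hierarchical_round(edge_groups):
    """Two-level aggregation: each edge server (ES) averages its own
    clients, then the parameter server (PS) averages the ES models.
    Only one model per ES crosses the ES-to-PS link per round, which
    is how HFL reduces communication on the bottleneck path.

    edge_groups: list of (client_state_dicts, client_sizes) per ES.
    """
    edge_models, edge_sizes = [], []
    for states, sizes in edge_groups:
        edge_models.append(fedavg(states, sizes))  # ES-level average
        edge_sizes.append(sum(sizes))
    return fedavg(edge_models, edge_sizes)         # PS-level average
```

With size weighting at both levels, the two-stage average equals the flat FedAvg over all clients, so the edge layer changes the communication topology rather than the aggregated model.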