Showing 1 - 10 of 325 results for search: "Niu, Di"
One key challenge to video restoration is to model the transition dynamics of video frames governed by motion. In this work, we propose TURTLE to learn the truncated causal history model for efficient and high-performing video restoration. …
External link:
http://arxiv.org/abs/2410.03936
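The entry above describes learning a truncated causal history of past frames to drive restoration of the current frame. The sketch below is only a minimal illustration of that idea, not the TURTLE architecture: it keeps a fixed-length buffer of past frame features and conditions the restoration of each frame on that buffer. All module names and sizes are assumptions.

```python
# Illustrative sketch only: a fixed-length ("truncated") causal history of past
# frame features used when restoring the current frame. Module names and sizes
# are hypothetical and do NOT reproduce the TURTLE architecture from the paper.
from collections import deque
import torch
import torch.nn as nn

class TruncatedHistoryRestorer(nn.Module):
    def __init__(self, channels: int = 32, history_len: int = 4):
        super().__init__()
        self.history_len = history_len
        self.encode = nn.Conv2d(3, channels, 3, padding=1)                  # per-frame features
        self.fuse = nn.Conv2d(channels * (history_len + 1), channels, 1)    # mix history + current
        self.decode = nn.Conv2d(channels, 3, 3, padding=1)                  # back to RGB

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        """frames: (T, 3, H, W) degraded video; returns restored frames of the same shape."""
        history = deque(maxlen=self.history_len)                            # truncated causal buffer
        outputs = []
        for frame in frames:
            feat = self.encode(frame.unsqueeze(0))
            # Zero-pad until the buffer is full so the fuse layer always sees a fixed width.
            past = list(history) + [torch.zeros_like(feat)] * (self.history_len - len(history))
            fused = self.fuse(torch.cat([feat] + past, dim=1))
            outputs.append(self.decode(fused).squeeze(0))
            history.appendleft(feat.detach())                               # only past frames are visible
        return torch.stack(outputs)

# Toy usage: an 8-frame clip of 64x64 RGB frames.
restored = TruncatedHistoryRestorer()(torch.rand(8, 3, 64, 64))
```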
Large Language Models (LLMs) require precise alignment with complex instructions to optimize their performance in real-world applications. As the demand for refined instruction tuning data increases, traditional methods that evolve simple seed instructions …
External link:
http://arxiv.org/abs/2410.02795
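The snippet above mentions evolving simple seed instructions into refined instruction-tuning data. As a rough illustration of that general idea only (not the method proposed in the paper), the sketch below rewrites a seed instruction through repeated LLM calls; the prompt wording and the `complete` callable are placeholders.

```python
# Illustrative sketch only: "evolving" a simple seed instruction into progressively
# more constrained variants by prompting an LLM. The prompt wording and the
# `complete` callable are placeholders, not the method proposed in the paper.
from typing import Callable

EVOLVE_PROMPT = (
    "Rewrite the following instruction so that it is more specific and adds "
    "one realistic constraint, without changing its core intent.\n\n"
    "Instruction: {seed}\n\nRewritten instruction:"
)

def evolve_instruction(seed: str, complete: Callable[[str], str], rounds: int = 3) -> list[str]:
    """Return the seed plus `rounds` progressively harder variants."""
    lineage = [seed]
    for _ in range(rounds):
        lineage.append(complete(EVOLVE_PROMPT.format(seed=lineage[-1])).strip())
    return lineage

# Example with a stub "LLM" so the sketch runs without any API access.
if __name__ == "__main__":
    stub = lambda prompt: "Summarize the article in exactly three bullet points, citing one source."
    print(evolve_instruction("Summarize the article.", stub, rounds=1))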
Authors:
Jiang, Liyao, Hassanpour, Negar, Salameh, Mohammad, Singamsetti, Mohan Sai, Sun, Fengyu, Lu, Wei, Niu, Di
Text-to-image (T2I) diffusion models have demonstrated impressive capabilities in generating high-quality images given a text prompt. However, ensuring the prompt-image alignment remains a considerable challenge, i.e., generating images that faithfully …
External link:
http://arxiv.org/abs/2408.11706
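Prompt-image alignment, as mentioned in the entry above, is commonly approximated by an image-text similarity score. The sketch below uses CLIP cosine similarity as such a proxy; it is a generic baseline metric, not the alignment technique from the paper.

```python
# Illustrative sketch only: scoring prompt-image alignment with CLIP cosine
# similarity, a common proxy metric; this is not the method from the paper.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def alignment_score(image: Image.Image, prompt: str) -> float:
    """Cosine similarity between CLIP embeddings of the image and the prompt."""
    inputs = processor(text=[prompt], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return float((img * txt).sum())
```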
Authors:
Samadi, Mohammadreza, Han, Fred X., Salameh, Mohammad, Wu, Hao, Sun, Fengyu, Zhou, Chunhua, Niu, Di
Diffusion models have demonstrated strong performance in generative tasks, making them ideal candidates for image editing. Recent studies highlight their ability to apply desired edits effectively by following textual instructions, yet two key challenges …
External link:
http://arxiv.org/abs/2408.08495
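The entry above concerns editing an image by following a textual instruction with a diffusion model. To illustrate the task setting only (this is not the model or method proposed in the paper), the sketch below calls the publicly available InstructPix2Pix pipeline from the diffusers library; the file names are placeholders and a CUDA GPU is assumed.

```python
# Illustrative sketch only: instruction-driven image editing with an off-the-shelf
# InstructPix2Pix pipeline, shown to illustrate the task setting described above.
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from PIL import Image

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

source = Image.open("photo.png").convert("RGB")      # placeholder input image
edited = pipe(
    prompt="make it look like a watercolor painting",  # textual editing instruction
    image=source,
    num_inference_steps=20,
    image_guidance_scale=1.5,   # how closely the edit should stick to the source image
).images[0]
edited.save("edited.png")
```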
The effective alignment of Large Language Models (LLMs) with precise instructions is essential for their application in diverse real-world scenarios. Current methods focus on enhancing the diversity and complexity of training and evaluation samples, …
External link:
http://arxiv.org/abs/2406.11301
Inductive representation learning on temporal heterogeneous graphs is crucial for scalable deep learning on heterogeneous information networks (HINs) which are time-varying, such as citation networks. However, most existing approaches are not inductive …
External link:
http://arxiv.org/abs/2405.08013
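A temporal heterogeneous graph, as described in the entry above, has typed nodes and typed, time-stamped edges (e.g., authors writing papers, papers citing papers). The sketch below is only a minimal data-structure illustration of that setting, with hypothetical field names; it is not the representation-learning model from the paper.

```python
# Illustrative sketch only: a minimal container for a temporal heterogeneous graph
# (typed nodes, typed time-stamped edges). Field names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class TemporalHIN:
    node_types: dict[int, str] = field(default_factory=dict)                # node id -> type ("paper", "author", ...)
    edges: list[tuple[int, int, str, float]] = field(default_factory=list)  # (src, dst, relation, timestamp)

    def add_node(self, node_id: int, node_type: str) -> None:
        self.node_types[node_id] = node_type

    def add_edge(self, src: int, dst: int, relation: str, t: float) -> None:
        self.edges.append((src, dst, relation, t))

    def neighbors_before(self, node_id: int, t: float) -> list[tuple[int, str, float]]:
        """Temporal neighborhood: outgoing edges of `node_id` that occurred before time t."""
        return [(d, r, ts) for s, d, r, ts in self.edges if s == node_id and ts < t]

# Tiny citation-network example: an author writes a paper, which later cites another paper.
g = TemporalHIN()
g.add_node(0, "author"); g.add_node(1, "paper"); g.add_node(2, "paper")
g.add_edge(0, 1, "writes", t=2020.0)
g.add_edge(1, 2, "cites", t=2021.0)
print(g.neighbors_before(1, t=2022.0))
```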
Published in:
ICML 2024
Understanding and explaining the predictions of Graph Neural Networks (GNNs) is crucial for enhancing their safety and trustworthiness. Subgraph-level explanations are gaining attention for their intuitive appeal. However, most existing subgraph-level …
External link:
http://arxiv.org/abs/2405.01762
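The entry above concerns subgraph-level explanations for GNN predictions, i.e., identifying a small subgraph that accounts for a prediction. The sketch below illustrates that notion with a brute-force search for the smallest edge subset that preserves the model's output; the `predict` callable stands in for a trained GNN, and this is not the explanation method proposed in the paper.

```python
# Illustrative sketch only: a brute-force notion of a subgraph-level explanation --
# the smallest edge subset that, kept on its own, preserves the model's prediction.
from itertools import combinations
from typing import Callable, Iterable

Edge = tuple[int, int]

def subgraph_explanation(
    edges: list[Edge],
    predict: Callable[[Iterable[Edge]], int],
    target: int,
    max_size: int = 3,
) -> list[Edge] | None:
    """Return the smallest edge subset (up to max_size) on which `predict` still outputs `target`."""
    for k in range(1, max_size + 1):
        for subset in combinations(edges, k):
            if predict(subset) == target:
                return list(subset)
    return None

# Toy usage: a stub "GNN" that predicts class 1 whenever edge (0, 1) is present.
edges = [(0, 1), (1, 2), (2, 3)]
stub_gnn = lambda subset: int((0, 1) in set(subset))
print(subgraph_explanation(edges, stub_gnn, target=1))   # -> [(0, 1)]
```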
Authors:
Mills, Keith G., Han, Fred X., Salameh, Mohammad, Lu, Shengyao, Zhou, Chunhua, He, Jiao, Sun, Fengyu, Niu, Di
Neural Architecture Search is a costly practice. The fact that a search space can span a vast number of design choices, with each architecture evaluation taking nontrivial overhead, makes it hard for an algorithm to sufficiently explore candidate networks …
External link:
http://arxiv.org/abs/2403.13293
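The entry above points out that NAS is costly because the search space is huge and each architecture evaluation is expensive. The toy sketch below makes that concrete: random search over a small hypothetical space under a fixed evaluation budget, with `evaluate` standing in for a full training run; it is not the search method from the paper.

```python
# Illustrative sketch only: why NAS is costly. A toy search space of
# (depth, width, kernel size) choices is explored by random sampling under a
# strict budget, since each call to `evaluate` would normally mean training a network.
import random

SEARCH_SPACE = {
    "depth": [8, 14, 20],
    "width": [32, 64, 128],
    "kernel": [3, 5, 7],
}

def sample_architecture(rng: random.Random) -> dict:
    return {name: rng.choice(options) for name, options in SEARCH_SPACE.items()}

def evaluate(arch: dict, rng: random.Random) -> float:
    """Stand-in for a full training run; in practice this is the expensive step."""
    return rng.random() + 0.001 * arch["depth"]

def random_search(budget: int = 10, seed: int = 0) -> tuple[dict, float]:
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(budget):                      # each iteration "pays" one full evaluation
        arch = sample_architecture(rng)
        score = evaluate(arch, rng)
        if score > best_score:
            best, best_score = arch, score
    return best, best_score

print(random_search())
```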
Ensuring factual consistency between the summary and the original document is paramount in summarization tasks. Consequently, considerable effort has been dedicated to detecting inconsistencies. With the advent of Large Language Models (LLMs), recent …
External link:
http://arxiv.org/abs/2403.07557
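The entry above concerns detecting factual inconsistencies between a summary and its source document. As a rough illustration of one common baseline (not the detector studied in the paper), the sketch below flags summary sentences that an off-the-shelf NLI model does not judge as entailed by the document; the naive sentence splitting and the threshold are simplifying assumptions.

```python
# Illustrative sketch only: a generic NLI-based baseline for factual-consistency
# checking -- a summary sentence is flagged when the source document does not
# entail it. This is a common baseline, not the detection method from the paper.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

def inconsistent_sentences(document: str, summary: str, threshold: float = 0.5) -> list[str]:
    flagged = []
    for sentence in summary.split(". "):         # naive sentence splitting for brevity
        if not sentence.strip():
            continue
        out = nli({"text": document, "text_pair": sentence})
        result = out[0] if isinstance(out, list) else out
        # roberta-large-mnli labels: CONTRADICTION / NEUTRAL / ENTAILMENT
        if not (result["label"] == "ENTAILMENT" and result["score"] >= threshold):
            flagged.append(sentence)
    return flagged
```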
The reasoning performance of Large Language Models (LLMs) on a wide range of problems critically relies on chain-of-thought prompting, which involves providing a few chain-of-thought demonstrations as exemplars in prompts. Recent work, e.g., Tree of Thoughts, …
External link:
http://arxiv.org/abs/2402.11140
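The entry above describes chain-of-thought prompting as supplying a few worked demonstrations as exemplars before the new question. The sketch below assembles such a few-shot prompt; the exemplar and the `complete` callable are placeholders, and this is not the prompting method proposed in the paper.

```python
# Illustrative sketch only: assembling a few-shot chain-of-thought prompt, where
# worked demonstrations (question, step-by-step reasoning, answer) precede the
# new question. The exemplar and the `complete` callable are placeholders.
from typing import Callable

EXEMPLARS = [
    {
        "question": "A pencil costs 2 dollars and a pen costs 3 dollars. What do 2 pencils and 1 pen cost?",
        "reasoning": "2 pencils cost 2 * 2 = 4 dollars. Adding one pen gives 4 + 3 = 7 dollars.",
        "answer": "7 dollars",
    },
]

def chain_of_thought_prompt(question: str) -> str:
    parts = []
    for ex in EXEMPLARS:                                   # the few-shot demonstrations
        parts.append(f"Q: {ex['question']}\nA: Let's think step by step. "
                     f"{ex['reasoning']} The answer is {ex['answer']}.")
    parts.append(f"Q: {question}\nA: Let's think step by step.")
    return "\n\n".join(parts)

def answer(question: str, complete: Callable[[str], str]) -> str:
    """Send the assembled prompt to any completion function (e.g., an LLM client)."""
    return complete(chain_of_thought_prompt(question))

print(chain_of_thought_prompt("A book costs 5 dollars. What do 3 books cost?"))
```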