Showing 1 - 10 of 70 736 results for search: '"Wei,Chen"'
Recent progress in generative diffusion models has greatly advanced text-to-video generation. While text-to-video models trained on large-scale, diverse datasets can produce varied outputs, these generations often deviate from user preferences, highl…
External link:
http://arxiv.org/abs/2412.14167
Author:
Lu, Taiming, Shu, Tianmin, Xiao, Junfei, Ye, Luoxin, Wang, Jiahao, Peng, Cheng, Wei, Chen, Khashabi, Daniel, Chellappa, Rama, Yuille, Alan, Chen, Jieneng
Understanding, navigating, and exploring the 3D physical real world has long been a central challenge in the development of artificial intelligence. In this work, we take a step toward this goal by introducing GenEx, a system capable of planning comp…
External link:
http://arxiv.org/abs/2412.09624
Multimodal incremental learning needs to digest the information from multiple modalities while concurrently learning new knowledge without forgetting the previously learned information. There are numerous challenges for this task, mainly including th…
External link:
http://arxiv.org/abs/2412.09549
While large vision-language models (LVLMs) have shown impressive capabilities in generating plausible responses correlated with input visual contents, they still suffer from hallucinations, where the generated text inaccurately reflects visual conten…
External link:
http://arxiv.org/abs/2412.06775
Author:
Li, Wei-Chen, Lin, Chun-Yeon
Common imaging techniques for detecting structural defects typically require sampling at more than twice the spatial frequency to achieve a target resolution. This study introduces a novel framework for imaging structural defects using significantly…
External link:
http://arxiv.org/abs/2412.01055
Text-to-image (T2I) models have shown remarkable progress, but their potential to generate harmful content remains a critical concern in the ML community. While various safety mechanisms have been developed, the field lacks systematic tools for evalu…
External link:
http://arxiv.org/abs/2411.16769
We consider regret minimization in low-rank MDPs with fixed transition and adversarial losses. Previous work has investigated this problem under either full-information loss feedback with unknown transitions (Zhao et al., 2024), or bandit loss feedba…
External link:
http://arxiv.org/abs/2411.06739
Author:
Song, Kaidong, Zhou, Jingyuan, Wei, Chen, Ponnuchamy, Ashok, Bappy, Md Omarsany, Liao, Yuxuan, Jiang, Qiang, Du, Yipu, Evans, Connor J., Wyatt, Brian C., O'Sullivan, Thomas, Roeder, Ryan K., Anasori, Babak, Hoffman, Anthony J., Jin, Lihua, Duan, Xiangfeng, Zhang, Yanliang
Stretchable electronics capable of conforming to nonplanar and dynamic human body surfaces are central for creating implantable and on-skin devices for high-fidelity monitoring of diverse physiological signals. While various strategies have been deve…
External link:
http://arxiv.org/abs/2411.03339
Despite significant progress in visual decoding with fMRI data, its high cost and low temporal resolution limit widespread applicability. To address these challenges, we introduce RealMind, a novel EEG-based visual decoding framework that leverages m…
External link:
http://arxiv.org/abs/2410.23754
We consider realizable contextual bandits with general function approximation, investigating how small reward variance can lead to better-than-minimax regret bounds. Unlike in minimax bounds, we show that the eluder dimension $d_\text{elu}$, a compl…
External link:
http://arxiv.org/abs/2410.12713