Showing 1 - 10 of 16,803 results for the search: '"Gao, Peng"'
Author:
Gao, Peng, Zhao, Liangyi
We establish sharp lower bounds for shifted moments of Dirichlet $L$-functions of fixed modulus under the generalized Riemann hypothesis.
External link:
http://arxiv.org/abs/2411.03692
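For readers unfamiliar with the terminology in the abstract above: a shifted moment of Dirichlet $L$-functions to a fixed modulus $q$ is typically an average, over the characters $\chi \bmod q$, of a product of critical-line values taken at shifted points. The display below is only a sketch of the standard shape of such a moment; the shifts $t_1,\dots,t_k$ and exponents $a_j$ are illustrative and not taken from the abstract.

$$
M(q;\, t_1,\dots,t_k) \;=\; \sum_{\chi \bmod q} \prod_{j=1}^{k} \bigl| L\bigl(\tfrac{1}{2} + i t_j, \chi\bigr) \bigr|^{2a_j},
$$

where $a_j > 0$; a "sharp" lower bound is usually understood to match the conjectured order of magnitude of such a sum.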
Textual descriptions in cyber threat intelligence (CTI) reports, such as security articles and news, are rich sources of knowledge about cyber threats, crucial for organizations to stay informed about the rapidly evolving threat landscape. …
External link:
http://arxiv.org/abs/2410.21060
We introduce a new paradigm for AutoRegressive (AR) image generation, termed Set AutoRegressive Modeling (SAR). SAR generalizes the conventional AR to the next-set setting, i.e., splitting the sequence into arbitrary sets containing multiple tokens, …
External link:
http://arxiv.org/abs/2410.10511
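Regarding the SAR abstract above: the snippet describes replacing next-token prediction with next-set prediction, where the token sequence is partitioned into arbitrary sets and each set is generated conditioned on all previously emitted sets. The toy sketch below only illustrates that factorization; the names `split_into_sets` and `model.log_prob` are hypothetical stand-ins, not the authors' API.

```python
# Toy illustration of a "next-set" autoregressive factorization (not the SAR authors' code).
from typing import List

def split_into_sets(tokens: List[int], set_sizes: List[int]) -> List[List[int]]:
    """Partition `tokens` into consecutive sets with the given (arbitrary) sizes."""
    assert sum(set_sizes) == len(tokens)
    sets, start = [], 0
    for size in set_sizes:
        sets.append(tokens[start:start + size])
        start += size
    return sets

def sequence_log_likelihood(sets: List[List[int]], model) -> float:
    """Sum log p(set_k | set_1, ..., set_{k-1}); `model.log_prob` is a stand-in interface."""
    context, total = [], 0.0
    for current_set in sets:
        total += model.log_prob(current_set, context)  # one step emits a whole set of tokens
        context.extend(current_set)
    return total

# Conventional AR is recovered as the special case where every set has size 1:
#   split_into_sets(tokens, [1] * len(tokens))
```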
Hallucination, a phenomenon where multimodal large language models (MLLMs) tend to generate textual responses that are plausible but unaligned with the image, has become one major hurdle in various MLLM-related applications. …
External link:
http://arxiv.org/abs/2410.09962
Rectified Flow Transformers (RFTs) offer superior training and inference efficiency, making them likely the most viable direction for scaling up diffusion models. However, progress in generation resolution has been relatively slow due to data quality …
External link:
http://arxiv.org/abs/2410.07536
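For context on the rectified-flow abstract above: rectified flow trains a velocity field along a straight path between noise and data, which is what gives the training and sampling efficiency the snippet refers to. The sketch below is the generic rectified-flow objective, not the paper's implementation; the model's call signature `model(x, t)` is assumed.

```python
# Generic rectified-flow training step (illustrative sketch, not RFT-specific code).
import torch
import torch.nn.functional as F

def rectified_flow_loss(model, data: torch.Tensor) -> torch.Tensor:
    noise = torch.randn_like(data)                      # x_0 ~ N(0, I)
    t = torch.rand(data.shape[0], device=data.device)   # uniform time in [0, 1]
    t_ = t.view(-1, *([1] * (data.dim() - 1)))          # reshape for broadcasting
    x_t = (1.0 - t_) * noise + t_ * data                # straight-line interpolation
    target_velocity = data - noise                      # d x_t / d t along the path
    pred_velocity = model(x_t, t)                       # assumed signature: model(x, t)
    return F.mse_loss(pred_velocity, target_velocity)
```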
Model fusing has always been an important topic, especially in an era where large language models (LLM) and multi-modal language models (MLM) with different architectures, parameter sizes and training pipelines, are being created all the time. …
External link:
http://arxiv.org/abs/2410.00363
Author:
Yu, Qiaojun, Huang, Siyuan, Yuan, Xibin, Jiang, Zhengkai, Hao, Ce, Li, Xin, Chang, Haonan, Wang, Junbo, Liu, Liu, Li, Hongsheng, Gao, Peng, Lu, Cewu
Previous studies on robotic manipulation are based on a limited understanding of the underlying 3D motion constraints and affordances. To address these challenges, we propose a comprehensive paradigm, termed UniAff, that integrates 3D object-centric …
External link:
http://arxiv.org/abs/2409.20551
Author:
Li, Xin, Huang, Siyuan, Yu, Qiaojun, Jiang, Zhengkai, Hao, Ce, Zhu, Yimeng, Li, Hongsheng, Gao, Peng, Lu, Cewu
Automating garment manipulation poses a significant challenge for assistive robotics due to the diverse and deformable nature of garments. Traditional approaches typically require separate models for each garment type, which limits scalability and …
External link:
http://arxiv.org/abs/2409.18082
Author:
Lin, Weifeng, Wei, Xinyu, Zhang, Renrui, Zhuo, Le, Zhao, Shitian, Huang, Siyuan, Xie, Junlin, Qiao, Yu, Gao, Peng, Li, Hongsheng
This paper presents a versatile image-to-image visual assistant, PixWizard, designed for image generation, manipulation, and translation based on free-form language instructions. To this end, we cast a variety of vision tasks into a unified …
External link:
http://arxiv.org/abs/2409.15278