Showing 1 - 10 of 1,100 for the search: '"LIANG XiaoYu"'
Author:
Mu, Lianrui, Zhou, Xingze, Zheng, Wenjie, Ye, Jiangnan, Liang, Xiaoyu, Yang, Yuchen, Bai, Jianhong, Zhuang, Jiedong, Hu, Haoji
Landmark-guided character animation generation is an important field. Generating character animations with facial features consistent with a reference image remains a significant challenge in conditional video generation, especially involving complex …
External link:
http://arxiv.org/abs/2412.08976
The success of large language models (LLMs) has encouraged many parties to fine-tune LLMs on their own private data. However, this practice raises privacy concerns due to the memorization of LLMs. Existing solutions, such as utilizing synthetic data for …
External link:
http://arxiv.org/abs/2410.05725
Author:
Wang, Ming, Liu, Yuanzhong, Liang, Xiaoyu, Huang, Yijie, Wang, Daling, Yang, Xiaocui, Shen, Sijia, Feng, Shi, Zhang, Xiaoming, Guan, Chaofeng, Zhang, Yifei
LLMs have demonstrated commendable performance across diverse domains. Nevertheless, formulating high-quality prompts to assist them in their work poses a challenge for non-AI experts. Existing research in prompt engineering suggests somewhat scattered …
External link:
http://arxiv.org/abs/2409.13449
Author:
Liang, Xiaoyu, Yu, Jiayuan, Mu, Lianrui, Zhuang, Jiedong, Hu, Jiaqi, Yang, Yuchen, Ye, Jiangnan, Lu, Lu, Chen, Jian, Hu, Haoji
Although Visual-Language Models (VLMs) have shown impressive capabilities in tasks like visual question answering and image captioning, they still struggle with hallucinations. Analysis of attention distribution in these models shows that VLMs tend to …
External link:
http://arxiv.org/abs/2409.06485
Text-driven video editing has recently experienced rapid development. Despite this, evaluating edited videos remains a considerable challenge. Current metrics often fail to align with human perception, and effective quantitative metrics for video …
External link:
http://arxiv.org/abs/2408.11481
CLIP has achieved impressive zero-shot performance after pre-training on a large-scale dataset consisting of paired image-text data. Previous works have utilized CLIP by incorporating manually designed visual prompts like colored circles and blur masks …
External link:
http://arxiv.org/abs/2407.05578
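As an aside on the visual-prompting technique this snippet refers to, a minimal sketch of zero-shot scoring with a hand-drawn red-circle prompt might look as follows; the model choice, image path, region coordinates, and candidate labels are illustrative assumptions, not details taken from the paper:

    import torch
    import clip                                        # OpenAI CLIP package
    from PIL import Image, ImageDraw

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, preprocess = clip.load("ViT-B/32", device=device)

    # Visual prompt: draw a red circle around a hypothetical region of interest
    # in a placeholder image, steering CLIP's attention toward that region.
    image = Image.open("example.jpg").convert("RGB")
    ImageDraw.Draw(image).ellipse((50, 50, 200, 200), outline=(255, 0, 0), width=5)

    texts = ["a photo of a dog", "a photo of a cat"]   # assumed candidate labels
    image_input = preprocess(image).unsqueeze(0).to(device)
    text_input = clip.tokenize(texts).to(device)

    with torch.no_grad():
        image_feat = model.encode_image(image_input)
        text_feat = model.encode_text(text_input)
        image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
        text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
        probs = (100.0 * image_feat @ text_feat.T).softmax(dim=-1)

    print(probs)   # higher probability for the label matching the circled content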
Author:
Xu, Boxin, Zhong, Luoyan, Zhang, Grace, Liang, Xiaoyu, Virtue, Diego, Madan, Rishabh, Bhattacharjee, Tapomayukh
Whole-arm tactile feedback is crucial for robots to ensure safe physical interaction with their surroundings. This paper introduces CushSense, a fabric-based soft and stretchable tactile-sensing skin designed for physical human-robot interaction (pHRI) …
External link:
http://arxiv.org/abs/2405.03155
Author:
Liu, Xiaohong, Min, Xiongkuo, Zhai, Guangtao, Li, Chunyi, Kou, Tengchuan, Sun, Wei, Wu, Haoning, Gao, Yixuan, Cao, Yuqin, Zhang, Zicheng, Wu, Xiele, Timofte, Radu, Peng, Fei, Fu, Huiyuan, Ming, Anlong, Wang, Chuanming, Ma, Huadong, He, Shuai, Dou, Zifei, Chen, Shu, Zhang, Huacong, Xie, Haiyi, Wang, Chengwei, Chen, Baoying, Zeng, Jishen, Yang, Jianquan, Wang, Weigang, Fang, Xi, Lv, Xiaoxin, Yan, Jun, Zhi, Tianwu, Zhang, Yabin, Li, Yaohui, Li, Yang, Xu, Jingwen, Liu, Jianzhao, Liao, Yiting, Li, Junlin, Yu, Zihao, Lu, Yiting, Li, Xin, Motamednia, Hossein, Hosseini-Benvidi, S. Farhad, Guan, Fengbin, Mahmoudi-Aznaveh, Ahmad, Mansouri, Azadeh, Gankhuyag, Ganzorig, Yoon, Kihwan, Xu, Yifang, Fan, Haotian, Kong, Fangyuan, Zhao, Shiling, Dong, Weifeng, Yin, Haibing, Zhu, Li, Wang, Zhiling, Huang, Bingchen, Saha, Avinab, Mishra, Sandeep, Gupta, Shashank, Sureddi, Rajesh, Saha, Oindrila, Celona, Luigi, Bianco, Simone, Napoletano, Paolo, Schettini, Raimondo, Yang, Junfeng, Fu, Jing, Zhang, Wei, Cao, Wenzhi, Liu, Limei, Peng, Han, Yuan, Weijun, Li, Zhan, Cheng, Yihang, Deng, Yifan, Li, Haohui, Qu, Bowen, Li, Yao, Luo, Shuqing, Wang, Shunzhou, Gao, Wei, Lu, Zihao, Conde, Marcos V., Wang, Xinrui, Chen, Zhibo, Liao, Ruling, Ye, Yan, Wang, Qiulin, Li, Bing, Zhou, Zhaokun, Geng, Miao, Chen, Rui, Tao, Xin, Liang, Xiaoyu, Sun, Shangkun, Ma, Xingyuan, Li, Jiaze, Yang, Mengduo, Xu, Haoran, Zhou, Jie, Zhu, Shiding, Yu, Bohan, Chen, Pengfei, Xu, Xinrui, Shen, Jiabin, Duan, Zhichao, Asadi, Erfan, Liu, Jiahe, Yan, Qi, Qu, Youran, Zeng, Xiaohui, Wang, Lele, Liao, Renjie
This paper reports on the NTIRE 2024 Quality Assessment of AI-Generated Content Challenge, held in conjunction with the New Trends in Image Restoration and Enhancement Workshop (NTIRE) at CVPR 2024. The challenge aims to address a major …
External link:
http://arxiv.org/abs/2404.16687
The recent advancements in Text-to-Video Artificial Intelligence Generated Content (AIGC) have been remarkable. Compared with traditional videos, the assessment of AIGC videos encounters various challenges: visual inconsistencies that defy common sense …
External link:
http://arxiv.org/abs/2404.13573
Author:
Wang, Ming, Liu, Yuanzhong, Liang, Xiaoyu, Li, Songlian, Huang, Yijie, Zhang, Xiaoming, Shen, Sijia, Guan, Chaofeng, Wang, Daling, Feng, Shi, Zhang, Huaiwen, Zhang, Yifei, Zheng, Minghui, Zhang, Chi
LLMs have demonstrated commendable performance across diverse domains. Nevertheless, formulating high-quality prompts to instruct LLMs proficiently poses a challenge for non-AI experts. Existing research in prompt engineering suggests somewhat scattered …
External link:
http://arxiv.org/abs/2402.16929