Showing 1 - 10 of 29,881 results for search: '"Liang Chen"'
Author:
Liang, Chen; Yang, Donghua; Liang, Zheng; Liang, Zhiyu; Zhang, Tianle; Xiao, Boyu; Yang, Yuqing; Wang, Wenqi; Wang, Hongzhi
Data analysis focuses on harnessing advanced statistics, programming, and machine learning techniques to extract valuable insights from vast datasets. An increasing volume and variety of research has emerged, addressing datasets of diverse modalities, …
External link:
http://arxiv.org/abs/2501.01631
Author:
Huang, Lianghua; Wang, Wei; Wu, Zhi-Fan; Shi, Yupeng; Liang, Chen; Shen, Tong; Zhang, Han; Dou, Huanzhang; Liu, Yu; Zhou, Jingren
Recent research (arXiv:2410.15027, arXiv:2410.23775) has highlighted the inherent in-context generation capabilities of pretrained diffusion transformers (DiTs), enabling them to seamlessly adapt to diverse visual tasks with minimal or no architectural …
External link:
http://arxiv.org/abs/2412.12571
Author:
Liang, Chen; Huang, Lianghua; Fang, Jingwu; Dou, Huanzhang; Wang, Wei; Wu, Zhi-Fan; Shi, Yupeng; Zhang, Junge; Zhao, Xin; Liu, Yu
Real-world design tasks - such as picture book creation, film storyboard development using character sets, photo retouching, visual effects, and font transfer - are highly diverse and complex, requiring deep interpretation and extraction of various …
External link:
http://arxiv.org/abs/2412.11767
Author:
Huang, Lianghua; Wang, Wei; Wu, Zhi-Fan; Shi, Yupeng; Dou, Huanzhang; Liang, Chen; Feng, Yutong; Liu, Yu; Zhou, Jingren
Recent research (arXiv:2410.15027) has explored the use of diffusion transformers (DiTs) for task-agnostic image generation by simply concatenating attention tokens across images. However, despite substantial computational resources, the fidelity of …
External link:
http://arxiv.org/abs/2410.23775
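As a rough illustration of the token-concatenation idea mentioned in this abstract, the sketch below joins the patch-token sequences of several images into one sequence and runs shared self-attention over it. The shapes, module names, and the plain multi-head attention stand-in are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch: "in-context" generation by concatenating the token
# sequences of several images and letting one self-attention layer attend
# across all of them. Shapes and names are illustrative only.
import torch
import torch.nn as nn

batch, n_images, tokens_per_image, dim = 2, 3, 64, 256

# Patch tokens for each image: (batch, n_images, tokens_per_image, dim)
image_tokens = torch.randn(batch, n_images, tokens_per_image, dim)

# Concatenate along the sequence axis so the transformer sees one long
# sequence covering every image: (batch, n_images * tokens_per_image, dim)
joint_sequence = image_tokens.reshape(batch, n_images * tokens_per_image, dim)

# A DiT-style block would apply self-attention over this joint sequence;
# here a plain multi-head attention layer serves as a stand-in.
attention = nn.MultiheadAttention(embed_dim=dim, num_heads=8, batch_first=True)
mixed, _ = attention(joint_sequence, joint_sequence, joint_sequence)

# Split back into per-image token groups after joint attention.
per_image = mixed.reshape(batch, n_images, tokens_per_image, dim)
print(per_image.shape)  # torch.Size([2, 3, 64, 256])
```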
Author:
Huang, Lianghua; Wang, Wei; Wu, Zhi-Fan; Dou, Huanzhang; Shi, Yupeng; Feng, Yutong; Liang, Chen; Liu, Yu; Zhou, Jingren
While large language models (LLMs) have revolutionized natural language processing with their task-agnostic capabilities, visual generation tasks such as image translation, style transfer, and character customization still rely heavily on supervised, …
External link:
http://arxiv.org/abs/2410.15027
Author:
Yue, Kun; Zhang, Mingshan; Chen, Jingruo; Yu, Chun; Nie, Kexin; Gao, Zhiqi; Yang, Jinghan; Liang, Chen; Shi, Yuanchun
Situational visual impairments (SVIs) significantly impact mobile readability, causing user discomfort and hindering information access. This paper introduces SituFont, a novel just-in-time adaptive intervention (JITAI) system designed to enhance …
External link:
http://arxiv.org/abs/2410.09562
Author:
Zhou, Fang; Huang, Yaning; Liang, Dong; Li, Dai; Zhang, Zhongke; Wang, Kai; Xin, Xiao; Aboelela, Abdallah; Jiang, Zheliang; Wang, Yang; Song, Jeff; Zhang, Wei; Liang, Chen; Li, Huayu; Sun, ChongLin; Yang, Hang; Qu, Lei; Shu, Zhan; Yuan, Mindi; Maccherani, Emanuele; Hayat, Taha; Guo, John; Puvvada, Varna; Pashkevich, Uladzimir
The increasing complexity of deep learning models used for calculating user representations presents significant challenges, particularly with limited computational resources and strict service-level agreements (SLAs). Previous research efforts have …
External link:
http://arxiv.org/abs/2410.06497
Recent advances in generative AI technologies like large language models have boosted the incorporation of AI assistance in writing workflows, leading to the rise of a new paradigm of human-AI co-creation in writing. To understand how people perceive …
External link:
http://arxiv.org/abs/2410.04545
The data scarcity problem is a crucial factor that hampers model performance in IMU-based human motion capture. However, effective data augmentation for IMU-based motion capture is challenging, since it has to capture the physical relations and …
External link:
http://arxiv.org/abs/2409.14101
Chain-of-thought prompting significantly boosts the reasoning ability of large language models but still faces three issues: hallucination, restricted interpretability, and uncontrollable generation. To address these challenges, we present Ag…
External link:
http://arxiv.org/abs/2409.12411
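As a generic illustration of the chain-of-thought prompting technique this abstract refers to (not the specific method the paper introduces), the sketch below contrasts a direct prompt with one that asks the model to reason step by step. The `call_llm` function is a hypothetical placeholder for whatever model client is actually used.

```python
# Minimal illustration of chain-of-thought prompting as a general technique.
# `call_llm` is a hypothetical stand-in, not a real library call.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in a real model client here")

question = "A shop sells pens at 3 for $2. How much do 12 pens cost?"

# Direct prompting: ask for the answer only.
direct_prompt = f"Q: {question}\nA:"

# Chain-of-thought prompting: ask the model to spell out intermediate steps
# before the final answer, which typically helps on multi-step problems.
cot_prompt = (
    f"Q: {question}\n"
    "A: Let's think step by step, then give the final answer on its own line."
)

if __name__ == "__main__":
    print(cot_prompt)  # inspect the prompt; swap in call_llm(cot_prompt) to run
```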