Showing 1 - 10 of 20 for search: '"Lv, Zheqi"'
Author:
Lv, Zheqi, He, Shaoxuan, Zhan, Tianyu, Zhang, Shengyu, Zhang, Wenqiao, Chen, Jingyuan, Zhao, Zhou, Wu, Fei
Dynamic sequential recommendation (DSR) can generate model parameters based on user behavior to improve the personalization of sequential recommendation under various user preferences. However, it faces the challenges of a large parameter search space…
External link:
http://arxiv.org/abs/2408.00123
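A minimal sketch (not the paper's implementation) of the parameter-generation idea described above: a hypernetwork maps a summary of the user's behavior sequence to the weights of a small item-scoring layer, so each user effectively gets personalized model parameters. All class names and dimensions below are hypothetical.

```python
import torch
import torch.nn as nn

EMB_DIM, N_ITEMS = 32, 100  # hypothetical sizes

class BehaviorHyperNet(nn.Module):
    """Generates per-user scoring weights from the user's behavior sequence."""

    def __init__(self, emb_dim: int, n_items: int):
        super().__init__()
        # Maps a behavior summary to a flattened (n_items x emb_dim) weight matrix.
        self.gen = nn.Linear(emb_dim, n_items * emb_dim)
        self.emb_dim, self.n_items = emb_dim, n_items

    def forward(self, behavior_seq: torch.Tensor) -> torch.Tensor:
        # behavior_seq: (batch, seq_len, emb_dim) embeddings of interacted items.
        user_summary = behavior_seq.mean(dim=1)             # (batch, emb_dim)
        w = self.gen(user_summary)                          # (batch, n_items * emb_dim)
        w = w.view(-1, self.n_items, self.emb_dim)          # generated, per-user parameters
        # Score every item with the user-specific weights.
        return torch.einsum("bd,bnd->bn", user_summary, w)  # (batch, n_items)

scores = BehaviorHyperNet(EMB_DIM, N_ITEMS)(torch.randn(4, 10, EMB_DIM))
print(scores.shape)  # torch.Size([4, 100])
```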
Due to the continuously improving capabilities of mobile edges, recommender systems have started to deploy models on the edge to alleviate network congestion caused by frequent mobile requests. Several studies have leveraged the proximity of edge-side to real-…
External link:
http://arxiv.org/abs/2406.08804
Author:
Ji, Wei, Li, Li, Lv, Zheqi, Zhang, Wenqiao, Li, Mengze, Wan, Zhen, Lei, Wenqiang, Zimmermann, Roger
In our increasingly interconnected world, where intelligent devices continually amass copious personalized multi-modal data, a pressing need arises to deliver high-quality, personalized device-aware services. However, this endeavor presents a multifaceted…
External link:
http://arxiv.org/abs/2406.01601
Author:
Zheng, Haoyu, Zhang, Wenqiao, Wang, Yaoke, Zhou, Hao, Liu, Jiang, Li, Juncheng, Lv, Zheqi, Tang, Siliang, Zhuang, Yueting
Revolutionary advancements in text-to-image models have unlocked new dimensions for sophisticated content creation, e.g., text-conditioned image editing, allowing us to edit diverse images that convey highly complex visual concepts according to…
External link:
http://arxiv.org/abs/2404.13558
Author:
Zhang, Wenqiao, Lin, Tianwei, Liu, Jiang, Shu, Fangxun, Li, Haoyuan, Zhang, Lei, Wanggui, He, Zhou, Hao, Lv, Zheqi, Jiang, Hao, Li, Juncheng, Tang, Siliang, Zhuang, Yueting
Recent advancements indicate that scaling up Multimodal Large Language Models (MLLMs) effectively enhances performance on downstream multimodal tasks. The prevailing MLLM paradigm, e.g., LLaVA, transforms visual features into text-like tokens…
External link:
http://arxiv.org/abs/2403.13447
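A minimal sketch (assumptions, not the actual LLaVA code) of the paradigm this abstract refers to: visual-encoder features are projected into the LLM's embedding space so they can be consumed as text-like tokens alongside word embeddings. Dimensions and class names are illustrative.

```python
import torch
import torch.nn as nn

VIS_DIM, LLM_DIM, VOCAB = 1024, 4096, 32000  # hypothetical dimensions

class VisualProjector(nn.Module):
    """Projects vision-encoder features into the LLM token-embedding space."""

    def __init__(self):
        super().__init__()
        # Two-layer MLP connector, a common choice in LLaVA-style models.
        self.proj = nn.Sequential(nn.Linear(VIS_DIM, LLM_DIM), nn.GELU(),
                                  nn.Linear(LLM_DIM, LLM_DIM))
        self.word_emb = nn.Embedding(VOCAB, LLM_DIM)

    def forward(self, vis_feats: torch.Tensor, text_ids: torch.Tensor) -> torch.Tensor:
        vis_tokens = self.proj(vis_feats)      # (batch, n_patches, LLM_DIM)
        txt_tokens = self.word_emb(text_ids)   # (batch, n_text, LLM_DIM)
        # The LLM then attends over visual and text tokens as one sequence.
        return torch.cat([vis_tokens, txt_tokens], dim=1)

seq = VisualProjector()(torch.randn(2, 576, VIS_DIM), torch.randint(0, VOCAB, (2, 16)))
print(seq.shape)  # torch.Size([2, 592, 4096])
```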
Due to privacy or patent concerns, a growing number of large models are released without granting access to their training data, making transferring their knowledge inefficient and problematic. In response, Data-Free Knowledge Distillation (DFKD) methods…
External link:
http://arxiv.org/abs/2403.07030
The rapid advancement of Large Language Models (LLMs) has revolutionized various sectors by automating routine tasks, marking a step toward the realization of Artificial General Intelligence (AGI). However, they still struggle to accommodate the diverse…
External link:
http://arxiv.org/abs/2402.12408
Author:
Chen, Zhengyu, Xiao, Teng, Kuang, Kun, Lv, Zheqi, Zhang, Min, Yang, Jinluan, Lu, Chengqiang, Yang, Hongxia, Wu, Fei
Graph Neural Networks (GNNs) show promising results for graph tasks. However, existing GNNs' generalization ability degrades when there are distribution shifts between testing and training graph data. The cardinal impetus underlying the severe…
External link:
http://arxiv.org/abs/2312.12475
Author:
Zhang, Wenqiao, Lv, Zheqi, Zhou, Hao, Liu, Jia-Wei, Li, Juncheng, Li, Mengze, Tang, Siliang, Zhuang, Yueting
Active Domain Adaptation (ADA) aims to maximally boost model adaptation in a new target domain by actively selecting a limited number of target data to annotate. This setting neglects the more practical scenario where training data are collected from…
External link:
http://arxiv.org/abs/2311.12905
Author:
Gao, Minghe, Li, Juncheng, Fei, Hao, Pang, Liang, Ji, Wei, Wang, Guoming, Lv, Zheqi, Zhang, Wenqiao, Tang, Siliang, Zhuang, Yueting
Visual programming, a modular and generalizable paradigm, integrates different modules and Python operators to solve various vision-language tasks. Unlike end-to-end models that need task-specific data, it advances in performing visual processing and…
External link:
http://arxiv.org/abs/2311.12890
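A toy sketch (hypothetical, not the paper's framework) of the visual-programming idea in the last abstract: a task is solved by composing reusable modules, here plain Python callables standing in for vision-language tools, instead of training an end-to-end model. Module names and the hand-written program are purely illustrative.

```python
from typing import Callable, Dict, List

# Stub modules standing in for real detection / cropping / VQA tools.
MODULES: Dict[str, Callable] = {
    "LOC": lambda image, query: [(10, 10, 50, 50)],   # pretend to return boxes for `query`
    "CROP": lambda image, boxes: f"{image}{boxes}",   # pretend to crop the region
    "VQA": lambda region, question: "red",            # pretend to answer about the region
}

def run_program(program: List[dict], image: str, question: str) -> str:
    """Execute a sequence of module calls, threading intermediate results through a state dict."""
    state = {"image": image, "question": question}
    for step in program:
        fn = MODULES[step["module"]]
        args = [state[name] for name in step["inputs"]]
        state[step["output"]] = fn(*args)
    return state[program[-1]["output"]]

# Hand-written here; in visual programming the program would be generated, e.g., by an LLM.
program = [
    {"module": "LOC",  "inputs": ["image", "question"],  "output": "boxes"},
    {"module": "CROP", "inputs": ["image", "boxes"],     "output": "region"},
    {"module": "VQA",  "inputs": ["region", "question"], "output": "answer"},
]
print(run_program(program, image="img.jpg", question="what color is the car?"))  # -> red
```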