Showing 1 - 10 of 30 for the search '"Zhou, Wenmeng"'
Author:
Zhao, Yuze, Huang, Jintao, Hu, Jinghan, Wang, Xingjun, Mao, Yunlin, Zhang, Daoze, Jiang, Zeyinzi, Wu, Zhikai, Ai, Baole, Wang, Ang, Zhou, Wenmeng, Chen, Yingda
Recent developments in Large Language Models (LLMs) and Multi-modal Large Language Models (MLLMs) have leveraged attention-based Transformer architectures and achieved superior performance and generalization capabilities. They have since covered extensive…
External link:
http://arxiv.org/abs/2408.05517
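The entry above concerns LLMs and MLLMs built on attention-based Transformer architectures. As a rough illustration of the core operation those architectures share, here is a minimal NumPy sketch of scaled dot-product attention; the function name, shapes, and toy data are illustrative and not taken from the paper.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core Transformer operation: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # pairwise query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)      # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ V                                # weighted sum of values

# Toy example: 4 tokens, one 8-dimensional attention head
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8)
```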
Author:
Liu, Xiangyan, Lan, Bo, Hu, Zhiyuan, Liu, Yang, Zhang, Zhicheng, Wang, Fei, Shieh, Michael, Zhou, Wenmeng
Large Language Models (LLMs) excel in stand-alone code tasks like HumanEval and MBPP, but struggle with handling entire code repositories. This challenge has prompted research on enhancing LLM-codebase interaction at a repository scale. Current solutions…
External link:
http://arxiv.org/abs/2408.03910
Text-to-image diffusion models have shown the ability to learn a diverse range of concepts. However, they may also generate undesirable outputs, consequently giving rise to significant security concerns. Specifically, issues…
External link:
http://arxiv.org/abs/2408.01014
Recently, advancements in video synthesis have attracted significant attention. Video synthesis models such as AnimateDiff and Stable Video Diffusion have demonstrated the practical applicability of diffusion models in creating dynamic visual content…
External link:
http://arxiv.org/abs/2406.14130
A prevailing belief in the attack and defense community is that higher flatness of adversarial examples enables better cross-model transferability, leading to a growing interest in employing sharpness-aware minimization and its variants. However…
External link:
http://arxiv.org/abs/2311.06423
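The entry above refers to sharpness-aware minimization (SAM) as a tool for producing flatter adversarial examples. The sketch below shows only the generic two-step SAM update (ascend within a small L2 ball, then descend with the gradient taken at that perturbed point) on a toy quadratic loss; it is not the paper's attack or analysis, and `sam_step`, `rho`, and the learning rate are illustrative assumptions.

```python
import numpy as np

def sam_step(x, loss_grad, lr=0.1, rho=0.05):
    """One generic sharpness-aware minimization (SAM) step on a vector x:
    move to the locally worst point within an L2 ball of radius rho,
    then descend using the gradient evaluated there."""
    g = loss_grad(x)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)   # ascent toward the "sharp" neighbour
    g_sharp = loss_grad(x + eps)                  # gradient at the perturbed point
    return x - lr * g_sharp                       # descent with the sharp gradient

# Toy quadratic loss L(x) = 0.5 * ||x||^2, so grad L(x) = x
x = np.array([3.0, -2.0])
for _ in range(50):
    x = sam_step(x, loss_grad=lambda v: v)
print(x)  # settles near the flat minimum at the origin
```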
Author:
Li, Chenliang, Chen, Hehong, Yan, Ming, Shen, Weizhou, Xu, Haiyang, Wu, Zhikai, Zhang, Zhicheng, Zhou, Wenmeng, Chen, Yingda, Cheng, Chen, Shi, Hongzhu, Zhang, Ji, Huang, Fei, Zhou, Jingren
Large language models (LLMs) have recently demonstrated remarkable capabilities to comprehend human intentions, engage in reasoning, and design planning-like behavior. To further unleash the power of LLMs to accomplish complex tasks, there is a growing…
External link:
http://arxiv.org/abs/2309.00986
Author:
Liu, Yang, Yu, Cheng, Shang, Lei, He, Yongyi, Wu, Ziheng, Wang, Xingjun, Xu, Chao, Xie, Haoyu, Wang, Weida, Zhao, Yuze, Zhu, Lin, Cheng, Chen, Chen, Weitao, Yao, Yuan, Zhou, Wenmeng, Xu, Jiaqi, Wang, Qiang, Chen, Yingda, Xie, Xuansong, Sun, Baigui
Recent advancements in personalized image generation have unveiled the intriguing capability of pre-trained text-to-image models to learn identity information from a collection of portrait images. However, existing solutions are vulnerable in producing…
External link:
http://arxiv.org/abs/2308.14256
Split learning enables collaborative deep learning model training while preserving data privacy and model security by avoiding direct sharing of raw data and model details (i.e., the server and clients only hold partial sub-networks and exchange intermediate…
External link:
http://arxiv.org/abs/2307.07916
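The entry above describes split learning, in which the server and clients hold complementary sub-networks and exchange only intermediate activations and their gradients. Below is a minimal single-step PyTorch sketch of that exchange, assuming an arbitrary cut point and toy layer sizes; it abstracts the actual network transport into a `detach()`/`requires_grad_()` hand-off and is not the protocol studied in the paper.

```python
import torch
import torch.nn as nn

# Client holds the lower sub-network, server holds the upper one;
# only activations at the cut layer (and their gradients) cross the boundary.
client_net = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
server_net = nn.Sequential(nn.Linear(32, 10))
client_opt = torch.optim.SGD(client_net.parameters(), lr=0.1)
server_opt = torch.optim.SGD(server_net.parameters(), lr=0.1)

x = torch.randn(8, 16)            # raw data stays on the client
y = torch.randint(0, 10, (8,))    # labels (assumed server-side here)

# Client forward pass up to the cut layer; "smashed data" is sent to the server.
smashed = client_net(x)
smashed_sent = smashed.detach().requires_grad_(True)   # simulates the network hop

# Server finishes the forward/backward pass and updates its sub-network.
loss = nn.functional.cross_entropy(server_net(smashed_sent), y)
server_opt.zero_grad()
loss.backward()
server_opt.step()

# Gradient at the cut layer is returned; the client completes backpropagation.
client_opt.zero_grad()
smashed.backward(smashed_sent.grad)
client_opt.step()
print(float(loss))
```

Because only the cut-layer tensor and its gradient cross the boundary, neither the raw data nor a full copy of the model is ever shared in this sketch.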
Recent works have brought attention to the vulnerability of Federated Learning (FL) systems to gradient leakage attacks. Such attacks exploit clients' uploaded gradients to reconstruct their sensitive data, thereby compromising the privacy protection…
External link:
http://arxiv.org/abs/2212.02042
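The entry above addresses gradient leakage in federated learning, where a client's uploaded gradients are inverted to reconstruct its private training data. Below is a hedged DLG-style sketch of such an attack (optimizing dummy data until its gradients match the observed ones); the tiny linear model, optimizer, and iteration count are illustrative assumptions, and this is not the defense proposed in the paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(8, 4)              # toy shared model
x_true = torch.randn(1, 8)           # the client's private sample
y_true = torch.tensor([2])

# Gradients the client would upload in federated learning.
loss = nn.functional.cross_entropy(model(x_true), y_true)
true_grads = torch.autograd.grad(loss, model.parameters())

# Attacker: optimize dummy data and label logits so their gradients match the upload.
x_dummy = torch.randn(1, 8, requires_grad=True)
y_logits = torch.randn(1, 4, requires_grad=True)
opt = torch.optim.Adam([x_dummy, y_logits], lr=0.1)

for _ in range(300):
    opt.zero_grad()
    dummy_loss = nn.functional.cross_entropy(
        model(x_dummy), nn.functional.softmax(y_logits, dim=-1)
    )
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(), create_graph=True)
    grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    opt.step()

# Distance shrinks as the dummy sample converges toward the private one.
print(torch.dist(x_dummy.detach(), x_true))
```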
We develop an all-in-one computer vision toolbox named EasyCV to facilitate the use of various SOTA computer vision methods. Recently, we added YOLOX-PAI, an improved version of YOLOX, to EasyCV. We conduct ablation studies to investigate the influence…
External link:
http://arxiv.org/abs/2208.13040