Showing 1 - 10 of 25
for search: '"Han, Tingxu"'
Author:
Han, Tingxu, Sun, Weisong, Hu, Yanrong, Fang, Chunrong, Zhang, Yonglong, Ma, Shiqing, Zheng, Tao, Chen, Zhenyu, Wang, Zhenting
Text-to-image diffusion models have shown an impressive ability to generate high-quality images from input textual descriptions. However, concerns have been raised about the potential for these models to create content that infringes on copyrights or…
External link:
http://arxiv.org/abs/2412.00580
Author:
Chen, Yuchen, Sun, Weisong, Fang, Chunrong, Chen, Zhenpeng, Ge, Yifei, Han, Tingxu, Zhang, Quanjun, Liu, Yang, Chen, Zhenyu, Xu, Baowen
Language models for code (CodeLMs) have emerged as powerful tools for code-related tasks, outperforming traditional methods and standard machine learning approaches. However, these models are susceptible to security vulnerabilities, drawing increasing…
External link:
http://arxiv.org/abs/2410.15631
Author:
Han, Tingxu, Sun, Weisong, Ding, Ziqi, Fang, Chunrong, Qian, Hanwei, Li, Jiaxun, Chen, Zhenyu, Zhang, Xiangyu
Self-supervised learning (SSL) is increasingly attractive for pre-training encoders without requiring labeled data. Downstream tasks built on top of those pre-trained encoders can achieve nearly state-of-the-art performance. The pre-trained encoders…
External link:
http://arxiv.org/abs/2406.03508
Author:
Zhang, Hanrong, Wang, Zhenting, Han, Tingxu, Jin, Mingyu, Zhan, Chenlu, Du, Mengnan, Wang, Hongwei, Ma, Shiqing
Self-supervised learning models are vulnerable to backdoor attacks. Existing backdoor attacks that are effective in self-supervised learning often involve noticeable triggers, like colored patches, which are vulnerable to human inspection. In this paper…
External link:
http://arxiv.org/abs/2405.14672
Author:
Han, Tingxu, Huang, Shenghan, Ding, Ziqi, Sun, Weisong, Feng, Yebo, Fang, Chunrong, Li, Jun, Qian, Hanwei, Wu, Cong, Zhang, Quanjun, Liu, Yang, Chen, Zhenyu
In this paper, we study a defense against poisoned encoders in SSL called distillation, which is a defense originally used in supervised learning. Distillation aims to distill knowledge from a given model (a.k.a. the teacher net) and transfer it to an…
External link:
http://arxiv.org/abs/2403.03846
Author:
Sun, Weisong, Fang, Chunrong, Chen, Yuchen, Zhang, Quanjun, Tao, Guanhong, Han, Tingxu, Ge, Yifei, You, Yudu, Luo, Bin
(Source) Code summarization aims to automatically generate summaries/comments for a given code snippet in the form of natural language. Such summaries play a key role in helping developers understand and maintain source code. Existing code summarization…
External link:
http://arxiv.org/abs/2206.07245
Code search is a widely used technique by developers during software development. It provides semantically similar implementations from a large code corpus to developers based on their queries. Existing techniques leverage deep learning models to con…
External link:
http://arxiv.org/abs/2202.08029
Academic article
This result cannot be displayed to users who are not logged in.
To view the result, you must log in.
This is the replication package for the paper "Code search based on Context-aware Code Translation", accepted at ICSE 2022. The full package contains all the details needed to reproduce the results claimed in our paper. In our package, the README…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::71744defdadbcb2c85d9c2aa5638a2a1
The artifact of the paper 'Ruler: Discriminative and Iterative Adversarial Training for Deep Neural Network Fairness', published at ESEC/FSE 2022.
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::3538d0816e3c389e7e16446ece633921