Showing 1 - 10 of 810 for search: '"Guo Wenbo"'
Existing works have established multiple benchmarks to highlight the security risks associated with Code GenAI. These risks are primarily reflected in two areas: a model's potential to generate insecure code (insecure coding) and its utility in cyberattacks…
External link:
http://arxiv.org/abs/2410.11096
We propose BlockFound, a customized foundation model for anomalous blockchain transaction detection. Unlike existing methods that rely on rule-based systems or directly apply off-the-shelf large language models, BlockFound introduces a series of customized…
External link:
http://arxiv.org/abs/2410.04039
Author:
Zheng, Xu, Shirani, Farhad, Chen, Zhuomin, Lin, Chaohao, Cheng, Wei, Guo, Wenbo, Luo, Dongsheng
Recent research has developed a number of eXplainable AI (XAI) techniques. Although these techniques can extract meaningful insights from deep learning models, how to properly evaluate them remains an open problem. The most widely used approach is to perturb…
External link:
http://arxiv.org/abs/2410.02970
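The snippet above refers to the common perturbation-based way of judging an explanation's faithfulness: mask the features an XAI method ranks as most important and check how much the model's prediction drops. The following is a minimal sketch of that idea in Python with a toy NumPy model; the function name, masking strategy, and baseline value are illustrative assumptions, not the paper's evaluation protocol.

# Perturbation-based faithfulness check (illustrative sketch).
import numpy as np

def fidelity_drop(model_fn, x, importance, k, baseline=0.0):
    # model_fn: maps a feature vector to a scalar prediction.
    # importance: per-feature attribution scores from an XAI method.
    # k: number of top-ranked features to perturb.
    top_k = np.argsort(importance)[::-1][:k]  # indices of the k most important features
    x_pert = x.copy()
    x_pert[top_k] = baseline                  # replace them with a baseline value
    return model_fn(x) - model_fn(x_pert)     # larger drop = more faithful explanation

# Usage: a faithful attribution should cause a larger drop than random scores.
rng = np.random.default_rng(0)
x, w = rng.normal(size=20), rng.normal(size=20)
model_fn = lambda v: 1.0 / (1.0 + np.exp(-(w @ v)))  # toy logistic model
print(fidelity_drop(model_fn, x, w * x, k=5),        # gradient*input-style attribution
      fidelity_drop(model_fn, x, rng.normal(size=20), k=5))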
Author:
Guo, Wenbo, Liu, Chengwei, Wang, Limin, Wu, Jiahui, Xu, Zhengzi, Huang, Cheng, Fang, Yong, Liu, Yang
The rise of malicious packages in public registries poses a significant threat to software supply chain (SSC) security. Although academia and industry employ methods like software composition analysis (SCA) to address this issue, existing approaches…
External link:
http://arxiv.org/abs/2409.15049
Blockchain adoption has surged with the rise of Decentralized Finance (DeFi) applications. However, the significant value of digital assets managed by DeFi protocols makes them prime targets for attacks. Current smart contract vulnerability detection…
External link:
http://arxiv.org/abs/2407.06348
Published in:
Frontiers in Earth Science, Vol 10 (2022)
An ideal metallogenic and prospecting model provides important guidance for aluminum ore development and geophysical exploration. Previous research in this field focused only on ore body evaluations and metallogenic belts. Selection of reasonable…
External link:
https://doaj.org/article/ecc7081d30f24f27934d65c73ab72081
Modern large language model (LLM) developers typically perform safety alignment to prevent an LLM from generating unethical or harmful content. Recent studies have discovered that the safety alignment of LLMs can be bypassed by jailbreaking prompts…
External link:
http://arxiv.org/abs/2406.08725
Recent studies have developed jailbreaking attacks, which construct jailbreaking prompts to fool LLMs into responding to harmful questions. Early-stage jailbreaking attacks require access to model internals or significant human effort. More advanced attacks…
External link:
http://arxiv.org/abs/2406.08705
Along with the remarkable successes of large language models (LLMs), recent research has also started to explore their security threats, including jailbreaking attacks. Attackers carefully craft jailbreaking prompts such that a target LLM will respond…
External link:
http://arxiv.org/abs/2405.20653
Author:
Nie, Yuzhou, Wang, Yanting, Jia, Jinyuan, De Lucia, Michael J., Bastian, Nathaniel D., Guo, Wenbo, Song, Dawn
One key challenge in backdoor attacks against large foundation models is resource limits: backdoor attacks usually require retraining the target model, which is impractical for very large foundation models. Existing backdoor attacks are mainly designed…
External link:
http://arxiv.org/abs/2405.16783
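For context on the snippet above: the classic retraining-based backdoor it contrasts against works by poisoning training data, stamping a trigger pattern on a small fraction of inputs and relabeling them, then retraining the victim model. A minimal sketch of that poisoning step follows; the dataset shape, trigger patch, and target label are illustrative assumptions, not the paper's method (which is motivated by avoiding exactly this retraining cost).

# BadNets-style data poisoning (illustrative sketch, not the paper's method).
import numpy as np

def poison(images, labels, rate=0.05, target_label=0, seed=0):
    # Stamp a small trigger patch on a random fraction of training images and
    # relabel them, so a model retrained on this data maps trigger -> target_label.
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx, -3:, -3:] = 1.0   # 3x3 white patch in the bottom-right corner
    labels[idx] = target_label    # attacker-chosen label
    return images, labels

# Usage on a toy dataset; the retraining step that follows is exactly the
# cost the abstract calls impractical for very large foundation models.
x = np.zeros((100, 28, 28))
y = np.arange(100) % 10
x_poisoned, y_poisoned = poison(x, y)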