Showing 1 - 10 of 14 for search: '"Pan, Shirui"'
Author:
Zhang, Zhaoxi, Zhang, Xiaomei, Zhang, Yanjun, Zhang, Leo Yu, Chen, Chao, Hu, Shengshan, Gill, Asif, Pan, Shirui
The Large Language Model (LLM) watermark is a newly emerging technique that shows promise in addressing concerns surrounding LLM copyright, monitoring AI-generated text, and preventing its misuse. The LLM watermark scheme commonly includes generating…
External link:
http://arxiv.org/abs/2405.19677
Model extraction attacks (MEAs) enable an attacker to replicate the functionality of a victim deep neural network (DNN) model by only querying its API service remotely, posing a severe threat to the security and integrity of pay-per-query DNN-based…
External link:
http://arxiv.org/abs/2403.07673
Author:
Liu, Xin, Zhang, Yuxiang, Wu, Meng, Yan, Mingyu, He, Kun, Yan, Wei, Pan, Shirui, Ye, Xiaochun, Fan, Dongrui
Edge perturbation is a basic method to modify graph structures. It can be categorized into two veins based on their effects on the performance of graph neural networks (GNNs), i.e., graph data augmentation and attack. Surprisingly, both veins of edge…
External link:
http://arxiv.org/abs/2403.07943
The deployment of Graph Neural Networks (GNNs) within Machine Learning as a Service (MLaaS) has opened up new attack surfaces and an escalation in security concerns regarding model-centric attacks. These attacks can directly manipulate the GNN model…
External link:
http://arxiv.org/abs/2312.07870
The emergence of Graph Neural Networks (GNNs) in graph data analysis and their deployment on Machine Learning as a Service platforms have raised critical concerns about data misuse during model training. This situation is further exacerbated due to…
External link:
http://arxiv.org/abs/2312.07861
Author:
Wei, Jiaheng, Zhang, Yanjun, Zhang, Leo Yu, Chen, Chao, Pan, Shirui, Ong, Kok-Leong, Zhang, Jun, Xiang, Yang
Federated Learning (FL) enables distributed participants (e.g., mobile devices) to train a global model without sharing data directly to a central server. Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which…
External link:
http://arxiv.org/abs/2309.07415
The scalability problem has been one of the most significant barriers limiting the adoption of blockchains. Blockchain sharding is a promising approach to this problem. However, the sharding mechanism introduces a significant number of cross-shard…
External link:
http://arxiv.org/abs/2212.11584
Graph neural networks (GNNs) offer promising learning methods for graph-related tasks. However, GNNs are at risk of adversarial attacks. Two primary limitations of the current evasion attack methods are highlighted: (1) The current GradArgmax ignores…
External link:
http://arxiv.org/abs/2202.12993
Graph Neural Networks (GNNs) are widely adopted to analyse non-Euclidean data, such as chemical networks, brain networks, and social networks, modelling complex relationships and interdependency between objects. Recently, Membership Inference Attack…
External link:
http://arxiv.org/abs/2110.08760
Machine learning models are shown to face a severe threat from Model Extraction Attacks, where a well-trained private model owned by a service provider can be stolen by an attacker pretending to be a client. Unfortunately, prior works focus on the model…
External link:
http://arxiv.org/abs/2010.12751