Showing 1 - 10 of 178 for search: '"Jia, Jinyuan."'
Graph Neural Networks (GNNs) have shown promising results in modeling graphs across various tasks. Training GNNs, especially on specialized tasks such as bioinformatics, demands extensive expert annotations, which are expensive and usually contain…
External link:
http://arxiv.org/abs/2411.11197
Deep regression models are used in a wide variety of safety-critical applications, but are vulnerable to backdoor attacks. Although many defenses have been proposed for classification models, they are ineffective as they do not consider the uniquenes…
External link:
http://arxiv.org/abs/2411.04811
Automatically extracting personal information--such as name, phone number, and email address--from publicly available profiles at a large scale is a stepping stone to many other security attacks, including spear phishing. Traditional methods--such as regul…
External link:
http://arxiv.org/abs/2408.07291
Eye gaze contains rich information about human attention and cognitive processes. This capability makes the underlying technology, known as gaze tracking, a critical enabler for many ubiquitous applications and has triggered the development of easy-t…
External link:
http://arxiv.org/abs/2408.00950
Federated graph learning (FedGL) is an emerging federated learning (FL) framework that extends FL to learn graph data from diverse sources. FL for non-graph data has been shown to be vulnerable to backdoor attacks, which inject a shared backdoor trigger i…
External link:
http://arxiv.org/abs/2407.08935
Generative AI raises many societal concerns, such as boosting disinformation and propaganda campaigns. Watermarking AI-generated content is a key technology to address these concerns and has been widely deployed in industry. However, watermarking is v…
External link:
http://arxiv.org/abs/2407.04086
Explainable Graph Neural Networks (GNNs) have emerged recently to foster trust in GNNs. Existing GNN explainers are developed from various perspectives to enhance explanation performance. We take the first step to study GNN explainers unde…
External link:
http://arxiv.org/abs/2406.03193
The robustness of convolutional neural networks (CNNs) is vital to modern AI-driven systems. It can be quantified via formal verification, which provides a certified lower bound within which no perturbation alters the original input's classific…
External link:
http://arxiv.org/abs/2406.00699
In Federated Learning (FL), a set of clients collaboratively train a machine learning model (called the global model) without sharing their local training data. The local training data of clients is typically non-i.i.d. and heterogeneous, resulting in va…
External link:
http://arxiv.org/abs/2405.20975
Author:
Nie, Yuzhou; Wang, Yanting; Jia, Jinyuan; De Lucia, Michael J.; Bastian, Nathaniel D.; Guo, Wenbo; Song, Dawn.
One key challenge for backdoor attacks against large foundation models is resource limits. Backdoor attacks usually require retraining the target model, which is impractical for very large foundation models. Existing backdoor attacks are mainly de…
External link:
http://arxiv.org/abs/2405.16783