Showing 1 - 10 of 49
for search: '"Dai, Jiazhu"'
Author:
Dai, Jiazhu, Sun, Haoyu
Graph Convolutional Networks (GCNs) have shown excellent performance on various graph-related tasks such as node classification and graph classification. However, recent studies have shown that GCNs are vulnerable to a novel threat…
External link:
http://arxiv.org/abs/2404.12704
Author:
Dai, Jiazhu, Sun, Haoyu
Graph Neural Networks (GNNs) are a class of deep learning models capable of processing graph-structured data, and they have demonstrated significant performance in a variety of real-world applications. Recent studies have found that GNN models are vulnerable…
External link:
http://arxiv.org/abs/2401.02663
Author:
Dai, Jiazhu, Xiong, Zhipeng
Graph convolutional networks (GCNs) have been very effective in addressing various graph-structured tasks. However, recent research has shown that GCNs are vulnerable to a new type of threat called a backdoor attack, where the adversary…
External link:
http://arxiv.org/abs/2302.14353
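Graph backdoor attacks of the kind this abstract describes typically work by attaching a fixed trigger subgraph to a fraction of training graphs. The following is a minimal, generic sketch of that poisoning step (not the paper's actual attack); the helper `attach_trigger` and the adjacency-matrix representation are illustrative assumptions.

```python
import numpy as np

def attach_trigger(adj, trigger_adj, rng):
    """Append a fixed trigger subgraph to a graph given as an adjacency
    matrix, wiring it to one randomly chosen existing node so every
    poisoned graph carries the same recognizable sub-structure."""
    n, k = adj.shape[0], trigger_adj.shape[0]
    out = np.zeros((n + k, n + k), dtype=adj.dtype)
    out[:n, :n] = adj              # original graph, unchanged
    out[n:, n:] = trigger_adj      # trigger subgraph
    anchor = rng.integers(n)       # random attachment point
    out[anchor, n] = out[n, anchor] = 1
    return out

rng = np.random.default_rng(0)
graph = np.array([[0, 1], [1, 0]])                       # 2-node graph
trigger = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])    # 3-node triangle
poisoned = attach_trigger(graph, trigger, rng)
```

At training time the attacker would also relabel the poisoned graphs to a target class, so the model learns to associate the trigger subgraph with that class.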
Author:
Dai, Jiazhu, Xiong, Siwei
Capsule networks (CapsNets) are new neural networks that classify images based on the spatial relationships of features. By analyzing the poses of features and their relative positions, they are more capable of recognizing images after affine transformations…
External link:
http://arxiv.org/abs/2202.13755
Published in:
In Neurocomputing, Volume 600, 1 October 2024
Graph-structured data exist in numerous real-life applications. As a state-of-the-art graph neural network, the graph convolutional network (GCN) plays an important role in processing graph-structured data. However, a recent study reported that GCNs…
External link:
http://arxiv.org/abs/2011.14365
Author:
Dai, Jiazhu, Xiong, Siwei
A capsule network is a type of neural network that uses the spatial relationships between features to classify images. By capturing the poses and relative positions of features, its ability to recognize affine transformations is improved, and it surpasses…
External link:
http://arxiv.org/abs/2010.07230
Author:
Chen, Chuanshuai, Dai, Jiazhu
It has been proven that deep neural networks face a new threat called backdoor attacks, where the adversary can inject backdoors into a neural network model by poisoning the training dataset. When an input contains some special pattern…
External link:
http://arxiv.org/abs/2007.12070
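The poisoning mechanism this abstract refers to is usually demonstrated by stamping a small trigger pattern onto a fraction of training images and relabeling them to the attacker's target class. A minimal generic sketch follows (a BadNets-style illustration, not this paper's specific method); `poison`, the patch size, and the poisoning fraction are all illustrative choices.

```python
import numpy as np

def poison(images, labels, target_label, frac, rng):
    """Stamp a 3x3 white trigger patch in the bottom-right corner of a
    random fraction of the images and relabel them to `target_label`."""
    images, labels = images.copy(), labels.copy()
    n = len(images)
    idx = rng.choice(n, size=int(frac * n), replace=False)
    images[idx, -3:, -3:] = 1.0      # the special trigger pattern
    labels[idx] = target_label       # attacker-chosen class
    return images, labels, idx

rng = np.random.default_rng(42)
x = np.zeros((100, 28, 28))          # toy grayscale dataset in [0, 1]
y = np.zeros(100, dtype=int)
px, py, idx = poison(x, y, target_label=7, frac=0.1, rng=rng)
```

A model trained on `(px, py)` behaves normally on clean inputs but predicts class 7 whenever the patch is present, which is exactly the backdoor behavior the snippet describes.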
Author:
Dai, Jiazhu, Shu, Le
Convolutional neural networks (CNNs) have become one of the most popular machine learning tools and are applied in various tasks. However, CNN models are vulnerable to universal perturbations, which are usually human-imperceptible but can cause…
External link:
http://arxiv.org/abs/1911.01172
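A universal perturbation is a single fixed noise pattern that fools the model on most inputs at once. The sketch below shows only the application step, under the common assumption that imperceptibility is enforced by an L-infinity bound `eps`; the function name and bound are illustrative, not from the paper.

```python
import numpy as np

def apply_universal(images, delta, eps):
    """Add one fixed (universal) perturbation to every image, after
    projecting it into an L-infinity ball of radius `eps` so it stays
    human-imperceptible; results are clipped to the valid pixel range."""
    delta = np.clip(delta, -eps, eps)        # enforce ||delta||_inf <= eps
    return np.clip(images + delta, 0.0, 1.0)

rng = np.random.default_rng(1)
imgs = rng.random((5, 28, 28))               # toy batch in [0, 1]
delta = rng.normal(0, 0.1, size=(28, 28))    # one delta shared by all inputs
adv = apply_universal(imgs, delta, eps=0.05)
```

Note that `delta` is independent of the input, which is what distinguishes universal perturbations from per-image adversarial examples.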
Author:
Dai, Jiazhu, Chen, Chuanshuai
With the widespread use of deep learning systems in many applications, adversaries have a strong incentive to explore vulnerabilities of deep neural networks and manipulate them. Backdoor attacks against deep neural networks have been reported to be…
External link:
http://arxiv.org/abs/1905.12457