Showing 1 - 10 of 115 for the search: '"Shi, Shuhao"'
The presence of a large number of bots on social media has adverse effects. Graph neural networks (GNNs) can effectively leverage the social relationships between users and achieve excellent results in detecting bots. Recently, more and more GNN-ba…
External link:
http://arxiv.org/abs/2307.01968
The presence of a large number of bots on social media leads to adverse effects. Although the random forest algorithm is widely used in bot detection and can significantly enhance the performance of weak classifiers, it cannot utilize the interaction bet…
External link:
http://arxiv.org/abs/2304.08239
The presence of a large number of bots in Online Social Networks (OSNs) leads to undesirable social effects. Graph neural networks (GNNs) are effective in detecting bots because they exploit user interactions. However, class-imbalance issues can affect bo…
External link:
http://arxiv.org/abs/2302.06900
Author:
Shi, Shuhao, Qiao, Kai, Chen, Jian, Yang, Shuai, Yang, Jie, Song, Baojie, Wang, Linyuan, Yan, Bin
The development of social media user stance detection and bot detection methods relies heavily on large-scale, high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships,…
External link:
http://arxiv.org/abs/2301.01123
Select and Calibrate the Low-confidence: Dual-Channel Consistency based Graph Convolutional Networks
Graph Convolutional Networks (GCNs) have achieved excellent results in node classification tasks, but their performance at low label rates is still unsatisfactory. Previous studies in Semi-Supervised Learning (SSL) for graphs have focused on…
External link:
http://arxiv.org/abs/2205.03753
We present Adaptive Multi-layer Contrastive Graph Neural Networks (AMC-GNN), a self-supervised learning framework for Graph Neural Networks that learns feature representations of sample data without data labels. AMC-GNN generates two graph views by…
External link:
http://arxiv.org/abs/2109.14159
Improving the Transferability of Adversarial Examples with New Iteration Framework and Input Dropout
Deep neural networks (DNNs) are vulnerable to adversarial examples, and black-box attacks are the most threatening kind. At present, black-box attack methods mainly adopt gradient-based iterative attacks, which usually limit the relat…
External link:
http://arxiv.org/abs/2106.01617
Author:
Yang, Mian, Chen, Kaihua, Guo, Shenghui, Hou, Ming, Gao, Jiyun, Zhou, Junwen, Shi, Shuhao, Yang, Li
Published in:
In Ceramics International 1 April 2024 50(7) Part A:10881-10888
Author:
Shi, Shuhao, Du, Qian, Hou, Ming, Ye, Xiaolei, Yang, Li, Guo, Shenghui, Yi, Jianhong, Ehsan, Ullah, Zeng, Hongbo
Published in:
In Journal of Environmental Sciences April 2024 138:112-120
One-step synthesis of color-tunable carbon dots-based organic long persistent luminescence materials
Published in:
In Chemical Engineering Journal 1 January 2024 479