Showing 1 - 10 of 557 for search: '"ZHANG Jinhong"'
Published in:
Jisuanji kexue yu tansuo, Vol 17, Iss 12, Pp 2880-2895 (2023)
Existing clustering algorithms struggle to accurately identify arbitrarily shaped clusters, are sensitive to density variations within clusters and to outliers, and make threshold selection difficult. An adaptive threshold-constrained density cluster backbone
External link:
https://doaj.org/article/47fc1299f88f4d7081c25dad74b9f02b
Investigation into an outbreak of suspected shellfish poisoning caused by consuming Bullacta exarata
Published in:
Zhongguo shipin weisheng zazhi, Vol 35, Iss 8, Pp 1231-1234 (2023)
Objective: This study aimed to assess control measures regarding the epidemiological characteristics of food-borne disease outbreaks to guide future prevention measures and treatment methods. Methods: Descriptive epidemiological methods were used to retro
External link:
https://doaj.org/article/c7f092f52b9a40a99366d1667277cd31
Published in:
Applied Mathematics and Nonlinear Sciences, Vol 9, Iss 1 (2024)
This paper analyzes the correlation between Civics teaching and student management in colleges and universities, establishing a multiple linear regression model to study the influence of Civics teaching on student management. Based on the questionn
External link:
https://doaj.org/article/2a1244f042ed4820ae67152f5bf26fc4
Recently, Graph Neural Networks (GNNs), including Homogeneous Graph Neural Networks (HomoGNNs) and Heterogeneous Graph Neural Networks (HeteGNNs), have made remarkable progress in many physical scenarios, especially in communication applications. Des
External link:
http://arxiv.org/abs/2310.09800
Extensive studies have demonstrated that deep neural networks (DNNs) are vulnerable to adversarial attacks. Despite the significant progress in the attack success rate that has been made recently, the adversarial noise generated by most of the existi
External link:
http://arxiv.org/abs/2310.09795
Previous studies have revealed that artificial intelligence (AI) systems are vulnerable to adversarial attacks. Among them, model extraction attacks fool the target model by generating adversarial examples on a substitute model. The core of such an a
External link:
http://arxiv.org/abs/2310.09792
Published in:
In Pattern Recognition February 2025 158
Author:
Guo, Wenbin; Zhang, Jinhong; Yue, Huijun; Lyu, Kexing; Chen, Siyu; Huang, Bixue; Wang, Yiming; Lei, Wenbin (leiwb@mail.sysu.edu.cn)
Published in:
BMC Gastroenterology. 10/3/2024, Vol. 24 Issue 1, p1-7. 7p.
Author:
Zhang, Qiue, Xiong, Yanxuan, Zhang, Jinhong, Liu, Boya, Chen, Tianyi, Liu, Shufeng, Dang, Chenyuan, Xu, Wei D., Ahmad, Hafiz Adeel, Liu, Tang
Published in:
In Science of the Total Environment 10 October 2024 946
Author:
Hou, Wei, Fang, Demin, Yin, Shugang, Deng, Yajing, Zhang, Jinhong, Wang, Siting, Liu, Liguo, Kong, Jingbo, Huang, Mei, Zhang, Xiujun, Dai, Bin, Feng, Xin
Published in:
In Annals of Vascular Surgery September 2024 106:152-161