Showing 1 - 10
of 13
for the search: '"Tuanhui Li"'
Author:
Ruirui Yun, Ruiming Xu, Changsong Shi, Beibei Zhang, Tuanhui Li, Lei He, Tian Sheng, Zheng Chen
Published in:
Nano Research.
Published in:
Chemical Communications. 58:6602-6605
A high-loading atomically dispersed Co site catalyst has been constructed by a modified N-coordination-assisted strategy.
Published in:
ChemistrySelect. 7
Published in:
Inorganic Chemistry. 61(40)
The exploration of efficient and low-consumption catalysts for carbon dioxide (CO2)…
Author:
Fagong Xu, Feiyang Zhan, Beibei Zhang, Tuanhui Li, Lei He, Liting Du, Shizhou Luo, Baishu Zheng, Ruirui Yun
Published in:
European Journal of Inorganic Chemistry. 2022
Published in:
Computer Vision – ECCV 2020 ISBN: 9783030585419
ECCV (22)
This work studies the sparse adversarial attack, which aims to generate adversarial perturbations at partial positions of a benign image, such that the perturbed image is incorrectly predicted by a deep neural network (DNN) model. The sparse ad…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::7296000d3bbedefd90149cd2e7ca3b17
https://doi.org/10.1007/978-3-030-58542-6_3
A novel model to predict phase equilibrium state of hydrates from the relationship of gas solubility
Published in:
Frontiers in Energy Research, Vol 12 (2024)
The study of hydrate phase equilibrium is crucial for ensuring the safety of natural gas pipeline transportation and the process of hydrate recovery. While scientists typically focus on the chemical potential of hydrates, the role of gas solubility i…
External link:
https://doaj.org/article/c25935591fe7452fa9c6a1fd3f28869f
Published in:
CVPR
This work studies model compression for deep convolutional neural networks (CNNs) via filter pruning. The workflow of traditional pruning consists of three sequential stages: pre-training the original model, selecting the pre-trained filters vi…
Published in:
Lecture Notes in Computer Science ISBN: 9783030305079
ICANN (3)
Neural networks can be fooled by adversarial examples. Recently, many methods have been proposed to generate adversarial examples, but these works mainly concentrate on pixel-wise information, which limits the transferability of adversarial examp…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::7d2d792e928edc3d3d27cff454b902fa
https://doi.org/10.1007/978-3-030-30508-6_57
Published in:
Lecture Notes in Computer Science ISBN: 9783030305079
ICANN (3)
Deep neural networks (DNNs) have been widely applied in many areas. However, they are quite vulnerable to well-designed perturbations. Most recent methods of generating adversarial examples fail to limit the perturbations while keeping good transfera…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::bb144750151089af95b34112a5da7827
https://doi.org/10.1007/978-3-030-30508-6_52