Showing 1 - 10 of 314 results for search: '"Chen, Zitao"'
Author:
Chen, Zitao, Pattabiraman, Karthik
The rise of deep learning (DL) has led to a surging demand for training data, which incentivizes the creators of DL models to trawl through the Internet for training materials. Meanwhile, users often have limited control over whether their data (e.g. …
External link:
http://arxiv.org/abs/2409.06280
Author:
Chen, Zitao, Pattabiraman, Karthik
Modern machine learning (ML) ecosystems offer a surging number of ML frameworks and code repositories that can greatly facilitate the development of ML models. Today, even ordinary data holders who are not ML experts can apply off-the-shelf codebase …
External link:
http://arxiv.org/abs/2407.01919
Author:
Li, Xiuzhen, Qin, Biao, Wang, Yaxian, Xi, Yue, Huang, Zhiheng, Zhao, Mengze, Peng, Yalin, Chen, Zitao, Pan, Zitian, Zhu, Jundong, Cui, Chenyang, Yang, Rong, Yang, Wei, Meng, Sheng, Shi, Dongxia, Bai, Xuedong, Liu, Can, Li, Na, Tang, Jianshi, Liu, Kaihui, Du, Luojun, Zhang, Guangyu
Ferroelectric materials with switchable electric polarization hold great promise for a plethora of emergent applications, such as post-Moore's law nanoelectronics, beyond-Boltzmann transistors, non-volatile memories, and above-bandgap photovoltaic de…
External link:
http://arxiv.org/abs/2401.16150
Author:
Chen, Zitao, Pattabiraman, Karthik
Machine learning (ML) models are vulnerable to membership inference attacks (MIAs), which determine whether a given input is used for training the target model. While there have been many efforts to mitigate MIAs, they often suffer from limited priva…
External link:
http://arxiv.org/abs/2307.01610
Author:
Yang, Xi, Chen, Zitao, Zhou, Dongqing, Xiong, Xiaoqiang, Jing, Xiaodong, Zhao, Tongyun, Gong, Huayang, Shen, Baogen
Published in:
In Ceramics International 15 September 2024 50(18) Part A:32465-32476
Published in:
In Knowledge-Based Systems 4 November 2024 303
Published in:
In Neural Networks November 2024 179
Adversarial patch attacks create adversarial examples by injecting arbitrary distortions within a bounded region of the input to fool deep neural networks (DNNs). These attacks are robust (i.e., physically-realizable) and universally malicious, and h…
External link:
http://arxiv.org/abs/2108.05075
Author:
Jing, Xiaodong, Chen, Zitao, Zhao, Qianqian, Li, Zuoguang, Xiong, Xiaoqiang, Yang, Xi, Wang, Qun, Huang, Hai, Jiang, Hualiang, Zhao, Tongyun, Gong, Huayang
Published in:
In Materials Today Chemistry July 2024 39
Published in:
In Construction and Building Materials 28 June 2024 433