Showing 1 - 10 of 317 for search: '"Chen Hongrui"'
Author:
Chen, Hongrui, Joglekar, Aditya, Rubinstein, Zack, Schmerl, Bradley, Fedder, Gary, de Nijs, Jan, Garlan, David, Smith, Stephen, Kara, Levent Burak
Advances in CAD and CAM have enabled engineers and design teams to digitally design parts with unprecedented ease. Software solutions now come with a range of modules for optimizing designs for performance requirements, generating instructions for manufacturing…
External link:
http://arxiv.org/abs/2409.03089
Author:
Wu, Baoyuan, Chen, Hongrui, Zhang, Mingda, Zhu, Zihao, Wei, Shaokui, Yuan, Danni, Zhu, Mingli, Wang, Ruotong, Liu, Li, Shen, Chao
As an emerging approach to exploring the vulnerability of deep neural networks (DNNs), backdoor learning has attracted increasing interest in recent years, and many seminal backdoor attack and defense algorithms are being developed successively or concurrently…
External link:
http://arxiv.org/abs/2407.19845
We study a qDRIFT-type randomized method to simulate Lindblad dynamics by decomposing its generator into an ensemble of Lindbladians, $\mathcal{L} = \sum_{a \in \mathcal{A}} \mathcal{L}_a$, where each $\mathcal{L}_a$ involves only a single jump operator…
External link:
http://arxiv.org/abs/2407.06594
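The snippet cuts off mid-sentence, but the decomposition $\mathcal{L} = \sum_a \mathcal{L}_a$ it states is the starting point of any qDRIFT-type scheme: sample a term with probability proportional to its norm and exponentiate it for a rescaled time step. The sketch below illustrates that generic sampling rule on toy matrices standing in for the superoperators; the generators, dimensions, and rates are hypothetical, not taken from the paper.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Toy stand-ins for the pieces L_a of the generator (hypothetical dense
# matrices; the paper's L_a are single-jump-operator Lindbladians).
d = 4
Ls = [0.1 * rng.standard_normal((d, d)) for _ in range(3)]

# qDRIFT importance weights: pick term a with probability p_a ~ ||L_a||.
weights = np.array([np.linalg.norm(L, 2) for L in Ls])
lam = weights.sum()
probs = weights / lam

t, N = 1.0, 500
exact = expm(t * sum(Ls))  # target evolution e^{tL}, L = sum_a L_a

# One qDRIFT step applies e^{(lam*t/N) L_a/||L_a||} with probability p_a;
# the induced channel is the probability mixture of these exponentials.
step = sum(p * expm((lam * t / N) * L / w)
           for p, w, L in zip(probs, weights, Ls))
approx = np.linalg.matrix_power(step, N)

err = np.linalg.norm(approx - exact, 2)  # bias shrinks as O((lam*t)^2 / N)
```

The mixture channel converges to the exact evolution with a bias of order $(\lambda t)^2/N$, which is why increasing the number of random steps $N$ tightens the approximation.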
A long-standing challenge is designing multi-scale structures with good connectivity between cells while optimizing each cell to reach close to the theoretical performance limit. We propose a new method for direct multi-scale topology optimization using…
External link:
http://arxiv.org/abs/2404.08708
Author:
Chen, Hongrui, Ying, Lexing
Diffusion models have achieved huge empirical success in data generation tasks. Recently, some efforts have been made to adapt the framework of diffusion models to discrete state spaces, providing a more natural approach for modeling intrinsically discrete…
External link:
http://arxiv.org/abs/2402.08095
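The snippet only mentions adapting diffusion to discrete state spaces; a common forward-noising construction in that setting is a continuous-time Markov chain that gradually forgets the data. The sketch below shows the uniform-mixing variant as a generic illustration, with hypothetical state count and noise rate; it is not claimed to be the paper's exact model.

```python
import numpy as np

S = 5        # size of the discrete state space (hypothetical)
beta = 0.8   # noise rate (hypothetical)

def q_t(t):
    """Marginal transition matrix P(x_t = j | x_0 = i) of a uniform-mixing
    chain: keep the state with prob exp(-beta*t), else resample uniformly."""
    keep = np.exp(-beta * t)
    return keep * np.eye(S) + (1.0 - keep) * np.ones((S, S)) / S

Q = q_t(0.5)
row_sums = Q.sum(axis=1)  # each row is a valid probability distribution
```

As $t$ grows, `q_t(t)` approaches the uniform matrix, so the chain converges to a tractable stationary distribution from which reverse-time generation can start.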
Adversarial examples are well-known tools for evaluating the vulnerability of deep neural networks (DNNs). Although many adversarial attack algorithms have been developed, it is still challenging in the practical scenario where the model's parameters…
External link:
http://arxiv.org/abs/2312.16979
Author:
Wu, Baoyuan, Wei, Shaokui, Zhu, Mingli, Zheng, Meixi, Zhu, Zihao, Zhang, Mingda, Chen, Hongrui, Yuan, Danni, Liu, Li, Liu, Qingshan
The adversarial phenomenon has been widely observed in machine learning (ML) systems, especially those using deep neural networks: in particular cases, ML systems may produce predictions that are inconsistent with, and incomprehensible to, humans…
External link:
http://arxiv.org/abs/2312.08890
Author:
Liu, Haolin, Gobert, Christian, Ferguson, Kevin, Abranovic, Brandon, Chen, Hongrui, Beuth, Jack L., Rollett, Anthony D., Kara, Levent Burak
With a growing demand for high-quality fabrication, interest in real-time process and defect monitoring of laser powder bed fusion (LPBF) has increased, leading manufacturers to incorporate a variety of online sensing methods including acoustic sensing…
External link:
http://arxiv.org/abs/2310.05289
In this work, we analyze the learnability of reproducing kernel Hilbert spaces (RKHS) under the $L^\infty$ norm, which is critical for understanding the performance of kernel methods and random feature models in safety- and security-critical applications…
External link:
http://arxiv.org/abs/2306.02833
Deep neural networks (DNNs) can be manipulated to exhibit specific behaviors when exposed to specific trigger patterns, without affecting their performance on benign samples; this is dubbed a "backdoor attack". Currently, implementing backdoor attacks…
External link:
http://arxiv.org/abs/2306.00816