Showing 1 - 10 of 110 for the search: '"Yin, Huilin"'
Semantic segmentation is a significant perception task in autonomous driving, but it suffers from the risk of adversarial examples. In the past few years, deep learning has gradually transitioned from convolutional neural network (CNN) models with a rel…
External link:
http://arxiv.org/abs/2408.09839
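The adversarial-example risk noted in this abstract can be illustrated with a minimal sketch. Below is a generic FGSM-style attack on a segmentation network in PyTorch; the model choice, input tensors, and epsilon budget are illustrative assumptions, not details from the paper.

```python
# Minimal FGSM-style adversarial example for semantic segmentation (PyTorch).
# All inputs (image, label) are hypothetical placeholders.
import torch
import torch.nn.functional as F
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights=None, num_classes=21).eval()

image = torch.rand(1, 3, 256, 256, requires_grad=True)  # placeholder input
label = torch.randint(0, 21, (1, 256, 256))             # placeholder per-pixel labels

logits = model(image)["out"]              # (1, 21, H, W) class scores
loss = F.cross_entropy(logits, label)     # per-pixel classification loss
loss.backward()

epsilon = 8 / 255                         # typical L-infinity budget
adv_image = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
```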
Trajectory prediction is critical for the safe planning and navigation of automated vehicles, yet trajectory prediction models based on neural networks are vulnerable to adversarial attacks. Previous attack methods have achieved high attack succe…
External link:
http://arxiv.org/abs/2404.12612
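As a rough illustration of the attack surface this abstract describes, here is a generic PGD-style perturbation of an observed trajectory. The predictor interface and budget values are hypothetical assumptions, not the attack proposed in the paper.

```python
# Generic PGD-style attack on an observed trajectory (PyTorch sketch).
# predictor, history, and future are hypothetical placeholders.
import torch

def pgd_trajectory_attack(predictor, history, future, eps=0.5, alpha=0.1, steps=10):
    """Perturb the observed trajectory within an L-infinity ball of radius eps
    (in meters) so the predicted future deviates from the ground truth."""
    delta = torch.zeros_like(history, requires_grad=True)
    for _ in range(steps):
        pred = predictor(history + delta)                 # predicted future positions
        loss = -torch.norm(pred - future, dim=-1).mean()  # maximize deviation
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()            # ascend on deviation
            delta.clamp_(-eps, eps)                       # project into budget
        delta.grad.zero_()
    return (history + delta).detach()
```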
With the flourishing development of intelligent warehousing systems, Automated Guided Vehicle (AGV) technology has experienced rapid growth. Within intelligent warehousing environments, an AGV is required to safely and rapidly plan an optimal pat…
External link:
http://arxiv.org/abs/2404.12594
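For context, a classical A* grid search is a common baseline for the AGV path-planning problem mentioned here; the warehouse grid below is an illustrative assumption, not data from the paper.

```python
# Classical A* grid search, a common baseline for AGV path planning.
import heapq

def astar(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = shelf/obstacle. Returns a path or None."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]
    seen = set()
    while open_set:
        _, g, pos, path = heapq.heappop(open_set)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = pos[0] + dr, pos[1] + dc
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0:
                heapq.heappush(open_set, (g + 1 + h((r, c)), g + 1, (r, c), path + [(r, c)]))
    return None

grid = [[0, 0, 1], [1, 0, 1], [0, 0, 0]]  # toy warehouse layout
print(astar(grid, (0, 0), (2, 2)))        # [(0, 0), (0, 1), (1, 1), (2, 1), (2, 2)]
```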
Convolutional Neural Networks (CNNs) have exhibited great performance in discriminative feature learning for complex visual tasks. Besides discriminative power, interpretability is another important yet under-explored property of CNNs. One difficult…
External link:
http://arxiv.org/abs/2312.12068
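One common baseline for the CNN interpretability question raised here is a vanilla gradient saliency map. The sketch below uses a placeholder ResNet and random input, and is not the method proposed in the paper.

```python
# Vanilla gradient saliency map, a common CNN interpretability baseline.
import torch
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder input

logits = model(image)
score = logits[0, logits.argmax()]  # logit of the top-scoring class
score.backward()

# Pixel importance: maximum absolute gradient across color channels.
saliency = image.grad.abs().max(dim=1).values  # shape (1, 224, 224)
```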
Graph databases have grown in popularity in recent years as they can efficiently store and query complex relationships between data. Incidentally, navigation data and road networks can be processed, sampled, or modified efficiently when stored…
External link:
http://arxiv.org/abs/2306.07084
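As a small illustration of storing and querying a road network as a graph, the sketch below uses networkx in place of an actual graph database; the nodes and edge lengths are made up.

```python
# A road network as a property graph, queried for a shortest route.
import networkx as nx

road = nx.DiGraph()
road.add_edge("A", "B", length=120.0)  # edge attribute: segment length in meters
road.add_edge("B", "C", length=80.0)
road.add_edge("A", "C", length=250.0)

# Analogous to a shortest-path query in a graph database.
path = nx.shortest_path(road, "A", "C", weight="length")
print(path)  # ['A', 'B', 'C']
```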
The information bottleneck (IB) method is a feasible defense against adversarial attacks in deep learning. However, this method suffers from spurious correlation, which limits further improvement of its adversarial ro…
External link:
http://arxiv.org/abs/2210.14229
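The IB principle trains an encoder to compress the input while staying predictive of the label. A common realization is the variational information bottleneck loss sketched below; this is a generic formulation, not necessarily the one used in the paper.

```python
# Variational information bottleneck (VIB) loss, a generic sketch.
import torch
import torch.nn.functional as F

def vib_loss(mu, logvar, classifier, target, beta=1e-3):
    # Reparameterized sample from the Gaussian posterior q(z|x).
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    logits = classifier(z)
    # Prediction term: keep z informative about the label y.
    ce = F.cross_entropy(logits, target)
    # Compression term: KL(q(z|x) || N(0, I)) upper-bounds I(X; Z).
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1 - logvar).sum(dim=1).mean()
    return ce + beta * kl
```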
Author:
Yan, Jun, Yin, Huilin, Deng, Xiaoyang, Zhao, Ziming, Ge, Wancheng, Zhang, Hao, Rigoll, Gerhard
Adversarial training methods are state-of-the-art (SOTA) empirical defenses against adversarial examples. Many regularization methods have been proven effective in combination with adversarial training. Nevertheless, such regularizat…
External link:
http://arxiv.org/abs/2206.03727
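For reference, a standard PGD adversarial training step (in the style of Madry et al.) looks as follows; this is a generic sketch and does not include the regularization studied in the paper.

```python
# One step of standard PGD adversarial training (generic sketch).
import torch
import torch.nn.functional as F

def adv_train_step(model, optimizer, x, y, eps=8/255, alpha=2/255, steps=7):
    # Inner maximization: craft a PGD adversary within the L-infinity ball.
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    # Outer minimization: update the model on the adversarial batch.
    optimizer.zero_grad()
    F.cross_entropy(model((x + delta).clamp(0, 1)), y).backward()
    optimizer.step()
```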
Author:
Xu, Zhengzheng, Yin, Huilin, Jiang, Yue, Jiang, Zhonghao, Liu, Yan, Yang, Chun Cheng, Wang, Guoyong
Published in:
Journal of Materials Research and Technology, November-December 2024, 33:7586-7595
Published in:
Autonomous Intelligent Systems, 2(1) (2022)
Autonomous driving has attracted significant research interest in the past two decades, as it offers many potential benefits, including releasing drivers from exhausting driving and mitigating traffic congestion. Despite promising progr…
External link:
http://arxiv.org/abs/2111.06318
Deep Neural Networks (DNNs) are vulnerable to adversarial examples, which inveigle neural networks into making prediction errors with small perturbations on the input images. Researchers have been devoted to promoting research on universal a…
External link:
http://arxiv.org/abs/2108.04409
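A simplified view of a universal adversarial perturbation: a single shared delta optimized to fool the model on many images at once. The sketch below is generic, not the paper's algorithm, and all names are placeholders.

```python
# Simplified universal adversarial perturbation (generic sketch).
# model and loader are hypothetical placeholders.
import torch
import torch.nn.functional as F

def universal_perturbation(model, loader, eps=10/255, lr=0.01, epochs=5):
    delta = torch.zeros(1, 3, 224, 224, requires_grad=True)  # shared across images
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            loss = -F.cross_entropy(model(x + delta), y)  # push predictions wrong
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)  # keep the perturbation imperceptible
    return delta.detach()
```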