Showing 1 - 10 of 152 for search: '"Tran Hoang, Dung"'
This paper proposes a transition system abstraction framework for neural network dynamical system models to enhance the model interpretability, with applications to complex dynamical systems such as human behavior learning and verification. To begin…
External link:
http://arxiv.org/abs/2402.11739
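To make the abstraction idea concrete: a transition-system abstraction partitions the continuous state space into cells and connects two cells whenever the learned dynamics can move the system from one to the other. The sketch below is a minimal illustration of that construction, assuming a hypothetical 1-D toy dynamics `f` in place of a trained neural-network model, and using sampling rather than the sound set-based reachability a verification tool would need.

```python
import numpy as np

def f(x):
    # Hypothetical learned dynamics x_{k+1} = f(x_k); a real model is a trained DNN.
    return 0.9 * x + 0.1 * np.sin(x)

def abstract_transitions(lo, hi, n_cells, samples_per_cell=50):
    """Partition [lo, hi] into n_cells intervals; connect cell i -> cell j when
    some sampled point of cell i is mapped into cell j by f."""
    edges = np.linspace(lo, hi, n_cells + 1)
    transitions = set()
    rng = np.random.default_rng(0)
    for i in range(n_cells):
        xs = rng.uniform(edges[i], edges[i + 1], samples_per_cell)
        # Locate the cell of each image point; clip values falling outside the grid.
        js = np.clip(np.searchsorted(edges, f(xs), side="right") - 1, 0, n_cells - 1)
        transitions.update((i, int(j)) for j in np.unique(js))
    return transitions

print(sorted(abstract_transitions(-2.0, 2.0, n_cells=8)))
```

The resulting edge set is a finite transition system that standard model checkers can analyze; a sound abstraction would replace the per-cell sampling with an over-approximate image of each cell.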
Author:
Bak, Stanley, Tran, Hoang-Dung
ACAS Xu is an air-to-air collision avoidance system designed for unmanned aircraft that issues horizontal turn advisories to avoid an intruder aircraft. Due to the use of a large lookup table in the design, a neural network compression of the policy was…
External link:
http://arxiv.org/abs/2201.06626
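The compression step the snippet refers to replaces the stored advisory table with a small network trained to reproduce it. Below is a minimal sketch of that idea, assuming a hypothetical 2-D toy table and a scikit-learn regressor; the real ACAS Xu tables are 7-dimensional and far larger, and the cited paper's concern is verifying the compressed policy in closed loop, not the compression itself.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical 2-D lookup table: a policy score stored on a coarse grid.
ranges = np.linspace(0.0, 1.0, 20)
angles = np.linspace(-np.pi, np.pi, 20)
R, A = np.meshgrid(ranges, angles)
scores = np.sin(3.0 * R) * np.cos(A)          # stand-in for stored advisories

X = np.column_stack([R.ravel(), A.ravel()])   # table inputs
y = scores.ravel()                            # table outputs

# A small MLP replaces the table: far fewer parameters than stored entries.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net.fit(X, y)
print("max abs error vs. table:", np.abs(net.predict(X) - y).max())
```

The trade-off the paper examines is exactly the residual error visible here: a compressed policy that deviates from the table may issue different advisories, so its safety must be re-established on the closed-loop system.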
Author:
Yang, Xiaodong, Yamaguchi, Tom, Tran, Hoang-Dung, Hoxha, Bardh, Johnson, Taylor T., Prokhorov, Danil
Safety is a critical concern for the next generation of autonomy that is likely to rely heavily on deep neural networks for perception and control. Formally verifying the safety and robustness of well-trained DNNs and learning-enabled systems under a…
External link:
http://arxiv.org/abs/2108.04214
Author:
Yang, Xiaodong, Yamaguchi, Tomoya, Tran, Hoang-Dung, Hoxha, Bardh, Johnson, Taylor T., Prokhorov, Danil
Deep convolutional neural networks have been widely employed as an effective technique to handle complex and practical problems. However, one of the fundamental problems is the lack of formal methods to analyze their behavior. To address this challenge…
External link:
http://arxiv.org/abs/2106.12074
The vulnerability of artificial intelligence (AI) and machine learning (ML) against adversarial disturbances and attacks significantly restricts their applicability in safety-critical systems including cyber-physical systems (CPS) equipped with neural…
External link:
http://arxiv.org/abs/2004.12273
Convolutional Neural Networks (CNN) have redefined the state-of-the-art in many real-world applications, such as facial recognition, image classification, human pose estimation, and semantic segmentation. Despite their success, CNNs are vulnerable to…
External link:
http://arxiv.org/abs/2004.05511
Author:
Tran, Hoang-Dung, Yang, Xiaodong, Lopez, Diego Manzanas, Musau, Patrick, Nguyen, Luan Viet, Xiang, Weiming, Bak, Stanley, Johnson, Taylor T.
This paper presents the Neural Network Verification (NNV) software tool, a set-based verification framework for deep neural networks (DNNs) and learning-enabled cyber-physical systems (CPS). The crux of NNV is a collection of reachability algorithms…
External link:
http://arxiv.org/abs/2004.05519
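NNV itself is a MATLAB toolbox, and the snippet above is cut off before naming its set representations; as a language-neutral illustration of what a set-based reachability algorithm does, here is a minimal interval-arithmetic sketch that pushes an input box through a small ReLU network. The weights are randomly generated stand-ins, and interval boxes are only the coarsest of the representations such tools support.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Exact box image of an affine map: split W into its +/- parts."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def reach_box(layers, lo, hi):
    """Over-approximate the output set of a fully ReLU-activated network on a box."""
    for W, b in layers:
        lo, hi = interval_affine(lo, hi, W, b)
        lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)   # ReLU is monotone
    return lo, hi

rng = np.random.default_rng(0)
layers = [(rng.standard_normal((4, 2)), rng.standard_normal(4)),
          (rng.standard_normal((1, 4)), rng.standard_normal(1))]
print(reach_box(layers, np.array([-0.1, -0.1]), np.array([0.1, 0.1])))
```

Each affine layer is handled exactly on boxes by splitting the weight matrix into positive and negative parts; the over-approximation comes from ReLU and from ignoring correlations between dimensions, which richer representations such as star sets tighten.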
Deep neural networks have been widely applied as an effective approach to handle complex and practical problems. However, one of the most fundamental open problems is the lack of formal methods to analyze the safety of their behaviors. To address this…
External link:
http://arxiv.org/abs/2003.01226
Safety-critical distributed cyber-physical systems (CPSs) have been found in a wide range of applications. Notably, they have displayed a great deal of utility in intelligent transportation, where autonomous vehicles communicate and cooperate with each other…
External link:
http://arxiv.org/abs/1909.09087
This paper presents a specification-guided safety verification method for feedforward neural networks with general activation functions. As such feedforward networks are memoryless, they can be abstractly represented as mathematical functions…
External link:
http://arxiv.org/abs/1812.06161
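Since a memoryless network is just a function, an output specification can be checked by bounding that function over the input set and refining wherever the bounds are inconclusive. The sketch below illustrates this with a hypothetical ReLU network, interval bounds, and bisection along the widest input dimension; the property y[0] <= threshold, the splitting rule, and the depth cut-off are all illustrative assumptions, not the cited method.

```python
import numpy as np

def bounds(layers, lo, hi):
    """Interval output bounds of a network with ReLU on hidden layers only."""
    for i, (W, b) in enumerate(layers):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        if i < len(layers) - 1:
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

def run(layers, x):
    """Concrete forward pass, matching `bounds`."""
    for i, (W, b) in enumerate(layers):
        x = W @ x + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)
    return x

def verify(layers, lo, hi, threshold, depth=12):
    """True if y[0] <= threshold is proved on the box; False if a concrete
    violation is found or the refinement budget runs out (inconclusive)."""
    if bounds(layers, lo, hi)[1][0] <= threshold:
        return True
    mid = (lo + hi) / 2.0
    if run(layers, mid)[0] > threshold or depth == 0:
        return False
    d = int(np.argmax(hi - lo))              # split the widest dimension
    left_hi, right_lo = hi.copy(), lo.copy()
    left_hi[d] = right_lo[d] = mid[d]
    return (verify(layers, lo, left_hi, threshold, depth - 1)
            and verify(layers, right_lo, hi, threshold, depth - 1))

rng = np.random.default_rng(1)
layers = [(rng.standard_normal((8, 2)), rng.standard_normal(8)),
          (rng.standard_normal((1, 8)), rng.standard_normal(1))]
print(verify(layers, np.array([-0.5, -0.5]), np.array([0.5, 0.5]), threshold=4.0))
```

The "specification-guided" part is the refinement loop: the input region is only split where the bounds fail to decide the property, so effort concentrates on the parts of the input set that matter for the specification.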