Showing 1 - 10 of 117 for search: '"PASTORE, FABRIZIO"'
In safety-critical systems (e.g., autonomous vehicles and robots), Deep Neural Networks (DNNs) are becoming a key component for computer vision tasks, particularly semantic segmentation. Further, since the DNN behavior cannot be assessed through code…
External link: http://arxiv.org/abs/2406.13359
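For context on the task this entry names: semantic segmentation assigns a class label to every pixel of an image. A minimal NumPy sketch, unrelated to the paper's approach, turning per-pixel class scores into a label map:

```python
import numpy as np

CLASSES = ["road", "vehicle", "pedestrian"]

rng = np.random.default_rng(42)
logits = rng.random((len(CLASSES), 4, 4))  # class scores per pixel: (C, H, W)
label_map = logits.argmax(axis=0)          # per-pixel predicted class index

print(label_map)                                    # 4x4 grid of class indices
print("top-left pixel:", CLASSES[label_map[0, 0]])  # its predicted class name
```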
Mutation testing consists of generating test cases that detect faults injected into software (thus generating mutants) which its original test suite could not detect. By running such an augmented set of test cases, it may discover actual faults that may have gone…
External link: http://arxiv.org/abs/2406.02398
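To make the technique concrete, a minimal sketch of mutation testing in Python (illustrative only, not the paper's tooling): a mutant is a copy of the program with one small injected fault, and a test suite is adequate for that mutant only if some test fails on it.

```python
def max_of(a, b):          # original program
    return a if a > b else b

def max_of_mutant(a, b):   # mutant: relational operator '>' replaced by '<'
    return a if a < b else b

def test_suite(fn):
    """Return True if all assertions pass for the given implementation."""
    try:
        assert fn(3, 1) == 3
        assert fn(1, 3) == 3   # this case "kills" the mutant above
        return True
    except AssertionError:
        return False

if __name__ == "__main__":
    assert test_suite(max_of)             # original passes
    assert not test_suite(max_of_mutant)  # mutant is detected (killed)
    print("mutant killed: the test suite detects the injected fault")
```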
Although the security testing of Web systems can be automated by generating crafted inputs, solutions to automate the test oracle, i.e., vulnerability detection, remain difficult to apply in practice. Specifically, though previous work has demonstrated…
External link: http://arxiv.org/abs/2402.10773
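As an illustration of what an automated test oracle looks like in this setting, a minimal sketch assuming a hypothetical render_page function standing in for the Web system under test; the oracle flags a potential reflected XSS when a crafted payload reappears unescaped in the output:

```python
import html

PAYLOAD = "<script>alert(1)</script>"

def render_page(params):
    # Hypothetical system under test: echoes the query back (vulnerable).
    return f"<p>You searched for: {params['q']}</p>"

def xss_oracle(response_html, payload=PAYLOAD):
    """Return True (potentially vulnerable) if the payload is reflected
    verbatim, i.e., without the HTML escaping that would neutralize it."""
    return payload in response_html and html.escape(payload) not in response_html

if __name__ == "__main__":
    page = render_page({"q": PAYLOAD})
    print("potential XSS detected:", xss_oracle(page))  # True here
```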
Metamorphic testing (MT) has proven to be a successful solution to automating testing and addressing the oracle problem. However, it entails manually deriving metamorphic relations (MRs) and converting them into an executable form; these steps are time-consuming…
External link: http://arxiv.org/abs/2401.17019
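As an example of an MR in executable form (independent of the paper's approach): for a sort routine, permuting the input must not change the output, so the relation can be checked without an expected-output oracle.

```python
import random

def mr_permutation_invariance(sort_fn, source_input):
    """Check the MR: sorting a permuted copy of the input (the follow-up
    input) must yield the same output as sorting the original."""
    follow_up = source_input[:]
    random.shuffle(follow_up)                           # metamorphic transformation
    return sort_fn(source_input) == sort_fn(follow_up)  # relation to verify

if __name__ == "__main__":
    data = [5, 3, 9, 1, 3]
    assert mr_permutation_invariance(sorted, data)
    print("MR holds: sorting is invariant to input permutation")
```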
Mutation testing can help reduce the risks of releasing faulty software. For that reason, it is a desired practice for the development of embedded software running in safety-critical cyber-physical systems (CPS). Unfortunately, state-of-the-art test…
External link: http://arxiv.org/abs/2308.07949
Although App updates are frequent and software engineers would like to verify updated features only, automated testing techniques verify entire Apps and thus waste resources. We present Continuous Adaptation of Learned Models (CALM), an automated…
External link: http://arxiv.org/abs/2308.05549
The adoption of deep neural networks (DNNs) in safety-critical contexts is often prevented by the lack of effective means to explain their results, especially when they are erroneous. In our previous work, we proposed a white-box approach (HUDD) and…
External link: http://arxiv.org/abs/2301.13506
We present HUDD, a tool that supports safety analysis practices for systems enabled by Deep Neural Networks (DNNs) by automatically identifying the root causes for DNN errors and retraining the DNN. HUDD stands for Heatmap-based Unsupervised Debugging of DNNs…
External link: http://arxiv.org/abs/2210.08356
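A minimal sketch of the idea suggested by the tool's name, with randomly generated stand-ins for the relevance heatmaps (heatmap computation and cluster-driven retraining are out of scope here): error-inducing inputs are grouped by heatmap similarity, so that each cluster hints at one root cause.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)
# Stand-in heatmaps (8x8 relevance maps) for 6 failing images:
# three share one error pattern, three share another.
pattern_a, pattern_b = rng.random((8, 8)), rng.random((8, 8))
heatmaps = [pattern_a + 0.05 * rng.random((8, 8)) for _ in range(3)] + \
           [pattern_b + 0.05 * rng.random((8, 8)) for _ in range(3)]

vectors = np.stack([h.ravel() for h in heatmaps])    # one vector per image
tree = linkage(vectors, method="average", metric="euclidean")
labels = fcluster(tree, t=2, criterion="maxclust")   # ask for 2 clusters
print("cluster per failing image:", labels)          # e.g. [1 1 1 2 2 2]
```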
Security testing aims at verifying that the software meets its security properties. In modern Web systems, however, this often entails the verification of the outputs generated when exercising the system with a very large set of inputs. Full automation…
External link: http://arxiv.org/abs/2208.09505
When Deep Neural Networks (DNNs) are used in safety-critical systems, engineers should determine the safety risks associated with failures (i.e., erroneous outputs) observed during testing. For DNNs processing images, engineers visually inspect all failure-inducing…
External link: http://arxiv.org/abs/2204.00480
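A hedged sketch, not the paper's method, of how grouping similar failures can cut the manual inspection effort described above: engineers inspect one representative (medoid) image per cluster instead of every failure-inducing image. The feature vectors and cluster labels below are stand-ins for image embeddings and a prior clustering step.

```python
import numpy as np

def medoid_index(vectors):
    """Index of the element minimizing total distance to all others."""
    dists = np.linalg.norm(vectors[:, None, :] - vectors[None, :, :], axis=-1)
    return int(dists.sum(axis=1).argmin())

rng = np.random.default_rng(1)
features = rng.random((10, 16))                     # 10 failing images, 16-d each
labels = np.array([0, 0, 0, 0, 1, 1, 1, 2, 2, 2])   # assumed cluster assignment

for c in np.unique(labels):
    members = np.where(labels == c)[0]
    rep = members[medoid_index(features[members])]
    print(f"cluster {c}: inspect image #{rep} (of {len(members)} failures)")
```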