Showing 1 - 10
of 147
for search: '"surprise adequacy"'
Academic article
This result cannot be displayed to unauthenticated users.
Sign in to view this result.
Surprise Adequacy (SA) is one of the emerging and most promising adequacy criteria for Deep Learning (DL) testing. As an adequacy criterion, it has been used to assess the strength of DL test suites. It has also been used to find inputs…
External link:
http://arxiv.org/abs/2103.05939
Reducing DNN Labelling Cost using Surprise Adequacy: An Industrial Case Study for Autonomous Driving
Deep Neural Networks (DNNs) are rapidly being adopted by the automotive industry, due to their impressive performance in tasks that are essential for autonomous driving. Object segmentation is one such task: its aim is to precisely locate boundaries…
External link:
http://arxiv.org/abs/2006.00894
Deep Learning (DL) systems are rapidly being adopted in safety and security critical domains, urgently calling for ways to test their correctness and robustness. Testing of DL systems has traditionally relied on manual collection and labelling of dat…
External link:
http://arxiv.org/abs/1808.08444
Published in:
Applied Sciences, Vol 11, Iss 15, p 6826 (2021)
Facing the increasing quantity of AI model applications, especially in life- and property-related fields, it is crucial for designers to construct safety- and security-critical systems. As a major factor affecting the safety of AI models, corner cas…
External link:
https://doaj.org/article/8881d140ae6443d19136765c71a80940
Published in:
ACM Transactions on Software Engineering and Methodology. 32:1-29
The rapid adoption of Deep Learning (DL) systems in safety critical domains such as medical imaging and autonomous driving urgently calls for ways to test their correctness and robustness. Borrowing from the concept of test adequacy in traditional so…
Published in:
Proceedings of the 1st International Conference on AI Engineering: Software Engineering for AI.