Showing 1 - 10 of 226 for search: '"GOPINATH, DIVYA"'
As deep neural networks (DNNs) are increasingly used in safety-critical applications, there is growing concern for their reliability. Even highly trained, high-performing networks are not 100% accurate. However, it is very difficult to predict their …
External link:
http://arxiv.org/abs/2407.08730
Author:
Mangal, Ravi, Narodytska, Nina, Gopinath, Divya, Hu, Boyue Caroline, Roy, Anirban, Jha, Susmit, Pasareanu, Corina
The analysis of vision-based deep neural networks (DNNs) is highly desirable, but it is very challenging due to the difficulty of expressing formal specifications for vision tasks and the lack of efficient verification procedures. In this paper, we …
External link:
http://arxiv.org/abs/2403.19837
Providing safety guarantees for autonomous systems is difficult, as these systems operate in complex environments that require the use of learning-enabled components, such as deep neural networks (DNNs) for visual perception. DNNs are hard to analyze …
External link:
http://arxiv.org/abs/2305.18372
Author:
Pasareanu, Corina S., Mangal, Ravi, Gopinath, Divya, Yaman, Sinem Getir, Imrie, Calum, Calinescu, Radu, Yu, Huafeng
Deep neural networks (DNNs) are increasingly used in safety-critical autonomous systems as perception components processing high-dimensional image data. Formal analysis of these systems is particularly challenging due to the complexity of the perception …
External link:
http://arxiv.org/abs/2302.04634
Author:
Usman, Muhammad, Sun, Youcheng, Gopinath, Divya, Dange, Rishi, Manolache, Luca, Pasareanu, Corina S.
Deep neural network (DNN) models, including those used in safety-critical domains, need to be thoroughly tested to ensure that they can reliably perform well in different scenarios. In this article, we provide an overview of structural coverage metrics …
External link:
http://arxiv.org/abs/2208.03407
Neural networks are successfully used in a variety of applications, many of them having safety and security concerns. As a result, researchers have proposed formal verification techniques for verifying neural network properties. While previous efforts …
External link:
http://arxiv.org/abs/2205.03894
MRE11 as a plausible biomarker and prognostic bioindicator for head and neck squamous cell carcinoma
Published in:
In Journal of Stomatology, Oral and Maxillofacial Surgery, October 2024, 125(5), Supplement 2
We study backdoor poisoning attacks against image classification networks, whereby an attacker inserts a trigger into a subset of the training data, in such a way that at test time, this trigger causes the classifier to predict some target class. …
External link:
http://arxiv.org/abs/2202.01179
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020
Cataloged from student-submitted PDF of thesis.
Includes bibliographical references (pages 163-175).
External link:
https://hdl.handle.net/1721.1/129149
Published in:
EPTCS 348, 2021, pp. 92-100
The efficacy of machine learning models is typically determined by computing their accuracy on test data sets. However, this may often be misleading, since the test data may not be representative of the problem that is being studied. With QuantifyML …
External link:
http://arxiv.org/abs/2110.12588