Showing 1 - 9 of 9 for the search: '"Isac, Omri"'
Author:
Desmartin, Remi, Isac, Omri, Komendantskaya, Ekaterina, Stark, Kathrin, Passmore, Grant, Katz, Guy
Recent advances in the verification of deep neural networks (DNNs) have opened the way for broader usage of DNN verification technology in many application areas, including safety-critical ones. DNN verifiers are themselves complex programs that have…
External link:
http://arxiv.org/abs/2405.10611
Author:
Casadio, Marco, Dinkar, Tanvi, Komendantskaya, Ekaterina, Arnaboldi, Luca, Daggitt, Matthew L., Isac, Omri, Katz, Guy, Rieser, Verena, Lemon, Oliver
Deep neural networks have exhibited substantial success in the field of Natural Language Processing, and ensuring their safety and reliability is crucial: there are safety-critical contexts where such models must be robust to variability or attack…
External link:
http://arxiv.org/abs/2403.10144
Author:
Wu, Haoze, Isac, Omri, Zeljić, Aleksandar, Tagomori, Teruhiro, Daggitt, Matthew, Kokke, Wen, Refaeli, Idan, Amir, Guy, Julian, Kyle, Bassan, Shahaf, Huang, Pei, Lahav, Ori, Wu, Min, Zhang, Min, Komendantskaya, Ekaterina, Katz, Guy, Barrett, Clark
This paper serves as a comprehensive system description of version 2.0 of the Marabou framework for formal analysis of neural networks. We discuss the tool's architectural design and highlight the major features and components introduced since its initial release… (see the usage sketch after this entry)
External link:
http://arxiv.org/abs/2401.14461
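For readers unfamiliar with the tool, a query through Marabou's Python bindings (Maraboupy) might look roughly like the sketch below. This is a minimal illustration, not taken from the paper: the model file name, the input bounds, and the output property are all hypothetical, and the solve() return convention shown is that of recent Maraboupy releases.

from maraboupy import Marabou

# Load a network exported to ONNX (hypothetical file name).
network = Marabou.read_onnx("model.onnx")

# Maraboupy exposes inputs/outputs as arrays of solver variables.
input_vars = network.inputVars[0].flatten()
output_vars = network.outputVars[0].flatten()

# Constrain every input to the unit interval (illustrative bounds).
for v in input_vars:
    network.setLowerBound(v, 0.0)
    network.setUpperBound(v, 1.0)

# Property to check: can the first output reach at least 10.0
# anywhere in that input region?
network.setLowerBound(output_vars[0], 10.0)

# In recent Maraboupy releases, solve() returns (exitCode, vals, stats).
exit_code, vals, stats = network.solve()
print(exit_code)  # "sat" (counterexample assignment in vals) or "unsat"

Here an "unsat" result proves the first output stays below 10.0 over the whole input box, while "sat" yields a concrete counterexample assignment.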
Author:
Elboher, Yizhak, Elsaleh, Raya, Isac, Omri, Ducoffe, Mélanie, Galametz, Audrey, Povéda, Guillaume, Boumazouza, Ryma, Cohen, Noémie, Katz, Guy
As deep neural networks (DNNs) are becoming the prominent solution for many computational problems, the aviation industry seeks to explore their potential in alleviating pilot workload and in improving operational safety. However, the use of DNNs in…
External link:
http://arxiv.org/abs/2402.00035
Author:
Desmartin, Remi, Isac, Omri, Passmore, Grant, Stark, Kathrin, Katz, Guy, Komendantskaya, Ekaterina
Recent developments in deep neural networks (DNNs) have led to their adoption in safety-critical systems, which in turn has heightened the need for guaranteeing their safety. These safety properties of DNNs can be proven using tools developed by the…
External link:
http://arxiv.org/abs/2307.06299
Deep neural networks (DNNs) are increasingly being deployed to perform safety-critical tasks. The opacity of DNNs, which prevents humans from reasoning about them, presents new safety and security challenges. To address these challenges, the verification…
External link:
http://arxiv.org/abs/2305.06064
Author:
Casadio, Marco, Arnaboldi, Luca, Daggitt, Matthew L., Isac, Omri, Dinkar, Tanvi, Kienitz, Daniel, Rieser, Verena, Komendantskaya, Ekaterina
Verification of machine learning models used in Natural Language Processing (NLP) is known to be a hard problem. In particular, many known neural network verification methods that work for computer vision and other numeric datasets do not work for NLP…
External link:
http://arxiv.org/abs/2305.04003
Deep neural networks (DNNs) are increasingly being employed in safety-critical systems, and there is an urgent need to guarantee their correctness. Consequently, the verification community has devised multiple techniques and tools for verifying DNNs.
External link:
http://arxiv.org/abs/2206.00512