Showing 1 - 10 of 75 for search: '"Amir, Guy"'
The ability to interpret Machine Learning (ML) models is becoming increasingly essential. However, despite significant progress in the field, there remains a lack of rigorous characterization regarding the innate interpretability of different models.
External link:
http://arxiv.org/abs/2408.03915
Author:
Mandal, Udayan, Amir, Guy, Wu, Haoze, Daukantas, Ieva, Newell, Fletcher Lee, Ravaioli, Umberto, Meng, Baoluo, Durling, Michael, Hobbs, Kerianne, Ganai, Milan, Shim, Tobey, Katz, Guy, Barrett, Clark
In recent years, deep reinforcement learning (DRL) approaches have generated highly successful controllers for a myriad of complex domains. However, the opaque nature of these models limits their applicability in aerospace systems and safety-critical…
External link:
http://arxiv.org/abs/2407.07088
In recent years, Deep Reinforcement Learning (DRL) has emerged as an effective approach to solving real-world tasks. However, despite their successes, DRL-based policies suffer from poor reliability, which limits their deployment in safety-critical…
External link:
http://arxiv.org/abs/2406.06507
In recent years, Machine Learning (ML) models have achieved remarkable success in various domains. However, these models also tend to demonstrate unsafe behaviors, precluding their deployment in safety-critical systems. To cope with this issue, ample…
External link:
http://arxiv.org/abs/2406.04184
The local and global interpretability of various ML models has been studied extensively in recent years. However, despite significant progress in the field, many known results remain informal or lack sufficient mathematical rigor. We propose a framework…
External link:
http://arxiv.org/abs/2406.02981
Deep neural networks (DNNs) play a crucial role in the field of machine learning, demonstrating state-of-the-art performance across various application domains. However, despite their success, DNN-based models may occasionally exhibit challenges with…
External link:
http://arxiv.org/abs/2406.02024
Author:
Mandal, Udayan, Amir, Guy, Wu, Haoze, Daukantas, Ieva, Newell, Fletcher Lee, Ravaioli, Umberto J., Meng, Baoluo, Durling, Michael, Ganai, Milan, Shim, Tobey, Katz, Guy, Barrett, Clark
Deep reinforcement learning (DRL) is a powerful machine learning paradigm for generating agents that control autonomous systems. However, the "black box" nature of DRL agents limits their deployment in real-world safety-critical applications…
External link:
http://arxiv.org/abs/2405.14058
In recent years, Deep Reinforcement Learning (DRL) has become a popular paradigm in machine learning due to its successful applications to real-world and complex systems. However, even the state-of-the-art DRL models have been shown to suffer from…
External link:
http://arxiv.org/abs/2402.05284
Author:
Wu, Haoze, Isac, Omri, Zeljić, Aleksandar, Tagomori, Teruhiro, Daggitt, Matthew, Kokke, Wen, Refaeli, Idan, Amir, Guy, Julian, Kyle, Bassan, Shahaf, Huang, Pei, Lahav, Ori, Wu, Min, Zhang, Min, Komendantskaya, Ekaterina, Katz, Guy, Barrett, Clark
This paper serves as a comprehensive system description of version 2.0 of the Marabou framework for formal analysis of neural networks. We discuss the tool's architectural design and highlight the major features and components introduced since its inception…
External link:
http://arxiv.org/abs/2401.14461
Deep neural networks (DNNs) are increasingly being used as controllers in reactive systems. However, DNNs are highly opaque, which renders it difficult to explain and justify their actions. To mitigate this issue, there has been a surge of interest…
External link:
http://arxiv.org/abs/2308.00143