Showing 1 - 10 of 38 results for search: '"Manino, Edoardo"'
Training large language models (LLMs) requires a substantial investment of time and money. To get a good return on investment, the developers spend considerable effort ensuring that the model never produces harmful and offensive outputs. However, bad…
External link:
http://arxiv.org/abs/2407.11059
Author:
Menezes, Rafael Sá, Manino, Edoardo, Shmarov, Fedor, Aldughaim, Mohannad, de Freitas, Rosiane, Cordeiro, Lucas C.
Bounded Model Checking (BMC) is a widely used software verification technique. Despite its successes, the technique has several limiting factors, from state-space explosion to lack of completeness. Over the years, interval analysis has repeatedly been…
External link:
http://arxiv.org/abs/2406.15281
Realm Management Monitor (RMM) is an essential firmware component within the recent Arm Confidential Computing Architecture (Arm CCA). Previous work applies formal techniques to verify the specification and prototype reference implementation of RMM.
External link:
http://arxiv.org/abs/2406.04375
The next generation of AI systems requires strong safety guarantees. This report looks at the software implementation of neural networks and related memory safety properties, including NULL pointer dereference, out-of-bounds access, double-free, and memory…
External link:
http://arxiv.org/abs/2405.08848
Author:
Menezes, Rafael, Aldughaim, Mohannad, Farias, Bruno, Li, Xianzhiyu, Manino, Edoardo, Shmarov, Fedor, Song, Kunjian, Brauße, Franz, Gadelha, Mikhail R., Tihanyi, Norbert, Korovin, Konstantin, Cordeiro, Lucas C.
ESBMC implements many state-of-the-art techniques for model checking. We report on new and improved features that allow us to obtain verification results for previously unsupported programs and properties. ESBMC employs a new static interval analysis…
External link:
http://arxiv.org/abs/2312.14746
Safety-critical systems with neural network components require strong guarantees. While existing neural network verification techniques have shown great progress towards this goal, they cannot prove the absence of software faults in the network implementation…
External link:
http://arxiv.org/abs/2309.03617
We describe and evaluate LF-checker, a metaverifier tool based on machine learning. It extracts multiple features of the program under test and predicts the optimal configuration (flags) of a bounded model checker with a decision tree. Our current work…
External link:
http://arxiv.org/abs/2301.09142
Neural networks are a powerful class of non-linear functions. However, their black-box nature makes it difficult to explain their behaviour and certify their safety. Abstraction techniques address this challenge by transforming the neural network into…
External link:
http://arxiv.org/abs/2210.12054
In recent years, distributional language representation models have demonstrated great practical success. At the same time, the need for interpretability has elicited questions on their intrinsic properties and capabilities. Crucially, distributional…
External link:
http://arxiv.org/abs/2212.04310
Neural networks are essential components of learning-based software systems. However, their high compute, memory, and power requirements make using them in low resources domains challenging. For this reason, neural networks are often quantized before…
External link:
http://arxiv.org/abs/2207.04231