Showing 1 - 5 of 5 results for search: '"Charalambous, Yiannis"'
The next generation of AI systems requires strong safety guarantees. This report looks at the software implementation of neural networks and related memory safety properties, including NULL pointer dereference, out-of-bounds access, double-free, and mem…
External link:
http://arxiv.org/abs/2405.08848
Author:
Braberman, Víctor A., Bonomo-Braberman, Flavia, Charalambous, Yiannis, Colonna, Juan G., Cordeiro, Lucas C., de Freitas, Rosiane
Prompting has become one of the main approaches to leverage emergent capabilities of Large Language Models [Brown et al. NeurIPS 2020, Wei et al. TMLR 2022, Wei et al. NeurIPS 2022]. Recently, researchers and practitioners have been "playing" with pr…
External link:
http://arxiv.org/abs/2404.09384
Author:
Tihanyi, Norbert, Jain, Ridhi, Charalambous, Yiannis, Ferrag, Mohamed Amine, Sun, Youcheng, Cordeiro, Lucas C.
This paper introduces an innovative approach that combines Large Language Models (LLMs) with Formal Verification strategies for automatic software vulnerability repair. Initially, we employ Bounded Model Checking (BMC) to identify vulnerabilities and…
External link:
http://arxiv.org/abs/2305.14752
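The verify-then-repair pipeline described in this abstract can be sketched as a simple loop: run the verifier, and if it produces a counterexample, ask the LLM for a patched version and re-verify. This is a minimal Python sketch under stated assumptions: `run_bmc` and `llm_propose_fix` are hypothetical placeholders, not the paper's actual tooling or API, and the string matching stands in for a real model checker and model call.

```python
from typing import Optional

def run_bmc(source: str) -> Optional[str]:
    """Return a counterexample description, or None if no violation is found.

    Placeholder: a real pipeline would invoke a bounded model checker
    on the C source instead of pattern-matching the text.
    """
    if "free(buf); free(buf)" in source:
        return "double-free of 'buf'"
    return None

def llm_propose_fix(source: str, trace: str) -> str:
    """Placeholder for an LLM call that rewrites the code flagged by the trace."""
    return source.replace("free(buf); free(buf)", "free(buf)")

def repair_loop(source: str, max_rounds: int = 3) -> str:
    """Alternate verification and repair until the checker finds no violation."""
    for _ in range(max_rounds):
        trace = run_bmc(source)
        if trace is None:
            return source          # verified: no property violation left
        source = llm_propose_fix(source, trace)
    return source

buggy = "char *buf = malloc(8); free(buf); free(buf);"
fixed = repair_loop(buggy)
print(run_bmc(fixed))              # prints None: the repaired code verifies
```

The key design point is that the counterexample trace is fed back to the repair step, so each proposed patch is checked again before being accepted.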
Academic article
Author:
Charalambous, Yiannis, Tihanyi, Norbert, Jain, Ridhi, Sun, Youcheng, Ferrag, Mohamed Amine, Cordeiro, Lucas C.
In this paper, we present a novel solution that combines the capabilities of Large Language Models (LLMs) with Formal Verification strategies to verify and automatically repair software vulnerabilities. Initially, we employ Bounded Model Checking (BMC)…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::af5a9030f52183fd9868883c54acda96