Showing 1 - 10 of 72 results for search: '"Maffeis, Sergio"'
Author:
Foley, Myles, Maffeis, Sergio
REST APIs have become key components of web services. However, they often contain logic flaws resulting in server-side errors or security vulnerabilities. HTTP requests are used as test cases to find and mitigate such issues. Existing methods to modify… (see the sketch below)
External link:
http://arxiv.org/abs/2412.15991
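The entry above describes using HTTP requests as test cases to surface server-side errors in REST APIs. A minimal sketch of that idea, assuming a hypothetical local API at http://localhost:8080 with an /items endpoint (both illustrative, not taken from the paper):

```python
import requests

# Hypothetical endpoint used purely for illustration.
BASE_URL = "http://localhost:8080"

# Hand-written request "test cases": unexpected values, missing fields,
# and unlikely identifiers that often expose logic flaws.
test_cases = [
    {"path": "/items", "method": "POST", "json": {"name": "widget", "price": -1}},
    {"path": "/items", "method": "POST", "json": {"name": ""}},        # missing price
    {"path": "/items/999999", "method": "GET", "json": None},          # unlikely id
]

for case in test_cases:
    url = BASE_URL + case["path"]
    try:
        resp = requests.request(case["method"], url, json=case["json"], timeout=5)
    except requests.RequestException as exc:
        print(f'{case["method"]} {url}: transport error ({exc})')
        continue
    # A 5xx status suggests an unhandled server-side error worth reporting.
    flag = "SERVER ERROR" if resp.status_code >= 500 else "ok"
    print(f'{case["method"]} {url}: {resp.status_code} [{flag}]')
```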
Author:
Dolcetti, Greta, Arceri, Vincenzo, Iotti, Eleonora, Maffeis, Sergio, Cortesi, Agostino, Zaffanella, Enea
Large Language Models (LLMs) are one of the most promising developments in the field of artificial intelligence, and the software engineering community has readily noticed their potential role in the software development life-cycle. Developers routinely…
External link:
http://arxiv.org/abs/2412.14841
With the introduction of the transformers architecture, LLMs have revolutionized the NLP field with ever more powerful models. Nevertheless, their development came with several challenges. The exponential growth in computational power and reasoning…
External link:
http://arxiv.org/abs/2411.06835
Malicious adversaries can attack machine learning models to infer sensitive information or damage the system by launching a series of evasion attacks. Although various works address privacy and security concerns, they focus on individual defenses, but… (see the sketch below)
External link:
http://arxiv.org/abs/2401.10405
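The entry above mentions evasion attacks against machine learning models. As background only, a minimal fast gradient sign method (FGSM) evasion sketch in PyTorch; the toy classifier, data and epsilon below are illustrative assumptions, not the paper's setup:

```python
import torch
import torch.nn as nn

# Toy classifier standing in for the victim model (illustrative only).
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

x = torch.randn(1, 20)        # clean input
y = torch.tensor([1])         # true label
epsilon = 0.1                 # perturbation budget

# FGSM: one gradient step in the direction that increases the loss.
x_adv = x.clone().requires_grad_(True)
loss = nn.functional.cross_entropy(model(x_adv), y)
loss.backward()
x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()

print("clean prediction      :", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```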
Machine learning models are being used in an increasing number of critical applications; thus, securing their integrity and ownership is critical. Recent studies observed that adversarial training and watermarking have a conflicting interaction. This… (see the sketch below)
External link:
http://arxiv.org/abs/2312.14260
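The entry above studies how adversarial training interacts with model watermarking. A minimal sketch of the two ingredients side by side, assuming a toy model, single-step adversarial examples, and a random out-of-distribution "trigger set" as the watermark (all illustrative assumptions, not the paper's method):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Normal training data (random stand-ins).
x_train, y_train = torch.randn(256, 20), torch.randint(0, 2, (256,))
# Watermark "trigger set": out-of-distribution inputs with fixed secret labels.
x_trigger, y_trigger = torch.rand(16, 20) * 5, torch.randint(0, 2, (16,))

epsilon = 0.1
for epoch in range(20):
    # Single-step (FGSM-style) adversarial examples on the clean batch.
    x_adv = x_train.clone().requires_grad_(True)
    loss_fn(model(x_adv), y_train).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()

    # Joint objective: robustness on adversarial data + memorising the triggers.
    opt.zero_grad()
    loss = loss_fn(model(x_adv), y_train) + loss_fn(model(x_trigger), y_trigger)
    loss.backward()
    opt.step()

# Watermark verification: accuracy on the secret trigger set.
wm_acc = (model(x_trigger).argmax(1) == y_trigger).float().mean().item()
print(f"trigger-set accuracy: {wm_acc:.2f}")
```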
Author:
Highnam, Kate, Hanif, Zach, Van Vogt, Ellie, Parbhoo, Sonali, Maffeis, Sergio, Jennings, Nicholas R.
Intrusion research frequently collects data on attack techniques currently employed and their potential symptoms. This includes deploying honeypots, logging events from existing devices, employing a red team for a sample attack campaign, or simulating…
External link:
http://arxiv.org/abs/2310.13224
Author:
Hanif, Hazim, Maffeis, Sergio
Published in:
International Joint Conference on Neural Networks (IJCNN), 2022
This paper presents VulBERTa, a deep learning approach to detect security vulnerabilities in source code. Our approach pre-trains a RoBERTa model with a custom tokenisation pipeline on real-world code from open-source C/C++ projects. The model learns… (see the sketch below)
External link:
http://arxiv.org/abs/2205.12424
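VulBERTa classifies source code with a RoBERTa model pre-trained on C/C++ code. A minimal sketch of the inference step using the Hugging Face transformers API; the checkpoint name, label meanings and code snippet below are illustrative assumptions, not the released VulBERTa tokeniser or weights:

```python
import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification

# Illustrative checkpoint; the actual VulBERTa tokenizer and weights differ.
checkpoint = "roberta-base"
tokenizer = RobertaTokenizer.from_pretrained(checkpoint)
model = RobertaForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
model.eval()

code_snippet = """
void copy(char *dst, const char *src) {
    strcpy(dst, src);   /* unbounded copy */
}
"""

# Tokenise the source code and classify it as vulnerable / not vulnerable.
inputs = tokenizer(code_snippet, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1).squeeze()
print({"not_vulnerable": probs[0].item(), "vulnerable": probs[1].item()})
```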
In federated learning (FL), robust aggregation schemes have been developed to protect against malicious clients. Many robust aggregation schemes rely on certain numbers of benign clients being present in a quorum of workers. This can be hard to guarantee… (see the sketch below)
External link:
http://arxiv.org/abs/2112.10525
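The entry above concerns robust aggregation in federated learning, where the server combines client updates so that a bounded number of malicious clients cannot steer the model. A minimal sketch of one standard robust rule, the coordinate-wise median; the simulated client updates are random stand-ins and this is not the paper's own scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated client updates: 8 benign gradients plus 2 malicious outliers.
benign = [rng.normal(0.0, 1.0, size=10) for _ in range(8)]
malicious = [np.full(10, 100.0) for _ in range(2)]
updates = np.stack(benign + malicious)

# Plain averaging is dragged towards the outliers.
mean_update = updates.mean(axis=0)

# Coordinate-wise median resists a minority of arbitrary (Byzantine) updates.
median_update = np.median(updates, axis=0)

print("mean   :", np.round(mean_update[:5], 2))
print("median :", np.round(median_update[:5], 2))
```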
Published in:
IEEE Conference on Dependable and Secure Computing (DSC), 2022
This paper presents DeepTective, a deep learning approach to detect vulnerabilities in PHP source code. Our approach implements a novel hybrid technique that combines Gated Recurrent Units and Graph Convolutional Networks to detect SQLi, XSS and OSCI… (see the sketch below)
External link:
http://arxiv.org/abs/2012.08835
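DeepTective combines Gated Recurrent Units over token sequences with Graph Convolutional Networks over code graphs. A minimal PyTorch sketch of such a hybrid, with the graph convolution written out by hand to keep the example dependency-free; the dimensions, fusion by concatenation and toy inputs are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class HybridDetector(nn.Module):
    """GRU over a token sequence + one-layer GCN over a code graph."""

    def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=64, num_classes=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.gcn_weight = nn.Linear(embed_dim, hidden_dim)   # simple GCN layer
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, tokens, node_feats, adj):
        # Sequence branch: last GRU hidden state summarises the token stream.
        _, h = self.gru(self.embed(tokens))
        seq_repr = h[-1]                                      # (batch, hidden)

        # Graph branch: one degree-normalised graph convolution, then mean pooling.
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)
        gcn_out = torch.relu(self.gcn_weight(adj @ node_feats / deg))
        graph_repr = gcn_out.mean(dim=1)                      # (batch, hidden)

        # Fuse both views and classify (e.g. safe / SQLi / XSS / OSCI).
        return self.classifier(torch.cat([seq_repr, graph_repr], dim=-1))

# Toy inputs: 2 samples, 16 tokens each, graphs with 5 nodes.
model = HybridDetector()
tokens = torch.randint(0, 1000, (2, 16))
node_feats = torch.randn(2, 5, 64)
adj = torch.eye(5).expand(2, 5, 5)           # self-loops only, for illustration
print(model(tokens, node_feats, adj).shape)  # torch.Size([2, 4])
```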
Neural networks are increasingly used for intrusion detection on industrial control systems (ICS). With neural networks being vulnerable to adversarial examples, attackers who wish to cause damage to an ICS can attempt to hide their attacks from detection… (see the sketch below)
External link:
http://arxiv.org/abs/1911.04278
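The entry above looks at hiding ICS attacks from a neural-network intrusion detector via adversarial examples. A minimal sketch of the core idea, assuming a toy anomaly scorer over sensor readings and clipping the perturbed values back into a plausible sensor range; the detector, ranges and budget are illustrative assumptions, not the paper's setup:

```python
import torch
import torch.nn as nn

# Toy detector: outputs an anomaly score, higher means "attack detected".
detector = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
detector.eval()

sensor_min, sensor_max = 0.0, 1.0      # assumed physical sensor range
attack = torch.rand(1, 10)             # sensor readings during an attack
epsilon, alpha, steps = 0.05, 0.01, 10

x_adv = attack.clone()
for _ in range(steps):
    x_adv.requires_grad_(True)
    score = detector(x_adv).sum()
    score.backward()
    with torch.no_grad():
        # Step to *decrease* the anomaly score, stay within the budget
        # around the real attack values, and within the sensor range.
        x_adv = x_adv - alpha * x_adv.grad.sign()
        x_adv = torch.clamp(x_adv, attack - epsilon, attack + epsilon)
        x_adv = torch.clamp(x_adv, sensor_min, sensor_max)

print("original score :", detector(attack).item())
print("perturbed score:", detector(x_adv).item())
```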