Showing 1 - 10 of 357 results for search: '"khalil issa"'
The capability of generating high-quality source code using large language models (LLMs) reduces software development time and costs. However, these models often introduce security vulnerabilities because they are trained on insecure open-source data. This highlights…
External link:
http://arxiv.org/abs/2409.12699
Author:
Ton, Khiem, Nguyen, Nhi, Nazzal, Mahmoud, Khreishah, Abdallah, Borcea, Cristian, Phan, NhatHai, Jin, Ruoming, Khalil, Issa, Shen, Yelong
This paper introduces SGCode, a flexible prompt-optimizing system to generate secure code with large language models (LLMs). SGCode integrates recent prompt-optimization approaches with LLMs in a unified system accessible through front-end and back-end… (an illustrative sketch follows below)
External link:
http://arxiv.org/abs/2409.07368
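The SGCode entry above describes coupling prompt optimization with an LLM to steer generation toward secure code. As a rough, hedged illustration of that general idea only (not the authors' implementation), the sketch below shows a minimal optimize-generate-scan loop; generate_code, count_weaknesses, and the rewrite hints are hypothetical placeholders, not SGCode's API.

    # Minimal sketch of a prompt-optimization loop for secure code generation.
    # NOTE: generate_code() and count_weaknesses() are hypothetical stand-ins for
    # an LLM call and a static-analysis pass; they are NOT part of SGCode.

    def generate_code(prompt: str) -> str:
        """Placeholder for an LLM call that returns source code for `prompt`."""
        raise NotImplementedError

    def count_weaknesses(code: str) -> int:
        """Placeholder for a security scanner (e.g. a CWE detector) run on `code`."""
        raise NotImplementedError

    def optimize_prompt(task: str, rewrites: list[str], rounds: int = 3) -> str:
        """Greedily prepend the rewrite hint that yields code with the fewest findings."""
        best_prompt, best_score = task, float("inf")
        for _ in range(rounds):
            for hint in rewrites:
                candidate = f"{hint}\n{best_prompt}"
                score = count_weaknesses(generate_code(candidate))
                if score < best_score:
                    best_prompt, best_score = candidate, score
            if best_score == 0:
                break
        return best_prompt

    # Example call (requires real LLM and scanner backends):
    # secure_prompt = optimize_prompt(
    #     "Write a Python function that stores a user password.",
    #     rewrites=["Use parameterized queries.", "Hash secrets with a salted KDF."],
    # )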
Author:
Khan, Naseem, Ahmad, Kashif, Tamimi, Aref Al, Alani, Mohammed M., Bermak, Amine, Khalil, Issa
Industry 5.0, which focuses on human and Artificial Intelligence (AI) collaboration for performing different tasks in manufacturing, involves a higher number of robots, Internet of Things (IoT) devices and interconnections, and Augmented/Virtual Reality…
External link:
http://arxiv.org/abs/2408.03335
Malicious domain detection (MDD) is an open security challenge that aims to detect whether an Internet domain is associated with cyber-attacks. Among many approaches to this problem, graph neural networks (GNNs) are deemed highly effective. GNN-based MDD… (an illustrative sketch follows below)
External link:
http://arxiv.org/abs/2308.11754
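The entry above treats domains as nodes in a graph and classifies them with a GNN. As a hedged, self-contained sketch of that general idea (not the paper's model), the code below runs one round of mean-neighbor message passing over a toy domain graph with NumPy; the adjacency matrix, features, and weights are invented for illustration.

    import numpy as np

    # Toy graph: 4 domains; an edge means two domains share infrastructure
    # (e.g. an IP address or a registrant). All values are illustrative only.
    A = np.array([[0, 1, 1, 0],
                  [1, 0, 0, 0],
                  [1, 0, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    X = np.array([[0.9, 0.1],   # per-domain features, e.g. [age, query volume]
                  [0.2, 0.8],
                  [0.1, 0.9],
                  [0.3, 0.7]])

    # One GNN-style layer: average each node's neighbourhood (including itself),
    # apply a linear map and a ReLU. W is a random stand-in for learned weights.
    A_hat = A + np.eye(4)
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))
    W = np.random.default_rng(0).normal(size=(2, 2))
    H = np.maximum(D_inv @ A_hat @ X @ W, 0.0)

    # A final linear read-out would map H to benign/malicious scores per domain.
    print(H)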
This paper introduces FairDP, a novel mechanism designed to achieve certified fairness with differential privacy (DP). FairDP independently trains models for distinct individual groups, using group-specific clipping terms to assess and bound the disparity… (an illustrative sketch follows below)
External link:
http://arxiv.org/abs/2305.16474
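The FairDP entry mentions per-group training with group-specific clipping terms, a DP-SGD-style idea. The sketch below is a hedged illustration of group-wise gradient clipping plus Gaussian noising only; the clipping bounds, noise scale, and data are invented and do not reproduce FairDP itself.

    import numpy as np

    rng = np.random.default_rng(0)

    def clip_and_noise(per_example_grads: np.ndarray, clip: float, sigma: float) -> np.ndarray:
        """Clip each example's gradient to L2 norm `clip`, sum, add Gaussian noise, average."""
        norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
        scaled = per_example_grads * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
        noisy_sum = scaled.sum(axis=0) + rng.normal(0.0, sigma * clip, size=scaled.shape[1])
        return noisy_sum / len(per_example_grads)

    # Two demographic groups, each with its own clipping bound (illustrative values).
    grads = {"group_a": rng.normal(size=(32, 5)), "group_b": rng.normal(size=(48, 5))}
    clips = {"group_a": 1.0, "group_b": 0.5}

    group_updates = {g: clip_and_noise(grads[g], clips[g], sigma=1.1) for g in grads}
    print({g: np.round(u, 3) for g, u in group_updates.items()})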
Author:
Tran, Khang, Lai, Phung, Phan, NhatHai, Khalil, Issa, Ma, Yao, Khreishah, Abdallah, Thai, My, Wu, Xintao
Graph neural networks (GNNs) are susceptible to privacy inference attacks (PIAs), given their ability to learn a joint representation from features and edges among nodes in graph data. To prevent privacy leakage in GNNs, we propose a novel heterogeneous… (an illustrative sketch follows below)
External link:
http://arxiv.org/abs/2211.05766
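The entry above concerns privacy inference attacks against GNNs. As a hedged toy illustration of the kind of threat it refers to (not the paper's defense), the sketch below shows a naive link-inference attack that guesses whether two nodes are connected from the similarity of released node embeddings; the embeddings and threshold are synthetic.

    import numpy as np

    # Suppose a trained GNN released node embeddings. Connected nodes tend to end
    # up close together, which is exactly what a link-inference attack exploits.
    # These embeddings are synthetic: pairs (0,1) and (2,3) are the "connected" ones.
    emb = np.array([[1.0, 0.1], [0.9, 0.2], [-0.8, 1.0], [-0.7, 1.1], [0.0, -1.0]])

    def cosine(u: np.ndarray, v: np.ndarray) -> float:
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    def guess_edge(i: int, j: int, threshold: float = 0.9) -> bool:
        """Naive attacker: predict an edge when embeddings are very similar."""
        return cosine(emb[i], emb[j]) >= threshold

    print(guess_edge(0, 1), guess_edge(2, 3), guess_edge(0, 4))  # True True False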
Author:
Chawla, Sanjay, Nakov, Preslav, Ali, Ahmed, Hall, Wendy, Khalil, Issa, Ma, Xiaosong, Sencar, Husrev Taha, Weber, Ingmar, Wooldridge, Michael, Yu, Ting
It is ten years since neural networks made their spectacular comeback. Prompted by this anniversary, we take a holistic perspective on Artificial Intelligence (AI). Supervised learning for cognitive tasks is effectively solved, provided we have enough…
External link:
http://arxiv.org/abs/2210.01797
A Trojan backdoor is a poisoning attack against Neural Network (NN) classifiers in which adversaries try to exploit the (highly desirable) model-reuse property to implant Trojans into model parameters for backdoor breaches through a poisoned training process… (an illustrative sketch follows below)
External link:
http://arxiv.org/abs/2209.01721
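The entry above describes Trojan backdoors implanted through data poisoning. The sketch below is a hedged toy illustration of the generic poisoning recipe usually described in this literature, not the specific attack studied in the paper: stamp a small trigger patch onto a fraction of training images and relabel them to an attacker-chosen target class. The dataset and trigger are made up.

    import numpy as np

    rng = np.random.default_rng(2)

    def poison(images: np.ndarray, labels: np.ndarray, rate: float, target: int):
        """Stamp a 3x3 white trigger in the corner of a `rate` fraction of images
        and relabel them to `target` -- the classic Trojan data-poisoning recipe."""
        images, labels = images.copy(), labels.copy()
        idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
        images[idx, -3:, -3:] = 1.0          # the trigger pattern
        labels[idx] = target                 # the attacker-chosen class
        return images, labels, idx

    # Toy "dataset": 100 random 28x28 grayscale images with 10 classes.
    x = rng.random((100, 28, 28))
    y = rng.integers(0, 10, size=100)
    x_p, y_p, poisoned_idx = poison(x, y, rate=0.05, target=7)
    print(len(poisoned_idx), "poisoned samples now carry the trigger and label 7")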
Enterprise networks are one of the major targets for cyber attacks due to the vast amount of sensitive and valuable data they contain. A common approach to detecting attacks in the enterprise environment relies on modeling the behavior of users and systems… (an illustrative sketch follows below)
External link:
http://arxiv.org/abs/2206.05679
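The entry above concerns detecting enterprise attacks by modeling user and system behavior. As a minimal, hedged sketch of behavior baselining (not the paper's detector), the code below scores each user's daily activity against their own historical baseline with a z-score and flags large deviations; the counts and threshold are invented.

    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical per-user history: number of hosts each user logged into per day.
    history = {"alice": rng.poisson(3, 30), "bob": rng.poisson(5, 30)}
    today = {"alice": 4, "bob": 27}   # bob suddenly touches many machines

    def anomaly_score(baseline: np.ndarray, value: float) -> float:
        """Z-score of today's value against the user's own 30-day baseline."""
        return (value - baseline.mean()) / (baseline.std() + 1e-9)

    for user, count in today.items():
        score = anomaly_score(history[user], count)
        flag = "ALERT" if score > 3.0 else "ok"
        print(f"{user}: z={score:.1f} -> {flag}")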
VirusTotal (VT) provides aggregated threat intelligence on various entities including URLs, IP addresses, and binaries. It is widely used by researchers and practitioners to collect ground truth and evaluate the maliciousness of entities. In this work… (an illustrative sketch follows below)
External link:
http://arxiv.org/abs/2205.13155
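The entry above concerns deriving ground truth from aggregated engine verdicts. A common heuristic in this setting, sketched below in a hedged form, is to threshold the number of engines that flag an entity; the report structure, engine names, and threshold are simplified placeholders, not VirusTotal's actual schema or this paper's methodology.

    # Hedged sketch: turning per-engine verdicts into a single label by thresholding.
    # The report structure below is simplified and NOT VirusTotal's actual schema.

    reports = {
        "example-benign.com":  {"EngineA": False, "EngineB": False, "EngineC": False},
        "example-shady.net":   {"EngineA": True,  "EngineB": False, "EngineC": True},
        "example-malware.biz": {"EngineA": True,  "EngineB": True,  "EngineC": True},
    }

    def label(verdicts: dict[str, bool], threshold: int = 2) -> str:
        """Call an entity malicious when at least `threshold` engines flag it."""
        return "malicious" if sum(verdicts.values()) >= threshold else "benign"

    for entity, verdicts in reports.items():
        print(entity, "->", label(verdicts))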