Showing 1 - 10 of 24 results for search: '"Ma, Wanlun"'
Author:
Deng, Zehang, Guo, Yongjian, Han, Changzhou, Ma, Wanlun, Xiong, Junwu, Wen, Sheng, Xiang, Yang
An Artificial Intelligence (AI) agent is a software entity that autonomously performs tasks or makes decisions based on pre-defined objectives and data inputs. AI agents, capable of perceiving user inputs, reasoning and planning tasks, and executing …
External link:
http://arxiv.org/abs/2406.02630
Inferring geographic locations via social posts is essential for many practical location-based applications such as product marketing, point-of-interest recommendation, and infector tracking for COVID-19. Unlike image-based location retrieval or …
External link:
http://arxiv.org/abs/2306.07935
AI-powered programming language generation (PLG) models have gained increasing attention due to their ability to generate source code of programs in a few seconds with a plain program description. Despite their remarkable performance, many concerns …
External link:
http://arxiv.org/abs/2305.12747
Deep Neural Networks (DNNs) are susceptible to backdoor attacks during training. The model corrupted in this way functions normally, but when triggered by certain patterns in the input, produces a predefined target label. Existing defenses usually …
External link:
http://arxiv.org/abs/2209.11715
Computer users generally face difficulties in making correct security decisions. While fewer and fewer people are trying or willing to take formal security training, online sources including news, security blogs, and website …
External link:
http://arxiv.org/abs/2006.14765
Academic article (restricted: log in to view this result).
Published in:
In Computer Science Review November 2022 46
Published in:
IEEE Transactions on Dependable and Secure Computing; November 2024, Vol. 21 Issue: 6 p5526-5537, 12p
Published in:
Proceedings of the 2023 Network and Distributed System Security Symposium.
Academic article (restricted: log in to view this result).