When Good Machine Learning Leads to Bad Security
Author: | Mehmed Kantardzic, Tegjyot Singh Sethi |
---|---|
Year of publication: | 2018 |
Subject: |
Reverse engineering, Computer science, Big data, Rule-based systems, Adversary, Machine learning, Adversarial systems, Artificial intelligence |
Source: | Ubiquity. 2018:1-14 |
ISSN: | 1530-2180 |
DOI: | 10.1145/3158346 |
Description: | While machine learning has proven promising in several application domains, our understanding of its behavior and limitations is still in its nascent stages. One such domain is cybersecurity, where machine learning models are replacing traditional rule-based systems owing to their ability to generalize and to deal with large-scale attacks that have not been seen before. However, the naive transfer of machine learning principles to the domain of security must be approached with caution. Machine learning was not designed with security in mind and, as such, is prone to adversarial manipulation and reverse engineering. While most data-driven learning models rely on a static assumption about the world, the security landscape is especially dynamic, with a never-ending arms race between the system designer and the attackers. Any solution designed for such a domain needs to account for an active adversary and must evolve over time in the face of emerging threats. We term this the "Dynamic Adversarial Mining" problem, and this paper provides motivation and a foundation for this new interdisciplinary area of research at the crossroads of machine learning, cybersecurity, and streaming data mining. |
Database: | OpenAIRE |
External link: |