
Deep Learning-Based Intrusion Detection With Adversaries

Published

Author(s)

Zheng Wang

Abstract

Deep neural networks have demonstrated their effectiveness in most machine learning tasks, including intrusion detection. Unfortunately, recent research has found that deep neural networks are vulnerable to adversarial examples in the image classification domain: an attacker can fool a network into misclassifying an image by introducing imperceptible changes to its original pixels. This vulnerability raises concerns about applying deep neural networks in security-critical areas such as intrusion detection. In this paper, we investigate the performance of state-of-the-art attack algorithms against deep learning-based intrusion detection on the NSL-KDD data set. The vulnerabilities of the neural networks employed by the intrusion detection systems are experimentally validated, and the roles of individual features in generating adversarial examples are explored. Based on our findings, the feasibility and applicability of the attack methodologies are discussed.
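
The specific attack algorithms evaluated in the paper are not listed on this page, so the sketch below is only a hedged illustration of the general idea: a single-step, gradient-sign (FGSM-style) perturbation applied to a toy classifier over NSL-KDD-sized feature vectors. The network architecture, the synthetic data, and the epsilon value are assumptions for illustration, not the paper's actual model or experimental setup.

# Illustrative sketch only: model, data, and epsilon are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

N_FEATURES = 41   # NSL-KDD records have 41 features (before one-hot encoding)
N_CLASSES = 2     # e.g., binary normal-vs-attack labeling

# A small feed-forward classifier standing in for an IDS model.
model = nn.Sequential(
    nn.Linear(N_FEATURES, 64), nn.ReLU(),
    nn.Linear(64, N_CLASSES),
)
model.eval()

# Synthetic, normalized records standing in for preprocessed NSL-KDD samples.
x = torch.rand(8, N_FEATURES)
y = torch.randint(0, N_CLASSES, (8,))

def fgsm(model, x, y, eps=0.05):
    """One-step FGSM: shift each feature in the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Perturb by eps per feature, then clip back to the valid normalized range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

x_adv = fgsm(model, x, y)
print("clean predictions:      ", model(x).argmax(dim=1).tolist())
print("adversarial predictions:", model(x_adv).argmax(dim=1).tolist())

In practice, attacks on network-traffic features are more constrained than attacks on image pixels, since many NSL-KDD features are categorical or interdependent; the paper's discussion of the roles of individual features and of attack feasibility addresses exactly this gap.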

Keywords

Intrusion detection, neural networks, classification algorithms, data security

Citation

Wang, Z. (2018), Deep Learning-Based Intrusion Detection With Adversaries, IEEE Access, 6, [online], https://doi.org/10.1109/ACCESS.2018.2854599, https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=926377 (Accessed May 10, 2024)


Created July 9, 2018, Updated October 14, 2021