Paper Title


Generating Practical Adversarial Network Traffic Flows Using NIDSGAN

Authors

Bolor-Erdene Zolbayar, Ryan Sheatsley, Patrick McDaniel, Michael J. Weisman, Sencun Zhu, Shitong Zhu, Srikanth Krishnamurthy

Abstract

Network intrusion detection systems (NIDS) are an essential defense for computer networks and the hosts within them. Machine learning (ML) nowadays predominantly serves as the basis for NIDS decision making, where models are tuned to reduce false alarms, increase detection rates, and detect known and unknown attacks. At the same time, ML models have been found to be vulnerable to adversarial examples that undermine the downstream task. In this work, we ask the practical question of whether real-world ML-based NIDS can be circumvented by crafted adversarial flows, and if so, how they can be created. We develop the generative adversarial network (GAN)-based attack algorithm NIDSGAN and evaluate its effectiveness against realistic ML-based NIDS. Two main challenges arise in generating adversarial network traffic flows: (1) the network features must obey the constraints of the domain (i.e., represent realistic network behavior), and (2) the adversary must learn the decision behavior of the target NIDS without knowing its model internals (e.g., architecture and meta-parameters) or training data. Despite these challenges, the NIDSGAN algorithm generates highly realistic adversarial traffic flows that evade ML-based NIDS. We evaluate our attack algorithm against two state-of-the-art DNN-based NIDS in whitebox, blackbox, and restricted-blackbox threat models, achieving average success rates of 99%, 85%, and 70%, respectively. We also show that our attack algorithm can evade NIDS based on classical ML models, including logistic regression, SVMs, decision trees, and k-NNs, with an average success rate of 70%. Our results demonstrate that deploying ML-based NIDS without careful defensive strategies against adversarial flows may (and arguably likely will) lead to future compromises.
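The first challenge the abstract names, keeping generated features within the domain's constraints, can be illustrated with a small sketch. This is not the paper's implementation: the feature names (`packets`, `duration`, `bytes`) and the bounds (integer non-negative packet counts, a 64-byte minimum frame size) are hypothetical stand-ins for the kind of projection step a generator's output would need before it represents a realistic flow.

```python
# Illustrative sketch only: project perturbed flow features back onto
# simple, hypothetical domain constraints so the flow stays realistic.

def project_flow(flow):
    """Return a copy of a perturbed flow-feature dict that satisfies
    basic validity constraints on each feature."""
    projected = dict(flow)
    # Packet counts are non-negative integers.
    projected["packets"] = max(0, round(flow["packets"]))
    # Flow durations cannot be negative.
    projected["duration"] = max(0.0, flow["duration"])
    # Total bytes must cover at least packets * minimum frame size (64 B).
    min_bytes = projected["packets"] * 64
    projected["bytes"] = max(min_bytes, round(flow["bytes"]))
    return projected

# A raw perturbation can easily violate the domain (fractional packets,
# negative duration, too few bytes for the packet count).
perturbed = {"packets": 3.7, "duration": -0.2, "bytes": 120.0}
print(project_flow(perturbed))  # → {'packets': 4, 'duration': 0.0, 'bytes': 256}
```

In a GAN-based attack along these lines, such a projection would sit between the generator's output and the discriminator/target classifier, so that only domain-valid flows are ever scored.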
