Paper Title
Robustness Evaluation of Deep Unsupervised Learning Algorithms for Intrusion Detection Systems
Authors
Abstract
Recently, advances in deep learning have been observed in various fields, including computer vision, natural language processing, and cybersecurity. Machine learning (ML) has demonstrated its potential as a tool for anomaly-detection-based intrusion detection systems that help build secure computer networks. ML approaches are increasingly adopted over heuristic approaches for cybersecurity because they learn directly from data. Data is critical to the development of ML systems and thus becomes a potential target for attackers. Data poisoning, or contamination, is one of the most common techniques used to fool ML models through their data. This paper evaluates the robustness of six recent deep learning algorithms for intrusion detection on contaminated data. Our experiments suggest that the state-of-the-art algorithms used in this study are sensitive to data contamination and reveal the importance of self-defense against data perturbation when developing novel models, especially for intrusion detection systems.