Paper Title
The Adversarial Security Mitigations of mmWave Beamforming Prediction Models using Defensive Distillation and Adversarial Retraining
Paper Authors
Paper Abstract
The design of a security scheme for beamforming prediction is critical for next-generation wireless networks (5G, 6G, and beyond). However, there is no consensus on how to protect deep learning-based beamforming prediction in these networks. This paper presents the security vulnerabilities of deep neural networks (DNNs) used for beamforming prediction in 6G wireless networks, treating the beamforming prediction as a multi-output regression problem. It is shown that the initial DNN model is vulnerable to adversarial attacks, such as the Fast Gradient Sign Method (FGSM), Basic Iterative Method (BIM), Projected Gradient Descent (PGD), and Momentum Iterative Method (MIM), because the initial DNN model is sensitive to small adversarial perturbations of the training data. This study also offers two mitigation methods, adversarial training and defensive distillation, against adversarial attacks on artificial intelligence (AI)-based models used in millimeter-wave (mmWave) beamforming prediction. Furthermore, the proposed scheme can be used in situations where the training data are corrupted by adversarial examples. Experimental results show that the proposed methods effectively defend the DNN models against adversarial attacks in next-generation wireless networks.
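As a rough illustration of the workflow the abstract describes, the sketch below crafts FGSM adversarial examples against a multi-output regression DNN and then performs adversarial retraining by augmenting the clean training set with the perturbed samples. This is a minimal sketch, not the paper's implementation: the network architecture, input/output dimensions, epsilon value, and the random stand-in data are all assumed placeholders rather than details taken from the paper.

```python
import numpy as np
import tensorflow as tf

# Hypothetical multi-output regression DNN standing in for an mmWave
# beamforming predictor; layer sizes and dimensions are illustrative.
def build_model(num_features=64, num_outputs=256):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(num_features,)),
        tf.keras.layers.Dense(512, activation="relu"),
        tf.keras.layers.Dense(512, activation="relu"),
        tf.keras.layers.Dense(num_outputs, activation="linear"),
    ])

def fgsm_perturb(model, x, y_true, epsilon=0.01):
    """FGSM for a regression model:
    x_adv = x + epsilon * sign(grad_x MSE(f(x), y_true))."""
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    y_true = tf.convert_to_tensor(y_true, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = tf.keras.losses.MeanSquaredError()(y_true, model(x))
    grad = tape.gradient(loss, x)
    return x + epsilon * tf.sign(grad)

if __name__ == "__main__":
    model = build_model()
    model.compile(optimizer="adam", loss="mse")

    # Toy stand-in data; in practice this would be channel/beam training data.
    x_train = np.random.randn(1024, 64).astype("float32")
    y_train = np.random.randn(1024, 256).astype("float32")
    model.fit(x_train, y_train, epochs=1, batch_size=64, verbose=0)

    # Adversarial retraining: augment the clean set with FGSM-perturbed copies
    # so the model learns to predict correctly on perturbed inputs as well.
    x_adv = fgsm_perturb(model, x_train, y_train, epsilon=0.01).numpy()
    model.fit(np.concatenate([x_train, x_adv]),
              np.concatenate([y_train, y_train]),
              epochs=1, batch_size=64, verbose=0)
```

BIM, PGD, and MIM follow the same gradient-sign idea iteratively; defensive distillation would instead retrain the model on softened outputs of a first-pass model, which is not shown here.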