Paper Title

A Simple Structure For Building A Robust Model

Paper Authors

Xiao Tan, Jingbo Gao, Ruolin Li

Paper Abstract

As deep learning applications, especially computer vision programs, are increasingly deployed in our lives, we must think more urgently about their security. One effective way to improve the security of a deep learning model is adversarial training, which makes the model robust to samples deliberately crafted to attack it. Based on this, we propose a simple architecture for building a model with a certain degree of robustness: it improves the robustness of the trained network by adding an adversarial sample detection network for cooperative training. At the same time, we design a new data sampling strategy that incorporates multiple existing attacks, allowing the model to adapt to many different adversarial attacks with a single training run. We conducted experiments on the CIFAR-10 dataset to test the effectiveness of this design, and the results indicate that it has a positive effect on the robustness of the model. Our code can be found at https://github.com/dowdyboy/simple_structure_for_robust_model .
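
To make the two ideas in the abstract concrete, here is a minimal PyTorch sketch, not the authors' implementation (see the linked repository for that): a classifier co-trained with an adversarial-sample detection head, fed by a sampling strategy that randomly keeps a batch clean or perturbs it with one of several attacks. The toy network, the choice of FGSM/PGD as the mixed attacks, and all hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch of cooperative training with an adversarial-sample
# detector and a mixed-attack sampling strategy; not the paper's actual code.
import random

import torch
import torch.nn as nn
import torch.nn.functional as F


class RobustModel(nn.Module):
    """Backbone classifier plus an auxiliary head that flags adversarial inputs."""

    def __init__(self, num_classes=10, feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(              # stand-in for a CIFAR-10 CNN
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, feat_dim), nn.ReLU(),
        )
        self.classifier = nn.Linear(feat_dim, num_classes)
        self.detector = nn.Linear(feat_dim, 2)      # clean (0) vs. adversarial (1)

    def forward(self, x):
        feat = self.backbone(x)
        return self.classifier(feat), self.detector(feat)


def fgsm(model, x, y, eps=8 / 255):
    """Single-step gradient-sign attack on the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x)[0], y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0, 1).detach()


def pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=7):
    """Iterated gradient-sign attack projected back into the eps-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv)[0], y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1).detach()
    return x_adv


def sample_batch(model, x, y):
    """Sampling strategy: keep the batch clean or perturb it with one of several
    attacks, and return matching clean/adversarial labels for the detector."""
    attack = random.choice(["clean", "fgsm", "pgd"])
    if attack == "clean":
        return x, torch.zeros(len(x), dtype=torch.long, device=x.device)
    x_adv = fgsm(model, x, y) if attack == "fgsm" else pgd(model, x, y)
    return x_adv, torch.ones(len(x), dtype=torch.long, device=x.device)


def train_step(model, optimizer, x, y, det_weight=0.5):
    """One cooperative-training step: classification loss plus detection loss."""
    x_in, adv_label = sample_batch(model, x, y)
    class_logits, det_logits = model(x_in)
    loss = F.cross_entropy(class_logits, y) \
        + det_weight * F.cross_entropy(det_logits, adv_label)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this reading, the detection loss pushes the shared backbone toward features that separate clean from perturbed inputs, which is the cooperative-training effect the abstract describes, and randomizing the attack in sample_batch is what lets a single training run cover several attack types.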
