Paper Title

Hiding Visual Information via Obfuscating Adversarial Perturbations

Authors

Zhigang Su, Dawei Zhou, Nannan Wang, Decheng Liu, Zhen Wang, Xinbo Gao

Abstract

Growing leakage and misuse of visual information raise security and privacy concerns, which has promoted the development of information protection. Existing adversarial perturbation-based methods mainly focus on de-identification against deep learning models. However, the inherent visual information of the data has not been well protected. In this work, inspired by the Type-I adversarial attack, we propose an adversarial visual information hiding method to protect the visual privacy of data. Specifically, the method generates obfuscating adversarial perturbations to obscure the visual information of the data. Meanwhile, it ensures that the hidden targets are still correctly predicted by models. In addition, our method does not modify the parameters of the applied model, which makes it flexible for different scenarios. Experimental results on recognition and classification tasks demonstrate that the proposed method can effectively hide visual information while hardly affecting the performance of models. The code is available in the supplementary material.
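
To make the mechanism concrete, below is a minimal sketch of a Type-I-style obfuscation loop, assuming a PyTorch image classifier whose output logits are available. The pixel-distance hiding term, the weight lam, and the function name are illustrative assumptions, not the paper's exact objective.

# Minimal sketch (assumptions noted above): optimize a heavily perturbed
# image that no longer resembles the original, while the frozen model's
# prediction on it is preserved.
import torch

def hide_visual_info(model, x, steps=200, lr=0.01, lam=1.0):
    """Return an obfuscated version of images x (values in [0, 1])
    that keeps the model's original predictions."""
    model.eval()                                 # model parameters are never modified
    with torch.no_grad():
        y = model(x).argmax(dim=1)               # predictions to preserve
    x_adv = x.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([x_adv], lr=lr)
    ce = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        optimizer.zero_grad()
        pred_loss = ce(model(x_adv), y)          # keep predictions correct
        hide_loss = -((x_adv - x) ** 2).mean()   # push pixels away from the original
        (pred_loss + lam * hide_loss).backward() # lam trades off hiding vs. accuracy
        optimizer.step()
        with torch.no_grad():
            x_adv.clamp_(0.0, 1.0)               # stay a valid image
    return x_adv.detach()

Note the contrast with conventional (Type-II) attacks, which keep the image visually unchanged while flipping the prediction; here the perturbation is deliberately large in pixel space while the prediction is held fixed, and only the input is optimized, so the same frozen model can be reused across scenarios.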
