Paper Title

Hidden Cost of Randomized Smoothing

Paper Authors

Jeet Mohapatra, Ching-Yun Ko, Tsui-Wei Weng, Sijia Liu, Pin-Yu Chen, Luca Daniel

Paper Abstract

The fragility of modern machine learning models has drawn a considerable amount of attention from both academia and the public. While immense interest has gone into either crafting adversarial attacks as a way to measure the robustness of neural networks or devising worst-case analytical robustness verification with guarantees, few methods enjoy both scalability and robustness guarantees at the same time. As an alternative to these attempts, randomized smoothing adopts a different prediction rule that enables statistical robustness arguments which easily scale to large networks. However, in this paper, we point out the side effects of current randomized smoothing workflows. Specifically, we articulate and prove two major points: 1) the decision boundaries of smoothed classifiers will shrink, resulting in disparity in class-wise accuracy; 2) applying noise augmentation in the training process does not necessarily resolve the shrinking issue due to the inconsistent learning objectives.
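The "different prediction rule" referred to in the abstract is the smoothed classifier g(x) = argmax_c P[f(x + ε) = c] with ε ~ N(0, σ²I), estimated in practice by Monte Carlo sampling. Below is a minimal sketch of that rule, not the paper's implementation; the toy base classifier, the centroids, and all parameter values are illustrative assumptions.

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.5, n_samples=1000,
                     n_classes=10, rng=None):
    """Monte Carlo estimate of the randomized-smoothing prediction rule:
    g(x) = argmax_c P[ base_classifier(x + eps) = c ], eps ~ N(0, sigma^2 I).
    Returns the majority class over noisy copies of x."""
    rng = np.random.default_rng(rng)
    counts = np.zeros(n_classes, dtype=int)
    for _ in range(n_samples):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        counts[base_classifier(noisy)] += 1
    return int(np.argmax(counts))

# Illustrative base classifier: nearest of two fixed class centroids in 2-D.
centroids = np.array([[0.0, 0.0], [3.0, 3.0]])
def base(x):
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

# A point well inside class 0's region stays class 0 under smoothing.
print(smoothed_predict(base, np.array([0.2, 0.1]), sigma=0.5,
                       n_samples=500, n_classes=2, rng=0))  # -> 0
```

The abstract's first finding can be seen in this setup: points near the decision boundary of a small or thin class region get outvoted by noise samples landing in the neighboring class, so the smoothed boundary effectively contracts around that class.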
