Paper Title

Enhancing Adversarial Robustness for Deep Metric Learning

Paper Authors

Mo Zhou, Vishal M. Patel

Paper Abstract

Owing to the security implications of adversarial vulnerability, the adversarial robustness of deep metric learning models has to be improved. In order to avoid model collapse due to excessively hard examples, the existing defenses dismiss min-max adversarial training and instead learn inefficiently from a weak adversary. Conversely, we propose Hardness Manipulation to efficiently perturb the training triplet until it reaches a specified level of hardness for adversarial training, according to a harder benign triplet or a pseudo-hardness function. It is flexible, since regular training and min-max adversarial training are its boundary cases. Besides, Gradual Adversary, a family of pseudo-hardness functions, is proposed to gradually increase the specified hardness level during training for a better balance between performance and robustness. Additionally, an Intra-Class Structure loss term among benign and adversarial examples further improves model robustness and efficiency. Comprehensive experimental results suggest that the proposed method, although simple in its form, overwhelmingly outperforms the state-of-the-art defenses in terms of robustness, training efficiency, as well as performance on benign examples.
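
As a rough illustration of how the Hardness Manipulation and Gradual Adversary ideas in the abstract could be realized, below is a minimal PyTorch sketch based only on the abstract. It assumes a triplet hardness of the form d(anchor, positive) - d(anchor, negative) and an L_inf PGD-style adversary; all function names, the loss form, and the hyperparameters (`triplet_hardness`, `linear_gradual_hardness`, `hardness_manipulation`, `eps`, `alpha`) are hypothetical and are not taken from the paper's code.

```python
# Minimal sketch, assuming hardness = d(a, p) - d(a, n) and an L_inf PGD adversary.
# Names and hyperparameters are illustrative, not the paper's implementation.
import torch
import torch.nn.functional as F


def triplet_hardness(ea, ep, en):
    # Assumed triplet hardness: larger values mean a harder (more violated) triplet.
    return F.pairwise_distance(ea, ep) - F.pairwise_distance(ea, en)


def linear_gradual_hardness(step, total_steps, h_min=-0.2, h_max=0.0):
    # Illustrative pseudo-hardness schedule ("Gradual Adversary"): the destination
    # hardness rises linearly from h_min to h_max over the course of training.
    return h_min + (h_max - h_min) * step / total_steps


def hardness_manipulation(model, a, p, n, dest_hardness,
                          steps=8, eps=8 / 255, alpha=2 / 255):
    # Perturb a benign triplet (a, p, n) until its hardness approaches
    # `dest_hardness`, which may come from a harder benign triplet in the batch
    # or from a pseudo-hardness schedule such as the one above.
    xs = [a.clone(), p.clone(), n.clone()]
    deltas = [torch.zeros_like(x, requires_grad=True) for x in xs]
    for _ in range(steps):
        emb = [model(x + d) for x, d in zip(xs, deltas)]
        # Gap between the specified hardness and the current hardness;
        # zero once the triplet is already at least as hard as requested.
        gap = F.relu(dest_hardness - triplet_hardness(*emb)).mean()
        grads = torch.autograd.grad(gap, deltas)
        with torch.no_grad():
            for d, g in zip(deltas, grads):
                d -= alpha * g.sign()   # raise hardness toward the target
                d.clamp_(-eps, eps)     # stay within the L_inf budget
    # Assumes image tensors in [0, 1]; adjust if inputs are normalized.
    return [(x + d).clamp(0, 1).detach() for x, d in zip(xs, deltas)]
```

The adversarial triplet returned here would then be fed to the usual triplet loss during training, together with, per the abstract, an intra-class structure term between benign and adversarial embeddings.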
