Paper Title

Training robust neural networks using Lipschitz bounds

Paper Authors

Patricia Pauli, Anne Koch, Julian Berberich, Paul Kohler, Frank Allgöwer

Paper Abstract

Due to their susceptibility to adversarial perturbations, neural networks (NNs) are hardly used in safety-critical applications. One measure of robustness to such perturbations of the input is the Lipschitz constant of the input-output map defined by an NN. In this work, we propose a framework to train multi-layer NNs while at the same time encouraging robustness by keeping their Lipschitz constant small, thus addressing the robustness issue. More specifically, we design an optimization scheme based on the Alternating Direction Method of Multipliers (ADMM) that minimizes not only the training loss of an NN but also its Lipschitz constant, resulting in a semidefinite programming (SDP) based training procedure that promotes robustness. We design two versions of this training procedure. The first includes a regularizer that penalizes an accurate upper bound on the Lipschitz constant. The second allows a desired Lipschitz bound to be enforced on the NN at all times during training. Finally, we provide two examples showing that the proposed framework successfully increases the robustness of NNs.
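
For intuition: the Lipschitz constant of the input-output map f is the smallest L such that ||f(x) - f(y)|| <= L ||x - y|| for all inputs x, y, so a small L limits how much an adversarial perturbation of the input can change the output. The sketch below is not the authors' ADMM/SDP procedure; it is a minimal illustration of the idea behind the first variant (training loss plus a penalty on a Lipschitz upper bound), using the much cruder bound given by the product of the layers' spectral norms, which is a valid upper bound when the activations are 1-Lipschitz (e.g. ReLU). The architecture, data, and penalty weight mu are hypothetical placeholders.

```python
# Minimal sketch, NOT the paper's ADMM/SDP method: penalize a crude
# upper bound on the Lipschitz constant of a feed-forward NN, namely
# the product of the weight matrices' spectral norms. The paper
# penalizes (or enforces) a tighter, SDP-derived bound instead.
import torch
import torch.nn as nn

model = nn.Sequential(  # hypothetical architecture
    nn.Linear(2, 16), nn.ReLU(),
    nn.Linear(16, 16), nn.ReLU(),
    nn.Linear(16, 1),
)

def lipschitz_upper_bound(model: nn.Sequential) -> torch.Tensor:
    """Product of the spectral norms of all weight matrices.

    For 1-Lipschitz activations (e.g. ReLU), this is a valid but
    conservative upper bound on the network's Lipschitz constant.
    """
    bound = torch.ones(())
    for layer in model:
        if isinstance(layer, nn.Linear):
            # Largest singular value of the weight matrix.
            bound = bound * torch.linalg.matrix_norm(layer.weight, ord=2)
    return bound

# Hypothetical training data and penalty weight.
x, y = torch.randn(256, 2), torch.randn(256, 1)
mu = 1e-3

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(200):
    opt.zero_grad()
    # Variant 1 of the paper, in spirit: training loss plus a penalty
    # on an upper bound of the Lipschitz constant.
    loss = loss_fn(model(x), y) + mu * lipschitz_upper_bound(model)
    loss.backward()
    opt.step()
```

The paper's second variant instead keeps a prescribed Lipschitz bound satisfied at all times during training; it achieves this through the SDP constraint inside the ADMM scheme rather than through a penalty term like the one above.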
