Paper Title
PRoA: A Probabilistic Robustness Assessment against Functional Perturbations
Paper Authors
Paper Abstract
In safety-critical deep learning applications, robustness measurement is a vital pre-deployment phase. However, existing robustness verification methods are not sufficiently practical for deploying machine learning systems in the real world. On the one hand, these methods attempt to claim that no perturbations can ``fool'' deep neural networks (DNNs), which may be too stringent in practice. On the other hand, existing works rigorously consider only $L_p$-bounded additive perturbations in pixel space, even though perturbations such as colour shifting and geometric transformations occur more commonly and more frequently in the real world. Thus, from a practical standpoint, we present a novel and general {\it probabilistic robustness assessment method} (PRoA) based on adaptive concentration, which can measure the robustness of deep learning models against functional perturbations. PRoA provides statistical guarantees on the probabilistic robustness of a model, \textit{i.e.}, the probability of failure encountered by the trained model after deployment. Our experiments demonstrate the effectiveness and flexibility of PRoA in evaluating probabilistic robustness against a broad range of functional perturbations, and PRoA scales well to various large-scale deep neural networks compared with existing state-of-the-art baselines. For reproducibility, we release our tool on GitHub: \url{https://github.com/TrustAI/PRoA}.
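To make the abstract's idea concrete, below is a minimal sketch of estimating a model's failure probability under random functional perturbations with a statistical guarantee. It uses a fixed-sample Hoeffding bound rather than the adaptive concentration inequalities underlying PRoA, and the names (model_is_correct, sample_perturbation, rotate, predict) are illustrative placeholders, not part of the released tool.

import numpy as np

def estimate_failure_probability(model_is_correct, sample_perturbation,
                                 n_samples=10000, delta=0.01):
    """Monte Carlo estimate of the probability that a functionally perturbed
    input is misclassified, with a Hoeffding-style confidence radius.

    model_is_correct: callable taking a perturbation parameter and returning
        True if the model still predicts correctly under that perturbation.
    sample_perturbation: callable drawing one random perturbation parameter
        (e.g. a rotation angle or a colour-shift amount).
    """
    failures = 0
    for _ in range(n_samples):
        theta = sample_perturbation()
        if not model_is_correct(theta):
            failures += 1
    p_hat = failures / n_samples
    # Two-sided Hoeffding bound: with probability at least 1 - delta,
    # the true failure probability lies within +/- eps of p_hat.
    eps = np.sqrt(np.log(2.0 / delta) / (2.0 * n_samples))
    return p_hat, eps

# Example usage (hypothetical perturbation model):
# p_hat, eps = estimate_failure_probability(
#     model_is_correct=lambda angle: predict(rotate(x, angle)) == y,
#     sample_perturbation=lambda: np.random.uniform(-30.0, 30.0),
# )
# print(f"failure probability <= {p_hat + eps:.4f} with 99% confidence")

Unlike this fixed-sample sketch, an adaptive scheme such as PRoA's keeps drawing perturbations until the accumulated evidence suffices to certify or refute a target failure probability, which is what allows it to scale to large models while retaining the guarantee.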