Paper Title

Reachability In Simple Neural Networks

Authors

Marco Sälzer, Martin Lange

Abstract

We investigate the complexity of the reachability problem for (deep) neural networks: does it compute valid output given some valid input? It was recently claimed that the problem is NP-complete for general neural networks and specifications over the input/output dimension given by conjunctions of linear inequalities. We recapitulate the proof and repair some flaws in the original upper and lower bound proofs. Motivated by the general result, we show that NP-hardness already holds for restricted classes of simple specifications and neural networks. Allowing for a single hidden layer and an output dimension of one as well as neural networks with just one negative, zero and one positive weight or bias is sufficient to ensure NP-hardness. Additionally, we give a thorough discussion and outlook of possible extensions for this direction of research on neural network verification.
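The reachability problem described above can be illustrated with a toy decision procedure. The sketch below (not the paper's construction) encodes the restricted class from the abstract: a single hidden ReLU layer, output dimension one, weights drawn from {-1, 0, 1}, and input/output specifications given as conjunctions of linear inequalities. It searches a finite grid of candidate inputs for a witness; a real verifier would instead use SMT or MILP solving, since the problem is NP-complete.

```python
import itertools

def relu(xs):
    return [max(0.0, v) for v in xs]

def forward(x, W1, b1, w2, b2):
    # One hidden ReLU layer, scalar output (output dimension one).
    h = relu([sum(w * v for w, v in zip(row, x)) + b
              for row, b in zip(W1, b1)])
    return sum(w * v for w, v in zip(w2, h)) + b2

def satisfies(point, ineqs):
    # A specification is a conjunction of linear inequalities,
    # each encoded as (coeffs, bound) meaning sum(c * x) <= bound.
    return all(sum(c * v for c, v in zip(coeffs, point)) <= bound
               for coeffs, bound in ineqs)

def reachable(W1, b1, w2, b2, in_spec, out_spec, grid):
    # Naive witness search over a finite input grid; this is only an
    # illustration of the decision problem, not a complete procedure.
    dim = len(W1[0])
    for x in itertools.product(grid, repeat=dim):
        if satisfies(x, in_spec):
            y = forward(x, W1, b1, w2, b2)
            if satisfies([y], out_spec):
                return x  # witness input reaching the output spec
    return None

# Hypothetical instance: weights/biases restricted to {-1, 0, 1}.
W1 = [[1, -1], [0, 1]]
b1 = [0, 0]
w2 = [1, 1]
b2 = 0
# Input spec: 0 <= x_i <= 1, written as -x_i <= 0 and x_i <= 1.
in_spec = [([-1, 0], 0), ([1, 0], 1), ([0, -1], 0), ([0, 1], 1)]
# Output spec: y >= 1, written as -y <= -1.
out_spec = [([-1], -1)]

witness = reachable(W1, b1, w2, b2, in_spec, out_spec, [0.0, 0.5, 1.0])
```

Here `reachable` answers "yes" by exhibiting a concrete input satisfying the input constraints whose output satisfies the output constraints; the names and the instance are illustrative, not taken from the paper.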
