Paper Title
Minimizing Worst-Case Violations of Neural Networks
Paper Authors
Paper Abstract
Machine learning (ML) algorithms are remarkably good at approximating complex non-linear relationships. Most ML training processes, however, are designed to deliver ML tools with good average performance, but offer no guarantees about their worst-case estimation error. For safety-critical systems such as power systems, this poses a major barrier to their adoption. So far, existing approaches have only been able to determine the worst-case violations of already-trained ML algorithms. To the best of our knowledge, this is the first paper to introduce a neural network training procedure designed to achieve both good average performance and minimal worst-case violations. Using the Optimal Power Flow (OPF) problem as a guiding application, our approach (i) introduces a framework that reduces the worst-case generation constraint violations during training by incorporating them as a differentiable optimization layer; and (ii) presents a sequential neural network learning architecture to significantly accelerate training. We demonstrate the proposed architecture on four different test systems ranging from 39 buses to 162 buses, for both AC-OPF and DC-OPF applications.
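The core idea of (i) — augmenting the usual average-error loss with a term that penalizes the largest constraint violation seen during training — can be sketched in a toy form. The snippet below is not the authors' method (which uses a differentiable optimization layer on a neural network); it is a minimal illustrative analogue using a linear predictor, synthetic load data, and an assumed generation limit `g_max`, where a subgradient step on the worst batch violation is added to the mean-squared-error gradient:

```python
import numpy as np

# Illustrative sketch only: all data, limits, and hyperparameters are assumptions.
rng = np.random.default_rng(0)
n, d = 200, 3                       # samples, load dimension
loads = rng.uniform(0.5, 1.5, (n, d))
true_w = np.array([0.4, 0.3, 0.3])  # "ground-truth" dispatch rule (assumed)
targets = loads @ true_w            # generator set-points to imitate
g_max = 1.2                         # generation upper limit (assumed)

w = np.zeros(d)                     # linear stand-in for a neural network
lr, lam = 0.01, 2.0                 # learning rate, violation penalty weight
for _ in range(500):
    pred = loads @ w
    err = pred - targets
    # Worst-case generation-limit violation in this batch.
    viol = np.maximum(pred - g_max, 0.0)
    k = int(np.argmax(viol))
    grad = 2.0 * loads.T @ err / n          # gradient of the average (MSE) loss
    if viol[k] > 0:
        grad = grad + lam * loads[k]        # subgradient of max_i relu(pred_i - g_max)
    w -= lr * grad

print("worst-case violation after training:",
      float(np.max(np.maximum(loads @ w - g_max, 0.0))))
```

Training with the penalty drives the largest limit violation below what a pure average-loss fit (which would recover `true_w`) incurs, at a small cost in average accuracy; the paper's contribution is to make this trade-off work for neural networks via a differentiable optimization layer rather than a hand-coded subgradient.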