Paper Title


On the Adversarial Scenario-based Safety Testing of Robots: the Comparability and Optimal Aggressiveness

Authors

Bowen Weng, Guillermo A. Castillo, Wei Zhang, Ayonga Hereid

Abstract


This paper studies the class of scenario-based safety testing algorithms in the black-box safety testing configuration. For algorithms that share the same state-action set coverage but use different sampling distributions, it is commonly believed that prioritizing the exploration of high-risk state-actions leads to better sampling efficiency. We dispute this intuition by introducing an impossibility theorem that provably shows all safety testing algorithms differing only in their sampling distribution perform equally well, with the same expected sampling efficiency. Moreover, for testing algorithms covering different sets of state-actions, the sampling efficiency criterion is no longer applicable, as different algorithms do not necessarily converge to the same termination condition. We then propose a definition of testing aggressiveness based on the almost safe set concept, along with an unbiased and efficient algorithm that compares the aggressiveness of testing algorithms. Empirical observations from the safety testing of bipedal locomotion controllers and vehicle decision-making modules are also presented to support the proposed theoretical implications and methodologies.
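The intuition behind the impossibility result can be illustrated with a toy sketch. Assume (this is my reading, not the paper's formal setting) that a test terminates only once the shared state-action set is exhaustively covered, with each state-action tested once (sampling without replacement). Then a "risk-first" ordering and a uniformly shuffled ordering incur the same sampling cost and discover the same failures; all names below (`certify`, `is_safe`, the toy failure pattern) are hypothetical:

```python
import random

def certify(state_actions, is_safe, order):
    """Test every state-action once, in the given order.
    Returns (number of tests run, list of failures found)."""
    failures = []
    tests = 0
    for sa in order(state_actions):
        tests += 1
        if not is_safe(sa):
            failures.append(sa)
    return tests, failures

# Toy black-box system with a hidden failure pattern.
state_actions = list(range(100))
is_safe = lambda sa: sa % 17 != 0

# Two algorithms with identical coverage but different sampling orders.
risk_first = lambda S: sorted(S, reverse=True)     # "aggressive" prioritization
shuffled = lambda S: random.sample(S, len(S))      # uniform, without replacement

t1, f1 = certify(state_actions, is_safe, risk_first)
t2, f2 = certify(state_actions, is_safe, shuffled)

assert t1 == t2 == len(state_actions)  # identical cost to certify the set
assert sorted(f1) == sorted(f2)        # identical failures discovered
```

Under this (assumed) termination condition, prioritization changes only *when* failures are found, not the total cost of certifying the covered set, which is the flavor of the paper's claim that such algorithms have the same expected sampling efficiency.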
