Paper Title

Interpretable Uncertainty Quantification in AI for HEP

Paper Authors

Chen, Thomas Y., Dey, Biprateep, Ghosh, Aishik, Kagan, Michael, Nord, Brian, Ramachandra, Nesar

Paper Abstract

Estimating uncertainty is at the core of performing scientific measurements in HEP: a measurement is not useful without an estimate of its uncertainty. The goal of uncertainty quantification (UQ) is inextricably linked to the question, "how do we physically and statistically interpret these uncertainties?" The answer to this question depends not only on the computational task we aim to undertake, but also on the methods we use for that task. For artificial intelligence (AI) applications in HEP, there are several areas where interpretable methods for UQ are essential, including inference, simulation, and control/decision-making. There exist some methods for each of these areas, but they have not yet been demonstrated to be as trustworthy as more traditional approaches currently employed in physics (e.g., non-AI frequentist and Bayesian methods). Shedding light on the questions above requires additional understanding of the interplay of AI systems and uncertainty quantification. We briefly discuss the existing methods in each area and relate them to tasks across HEP. We then discuss recommendations for avenues to pursue to develop the necessary techniques for reliable widespread usage of AI with UQ over the next decade.
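The abstract contrasts AI-based UQ with the "more traditional approaches currently employed in physics (e.g., non-AI frequentist and Bayesian methods)." As a minimal illustration of the frequentist side, the sketch below computes a point estimate with its standard error and a bootstrap confidence interval for a toy measurement. The data, sample sizes, and interval choices are invented for illustration and are not taken from the paper.

```python
import random
import statistics

random.seed(0)

# Hypothetical toy "measurement": 100 observations of some quantity
# with true mean 5.0 and unit spread (stand-in for real HEP data).
data = [random.gauss(5.0, 1.0) for _ in range(100)]

# Frequentist point estimate and standard error of the mean.
mean = statistics.fmean(data)
sem = statistics.stdev(data) / len(data) ** 0.5

# Bootstrap 68% confidence interval: resample the data with
# replacement and take the central 68% of the resampled means.
n_boot = 2000
boot = sorted(
    statistics.fmean(random.choices(data, k=len(data)))
    for _ in range(n_boot)
)
lo, hi = boot[int(0.16 * n_boot)], boot[int(0.84 * n_boot)]

print(f"mean = {mean:.2f} +/- {sem:.2f}")
print(f"68% bootstrap CI = [{lo:.2f}, {hi:.2f}]")
```

The bootstrap interval here is directly interpretable as a statement about repeated sampling, which is the kind of physical and statistical interpretability the abstract argues AI-based UQ methods must also demonstrate before they can be trusted for inference, simulation, and decision-making.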
