Paper Title

Transformer Meets Boundary Value Inverse Problems

Authors

Ruchi Guo, Shuhao Cao, Long Chen

Abstract

A Transformer-based deep direct sampling method is proposed for electrical impedance tomography, a well-known severely ill-posed nonlinear boundary value inverse problem. Real-time reconstruction is achieved by evaluating a learned inverse operator between carefully designed data and the reconstructed images. This work offers a specific example for a fundamental question: whether, and how, one can benefit from the theoretical structure of a mathematical problem to develop task-oriented and structure-conforming deep neural networks. Specifically, inspired by direct sampling methods for inverse problems, the 1D boundary data at different frequencies are preprocessed by a partial-differential-equation-based feature map to yield 2D harmonic extensions as different input channels. Then, by introducing learnable non-local kernels, direct sampling is recast as a modified attention mechanism. The new method achieves superior accuracy over its predecessors and contemporary operator learners, and shows robustness to noise in benchmarks. This research strengthens the insight that, although the attention mechanism was invented for natural language processing tasks, it offers great flexibility to be modified in conformity with a priori mathematical knowledge, ultimately leading to the design of more physics-compatible neural architectures.
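To make the core idea concrete, the following is a minimal numpy sketch (not the authors' implementation) of how attention can act as a learnable non-local kernel over spatial features: the PDE-preprocessed boundary data are assumed to be flattened into `N` spatial tokens with `C` harmonic-extension channels, and the softmax of projected query/key products forms a data-dependent integral kernel applied to the values. All shapes, names, and the random initialization here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical shapes: C harmonic-extension channels on an H x W grid,
# flattened into N = H * W spatial tokens with C features each.
H, W, C, d = 8, 8, 4, 16
N = H * W
u = rng.standard_normal((N, C))  # stand-in for PDE-preprocessed boundary data

# Randomly initialized projections stand in for the learnable weights;
# the resulting attention matrix plays the role of a non-local kernel.
Wq, Wk, Wv = (rng.standard_normal((C, d)) for _ in range(3))
q, k, v = u @ Wq, u @ Wk, u @ Wv

kernel = softmax(q @ k.T / np.sqrt(d))  # N x N data-dependent kernel
out = kernel @ v                        # non-local "sampling" of features

print(out.shape)
```

Each row of `kernel` sums to one, so the operation is a convex combination over all spatial locations, i.e. a discrete non-local integral operator rather than a local convolution.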
