Paper Title

Equivariant Mesh Attention Networks

Paper Authors

Basu, Sourya, Gallego-Posada, Jose, Viganò, Francesco, Rowbottom, James, Cohen, Taco

Paper Abstract

Equivariance to symmetries has proven to be a powerful inductive bias in deep learning research. Recent works on mesh processing have concentrated on various kinds of natural symmetries, including translations, rotations, scaling, node permutations, and gauge transformations. To date, no existing architecture is equivariant to all of these transformations. In this paper, we present an attention-based architecture for mesh data that is provably equivariant to all transformations mentioned above. Our pipeline relies on the use of relative tangential features: a simple, effective, equivariance-friendly alternative to raw node positions as inputs. Experiments on the FAUST and TOSCA datasets confirm that our proposed architecture achieves improved performance on these benchmarks and is indeed equivariant, and therefore robust, to a wide variety of local/global transformations.
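The core idea behind relative features can be illustrated with a minimal sketch. The toy `relative_features` helper below is hypothetical (it is not the paper's implementation, and the paper's tangential features additionally live in local tangent frames); it only demonstrates why differences of node positions, unlike raw positions, are invariant to global translations and equivariant to global rotations:

```python
import numpy as np

def relative_features(positions, edges):
    """Relative (difference) features for each mesh edge.

    Hypothetical helper for illustration only. Shifting every node by
    the same vector leaves every difference unchanged (translation
    invariance); rotating every node rotates every difference by the
    same rotation (rotation equivariance).
    """
    return np.array([positions[j] - positions[i] for i, j in edges])

# Toy mesh: 3 nodes, 3 edges forming a triangle.
pos = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0]])
edges = [(0, 1), (1, 2), (2, 0)]

feats = relative_features(pos, edges)

# Global translation: relative features are unchanged.
shifted = pos + np.array([5.0, -2.0, 3.0])
assert np.allclose(relative_features(shifted, edges), feats)

# Global rotation about the z-axis: relative features rotate the same way.
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
assert np.allclose(relative_features(pos @ R.T, edges), feats @ R.T)
```

A network whose inputs are such relative quantities inherits these symmetries for free, which is why they are an equivariance-friendly substitute for raw node positions.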
