Paper Title
SPACE: Unsupervised Object-Oriented Scene Representation via Spatial Attention and Decomposition
Paper Authors
Paper Abstract
The ability to decompose complex multi-object scenes into meaningful abstractions like objects is fundamental to achieving higher-level cognition. Previous approaches for unsupervised object-oriented scene representation learning are based on either spatial-attention or scene-mixture approaches and are limited in scalability, which is a main obstacle to modeling real-world scenes. In this paper, we propose a generative latent variable model, called SPACE, that provides a unified probabilistic modeling framework combining the best of spatial-attention and scene-mixture approaches. SPACE can explicitly provide factorized object representations for foreground objects while also decomposing background segments of complex morphology. Previous models are good at either of these, but not both. SPACE also resolves the scalability problems of previous methods by incorporating parallel spatial attention, and is thus applicable to scenes with a large number of objects without performance degradation. We show through experiments on Atari and 3D-Rooms that SPACE achieves the above properties consistently in comparison to SPAIR, IODINE, and GENESIS. Results of our experiments can be found on our project website: https://sites.google.com/view/space-project-page
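The abstract's central idea, combining a grid of per-cell foreground object latents (spatial attention) with K background mixture components (scene mixture), can be illustrated with a minimal sketch. Everything below is a hedged, illustrative assumption: the names, shapes, grid size, and the flat-color "decoders" are stand-ins, not the paper's actual architecture or API.

```python
import numpy as np

# Illustrative sketch of a SPACE-style scene composition, based only on the
# abstract: foreground objects come from a G x G attention grid of per-cell
# latents; the background is a pixel-wise mixture over K components.
# All names and shapes are assumptions for illustration.

rng = np.random.default_rng(0)

H = W = 32   # image size
G = 4        # foreground attention grid is G x G (one object latent per cell)
K = 3        # number of background mixture components

# Foreground: each grid cell independently carries a presence variable and an
# appearance (a flat color here, standing in for a learned decoder).
z_pres = rng.random((G, G)) > 0.5      # per-cell object presence
fg_rgb = rng.random((G, G, 3))         # per-cell object appearance
cell = H // G

fg_image = np.zeros((H, W, 3))
fg_mask = np.zeros((H, W, 1))
for i in range(G):
    for j in range(G):
        if z_pres[i, j]:
            ys = slice(i * cell, (i + 1) * cell)
            xs = slice(j * cell, (j + 1) * cell)
            fg_image[ys, xs] = fg_rgb[i, j]
            fg_mask[ys, xs] = 1.0

# Background: K components with pixel-wise mixing weights (softmax over K),
# as in scene-mixture models.
bg_logits = rng.standard_normal((H, W, K))
bg_weights = np.exp(bg_logits) / np.exp(bg_logits).sum(axis=-1, keepdims=True)
bg_rgb = rng.random((K, 3))            # flat color per component (stand-in)
bg_image = bg_weights @ bg_rgb         # (H, W, 3)

# Final scene: foreground composited over the background mixture.
scene = fg_mask * fg_image + (1 - fg_mask) * bg_image
print(scene.shape)
```

Because each grid cell is processed independently, the foreground pass is trivially parallelizable, which is the intuition behind the abstract's claim of scalability to scenes with many objects.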