Paper Title
Leveraging Relational Information for Learning Weakly Disentangled Representations
Paper Authors
Paper Abstract
Disentanglement is a difficult property to enforce in neural representations. This might be due, in part, to a formalization of the disentanglement problem that focuses too heavily on separating relevant factors of variation of the data into single isolated dimensions of the neural representation. We argue that such a definition might be too restrictive and not necessarily beneficial in terms of downstream tasks. In this work, we present an alternative view on learning (weakly) disentangled representations, which leverages concepts from relational learning. We identify the regions of the latent space that correspond to specific instances of generative factors, and we learn the relationships among these regions in order to perform controlled changes to the latent codes. We also introduce a compound generative model that implements such a weak disentanglement approach. Our experiments show that the learned representations can separate the relevant factors of variation in the data, while preserving the information needed for effectively generating high-quality data samples.