Paper Title


SW-VAE: Weakly Supervised Learn Disentangled Representation Via Latent Factor Swapping

Authors

Jiageng Zhu, Hanchen Xie, Wael Abd-Almageed

Abstract


Representation disentanglement is an important goal of representation learning that benefits various downstream tasks. To achieve this goal, many unsupervised representation disentanglement approaches have been developed. However, training without any supervision signal has proven inadequate for learning disentangled representations. Therefore, we propose a novel weakly supervised training approach, named SW-VAE, which incorporates pairs of input observations as supervision signals by using the generative factors of the datasets. Furthermore, we introduce strategies that gradually increase the learning difficulty during training to smooth the training process. As shown on several datasets, our model achieves significant improvement over state-of-the-art (SOTA) methods on representation disentanglement tasks.
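The core idea named in the title, swapping latent factors between a pair of observations, can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the function name, signature, and the toy latent codes are all hypothetical. It only shows the swap operation itself; in the paper's setting, the swapped codes would be decoded and a reconstruction-style loss would supervise training.

```python
import numpy as np

def swap_latents(z_a, z_b, swap_idx):
    """Swap the latent dimensions listed in swap_idx between two codes.

    Weak supervision via paired observations: if a pair is known to share
    (or differ in) certain generative factors, exchanging the corresponding
    latent dimensions should still produce codes that decode to valid
    observations, which a training loss can enforce.
    (Hypothetical helper for illustration only.)
    """
    z_a_new, z_b_new = z_a.copy(), z_b.copy()
    z_a_new[swap_idx], z_b_new[swap_idx] = z_b[swap_idx], z_a[swap_idx]
    return z_a_new, z_b_new

# Toy example: two 4-dimensional latent codes; dimensions 1 and 3 are swapped.
z1 = np.array([0.1, 0.2, 0.3, 0.4])
z2 = np.array([1.0, 2.0, 3.0, 4.0])
s1, s2 = swap_latents(z1, z2, [1, 3])
print(s1)  # [0.1 2.  0.3 4. ]
print(s2)  # [1.  0.2 3.  0.4]
```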
