Paper Title
Evaluating Disentanglement in Generative Models Without Knowledge of Latent Factors
Paper Authors
Paper Abstract
Probabilistic generative models provide a flexible and systematic framework for learning the underlying geometry of data. However, model selection in this setting is challenging, particularly when selecting for ill-defined qualities such as disentanglement or interpretability. In this work, we address this gap by introducing a method for ranking generative models based on the training dynamics exhibited during learning. Inspired by recent theoretical characterizations of disentanglement, our method does not require supervision of the underlying latent factors. We evaluate our approach by demonstrating the need for disentanglement metrics which do not require labels, i.e., the underlying generative factors. We additionally demonstrate that our approach correlates with baseline supervised methods for evaluating disentanglement. Finally, we show that our method can be used as an unsupervised indicator for downstream performance on reinforcement learning and fairness-classification problems.