Paper title

Towards efficient representation identification in supervised learning

Authors

Kartik Ahuja, Divyat Mahajan, Vasilis Syrgkanis, Ioannis Mitliagkas

Abstract

Humans have a remarkable ability to disentangle complex sensory inputs (e.g., image, text) into simple factors of variation (e.g., shape, color) without much supervision. This ability has inspired many works that attempt to solve the following question: how do we invert the data generation process to extract those factors with minimal or no supervision? Several works in the literature on non-linear independent component analysis have established this negative result; without some knowledge of the data generation process or appropriate inductive biases, it is impossible to perform this inversion. In recent years, a lot of progress has been made on disentanglement under structural assumptions, e.g., when we have access to auxiliary information that makes the factors of variation conditionally independent. However, existing work requires a lot of auxiliary information, e.g., in supervised classification, it prescribes that the number of label classes should be at least equal to the total dimension of all factors of variation. In this work, we depart from these assumptions and ask: a) How can we get disentanglement when the auxiliary information does not provide conditional independence over the factors of variation? b) Can we reduce the amount of auxiliary information required for disentanglement? For a class of models where auxiliary information does not ensure conditional independence, we show theoretically and experimentally that disentanglement (to a large extent) is possible even when the auxiliary information dimension is much less than the dimension of the true latent representation.
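As a concrete illustration of the setting the abstract describes (not the paper's method), the following sketch generates data from a toy nonlinear process: true latent factors z are nonlinearly mixed into observations x, and the auxiliary information is a supervised label y derived from z whose number of classes is much smaller than the latent dimension. All names, dimensions, and the choice of mixing function here are illustrative assumptions.

```python
# Toy sketch of the data-generation setting: latent factors z are
# nonlinearly mixed into observations x; the auxiliary label y depends
# on z but has far fewer classes than z has dimensions.
import numpy as np

rng = np.random.default_rng(0)

d_latent = 8    # dimension of the true latent representation (assumed)
d_obs = 16      # observation dimension (assumed)
n_classes = 3   # auxiliary-label classes: much less than d_latent

n = 1000
z = rng.normal(size=(n, d_latent))      # true factors of variation

# Fixed random nonlinear mixing g: z -> x
W = rng.normal(size=(d_latent, d_obs))
x = np.tanh(z @ W)                      # observed data

# Low-dimensional auxiliary information: a label depending on z
V = rng.normal(size=(d_latent, n_classes))
y = np.argmax(z @ V, axis=1)            # n_classes << d_latent

print(x.shape, y.shape)
```

In this regime the paper asks whether an encoder trained with only y as supervision can still recover the factors in z, even though y carries far fewer than d_latent dimensions of information and does not render the factors conditionally independent.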
