Paper Title
Learning Diverse Representations for Fast Adaptation to Distribution Shift
Paper Authors
Paper Abstract
The i.i.d. assumption is a useful idealization that underpins many successful approaches to supervised machine learning. However, its violation can lead to models that learn to exploit spurious correlations in the training data, rendering them vulnerable to adversarial interventions, undermining their reliability, and limiting their practical application. To mitigate this problem, we present a method for learning multiple models, incorporating an objective that pressures each to learn a distinct way to solve the task. We propose a notion of diversity based on minimizing the conditional total correlation of final layer representations across models given the label, which we approximate using a variational estimator and minimize using adversarial training. To demonstrate our framework's ability to facilitate rapid adaptation to distribution shift, we train a number of simple classifiers from scratch on the frozen outputs of our models using a small amount of data from the shifted distribution. Under this evaluation protocol, our framework significantly outperforms a baseline trained using the empirical risk minimization principle.
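The evaluation protocol described above (fitting simple classifiers on the frozen final-layer outputs of pre-trained models, using only a small amount of data from the shifted distribution) can be sketched as follows. This is a minimal illustrative example, not the paper's actual code: the toy data, feature dimensions, and the choice of a ridge-regression probe are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_features(x, W):
    # Stand-in for a frozen model's final-layer representation
    # (a fixed random projection followed by a nonlinearity).
    return np.tanh(x @ W)

def fit_linear_probe(feats, labels, num_classes, ridge=1e-3):
    # One-vs-all ridge regression on one-hot targets: a minimal
    # "simple classifier trained from scratch on frozen outputs".
    Y = np.eye(num_classes)[labels]
    A = feats.T @ feats + ridge * np.eye(feats.shape[1])
    return np.linalg.solve(A, feats.T @ Y)  # probe weight matrix

def probe_accuracy(feats, probe_W, labels):
    preds = np.argmax(feats @ probe_W, axis=1)
    return float(np.mean(preds == labels))

# Toy "shifted-distribution" sample: few examples, labels determined
# by a single input coordinate.
n, d, k = 64, 8, 4
x = rng.normal(size=(n, d))
y = (x[:, 0] > 0).astype(int)
W = rng.normal(size=(d, k))  # frozen (never updated) model weights

feats = frozen_features(x, W)
probe = fit_linear_probe(feats, y, num_classes=2)
acc = probe_accuracy(feats, probe, y)
```

Because the backbone weights `W` stay frozen, only the small probe is fit on the shifted data, which is what makes adaptation cheap; under the paper's framework, one such probe would be fit per diverse model (or on their concatenated representations).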