Paper Title
Learning to Generate Novel Domains for Domain Generalization
Paper Authors
Paper Abstract
This paper focuses on domain generalization (DG), the task of learning from multiple source domains a model that generalizes well to unseen domains. A main challenge for DG is that the available source domains often exhibit limited diversity, hampering the model's ability to learn to generalize. We therefore employ a data generator to synthesize data from pseudo-novel domains to augment the source domains. This explicitly increases the diversity of available training domains and leads to a more generalizable model. To train the generator, we model the distribution divergence between source and synthesized pseudo-novel domains using optimal transport, and maximize the divergence. To ensure that semantics are preserved in the synthesized data, we further impose cycle-consistency and classification losses on the generator. Our method, L2A-OT (Learning to Augment by Optimal Transport), outperforms current state-of-the-art DG methods on four benchmark datasets.
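
The abstract describes the generator objective only at a high level: maximize an optimal-transport divergence between source and synthesized pseudo-novel domains while preserving semantics through cycle-consistency and classification losses. The PyTorch sketch below is a minimal, hypothetical illustration of how such a composite loss could be assembled; the module names `G` and `F_cls`, the `.features()` hook, the hand-rolled Sinkhorn estimator, and all hyperparameters are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of an L2A-OT-style generator objective as described in the
# abstract. All module names (G, F_cls), the .features() hook, hyperparameters,
# and the Sinkhorn estimator are illustrative assumptions, not the authors'
# released implementation.
import math

import torch
import torch.nn.functional as F


def sinkhorn_divergence(x, y, eps=0.05, n_iters=50):
    """Entropy-regularized OT cost between feature batches x: [n, d] and y: [m, d]."""
    n, m = x.size(0), y.size(0)
    cost = torch.cdist(x, y, p=2) ** 2                        # pairwise squared Euclidean cost
    log_a = torch.full((n,), -math.log(n), device=x.device)   # uniform marginals (log-domain)
    log_b = torch.full((m,), -math.log(m), device=y.device)
    f = torch.zeros(n, device=x.device)
    g = torch.zeros(m, device=y.device)
    for _ in range(n_iters):                                   # log-domain Sinkhorn iterations
        f = -eps * torch.logsumexp((g[None, :] - cost) / eps + log_b[None, :], dim=1)
        g = -eps * torch.logsumexp((f[:, None] - cost) / eps + log_a[:, None], dim=0)
    plan = torch.exp((f[:, None] + g[None, :] - cost) / eps + log_a[:, None] + log_b[None, :])
    return (plan * cost).sum()                                 # transport cost under the plan


def generator_loss(G, F_cls, x, y, d_src, d_novel, lambda_cyc=10.0, lambda_cls=1.0):
    """Composite generator objective: maximize OT divergence, preserve semantics.

    G maps (images, target-domain code) -> translated images; F_cls is a fixed
    classifier whose (assumed) .features() method returns penultimate features.
    """
    x_novel = G(x, d_novel)                    # synthesize pseudo-novel-domain images
    x_back = G(x_novel, d_src)                 # translate back to the source domain

    # Maximize the distribution divergence between source and synthesized
    # domains (implemented as minimizing its negative).
    loss_ot = -sinkhorn_divergence(F_cls.features(x), F_cls.features(x_novel))

    # Cycle-consistency: translating back should recover the original images.
    loss_cyc = F.l1_loss(x_back, x)

    # Classification loss: synthesized images must keep their class labels.
    loss_cls = F.cross_entropy(F_cls(x_novel), y)

    return loss_ot + lambda_cyc * loss_cyc + lambda_cls * loss_cls
```

In this reading, the synthesized images `x_novel` would then be mixed with the original source data when training the task classifier, which is the augmentation step the abstract credits for the improved generalization to unseen domains.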