Paper Title

Discrepancy Minimization in Domain Generalization with Generative Nearest Neighbors

Paper Authors

Prashant Pandey, Mrigank Raman, Sumanth Varambally, Prathosh AP

Paper Abstract

Domain generalization (DG) deals with the problem of domain shift, where a machine learning model trained on multiple source domains fails to generalize well on a target domain with different statistics. Multiple approaches have been proposed to solve the problem of domain generalization by learning domain-invariant representations across the source domains, but such representations do not guarantee generalization to the shifted target domain. We propose a Generative Nearest Neighbor based Discrepancy Minimization (GNNDM) method which provides a theoretical guarantee that is upper bounded by the error in the labeling process of the target. We employ a Domain Discrepancy Minimization Network (DDMN) that learns domain-agnostic features to produce a single source domain while preserving the class labels of the data points. Features extracted from this source domain are learned using a generative model whose latent space is used as a sampler to retrieve the nearest neighbors for the target data points. Unlike existing approaches, the proposed method does not require access to domain labels (a more realistic scenario). Empirically, we show the efficacy of our method on two datasets: PACS and VLCS. Through extensive experimentation, we demonstrate the effectiveness of the proposed method, which outperforms several state-of-the-art DG methods.
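
The retrieval step mentioned in the abstract can be illustrated with a minimal sketch (not the authors' released code): assuming a generative model G that maps latent codes to source-domain feature vectors, the latent space is sampled and the generated features closest to a target feature are returned as its nearest neighbors. All names here (G, z_dim, n_samples, k, target_feat) are illustrative assumptions; the full method additionally involves the DDMN-trained feature extractor, which is not shown.

import torch

@torch.no_grad()
def latent_nearest_neighbors(G, target_feat, z_dim=64, n_samples=4096, k=5):
    # Sample candidate latent codes and decode them into source-like features.
    z = torch.randn(n_samples, z_dim)
    candidates = G(z)                                   # shape: (n_samples, feature_dim)
    # L2 distance from the target feature to every generated candidate.
    dists = torch.cdist(target_feat.unsqueeze(0), candidates).squeeze(0)
    # Keep the k closest candidates as the target's generative nearest neighbors.
    idx = dists.topk(k, largest=False).indices
    return candidates[idx], z[idx]

# Hypothetical usage: target_feat is a feature vector extracted from a
# target-domain image by the shared feature extractor.
# neighbors, codes = latent_nearest_neighbors(generator, target_feat)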
