Paper Title
Distributionally Robust Domain Adaptation
Paper Authors
Paper Abstract
Domain Adaptation (DA) has recently received significant attention due to its potential to adapt a learning model across source and target domains with mismatched distributions. Since DA methods rely exclusively on the given source and target domain samples, they generally yield models that are vulnerable to noise and unable to adapt to unseen samples from the target domain, which calls for DA methods that guarantee the robustness and generalization of the learned models. In this paper, we propose DRDA, a distributionally robust domain adaptation method. DRDA leverages a distributionally robust optimization (DRO) framework to learn a robust decision function that minimizes the worst-case target domain risk and generalizes to any sample from the target domain by transferring knowledge from a given labeled source domain sample. We utilize the Maximum Mean Discrepancy (MMD) metric to construct an ambiguity set of distributions that provably contains the source and target domain distributions with high probability. Hence, this worst-case risk is shown to upper-bound the out-of-sample target domain loss. Our experimental results demonstrate that our formulation outperforms existing robust learning approaches.
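To make the MMD ingredient of the abstract concrete, the sketch below is a minimal, generic estimate of the squared MMD between source and target feature samples; it is not the paper's implementation. In generic DRO terms (the paper's exact formulation is not stated in the abstract), the worst-case risk would take the form min_f sup_{Q: MMD(Q, P_hat) <= epsilon} E_Q[loss(f(X), Y)] over an MMD ball around an empirical distribution P_hat. The RBF kernel, bandwidth value, and function names below are illustrative assumptions.

```python
# Minimal sketch of a (biased) squared-MMD estimate between source and
# target samples, assuming an RBF kernel; kernel choice, bandwidth, and
# names are illustrative only and not taken from the paper.
import numpy as np

def rbf_kernel(x, y, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between rows of x and rows of y."""
    sq_dists = (np.sum(x**2, axis=1)[:, None]
                + np.sum(y**2, axis=1)[None, :]
                - 2.0 * x @ y.T)
    return np.exp(-sq_dists / (2.0 * bandwidth**2))

def mmd_squared(source, target, bandwidth=1.0):
    """Biased estimate of squared MMD between source and target samples."""
    k_ss = rbf_kernel(source, source, bandwidth)
    k_tt = rbf_kernel(target, target, bandwidth)
    k_st = rbf_kernel(source, target, bandwidth)
    return k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()

# Toy usage: a mean shift between domains yields a noticeably larger
# squared MMD than two samples drawn from the same distribution.
rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(200, 5))
tgt = rng.normal(0.5, 1.0, size=(200, 5))
print(mmd_squared(src, tgt))
```

In an MMD-based ambiguity set, a quantity like this would bound how far candidate distributions may deviate from the observed source and target samples.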