Paper Title
Learning Domain-invariant Graph for Adaptive Semi-supervised Domain Adaptation with Few Labeled Source Samples
Paper Authors
Paper Abstract
Domain adaptation aims to generalize a model from a source domain to tackle tasks in a related but different target domain. Traditional domain adaptation algorithms assume that enough labeled data, which are treated as prior knowledge, are available in the source domain. However, these algorithms become infeasible when only a few labeled data exist in the source domain, and their performance consequently decreases significantly. To address this challenge, we propose a Domain-invariant Graph Learning (DGL) approach for domain adaptation with only a few labeled source samples. Firstly, DGL introduces the Nyström method to construct a plastic graph that shares similar geometric properties with the target domain. Then, DGL flexibly employs the Nyström approximation error to measure the divergence between the plastic graph and the source graph, formalizing the distribution mismatch from a geometric perspective. By minimizing the approximation error, DGL learns a domain-invariant geometric graph to bridge the source and target domains. Finally, we integrate the learned domain-invariant graph with semi-supervised learning and further propose an adaptive semi-supervised model to handle cross-domain problems. The results of extensive experiments on popular datasets verify the superiority of DGL, especially when only a few labeled source samples are available.
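The abstract's divergence measure builds on the standard Nyström approximation of a graph affinity (kernel) matrix. As a rough illustration only (not the authors' DGL implementation; the RBF kernel, the landmark choice, and the function names `rbf_kernel` / `nystrom_error` are assumptions made for this sketch), the Nyström approximation and its Frobenius-norm error can be computed as:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise squared Euclidean distances -> RBF (Gaussian) affinities.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_error(X, landmark_idx, gamma=1.0):
    """Frobenius-norm error of the Nystrom approximation of the
    RBF affinity matrix of X, using rows in landmark_idx as landmarks."""
    K = rbf_kernel(X, X, gamma)                 # full n x n kernel
    C = K[:, landmark_idx]                      # n x m cross-kernel block
    W = K[np.ix_(landmark_idx, landmark_idx)]   # m x m landmark block
    K_approx = C @ np.linalg.pinv(W) @ C.T      # Nystrom reconstruction
    return np.linalg.norm(K - K_approx, ord="fro")

# Toy data: the error shrinks as more landmarks are used.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
err_few = nystrom_error(X, np.arange(10))
err_many = nystrom_error(X, np.arange(50))
```

For a positive semi-definite kernel, enlarging the landmark set never increases this error, which is what makes it usable as a quantity to minimize when aligning the plastic graph with the source graph.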