Paper Title

Conditional Coupled Generative Adversarial Networks for Zero-Shot Domain Adaptation

Authors

Jinghua Wang, Jianmin Jiang

Abstract

Machine learning models trained in one domain perform poorly in other domains due to domain shift. Domain adaptation techniques address this problem by training transferable models from a label-rich source domain to a label-scarce target domain. Unfortunately, the majority of existing domain adaptation techniques rely on the availability of target-domain data, which limits their application to a small set of computer vision problems. In this paper, we tackle the challenging zero-shot domain adaptation (ZSDA) problem, where target-domain data are unavailable during the training stage. For this purpose, we propose conditional coupled generative adversarial networks (CoCoGAN), which extend coupled generative adversarial networks (CoGAN) into a conditional model. Compared with the existing state of the art, our proposed CoCoGAN is able to capture the joint distribution of dual-domain samples across two different tasks, i.e., the relevant task (RT) and an irrelevant task (IRT). We train CoCoGAN with both source-domain samples in the RT and dual-domain samples in the IRT to complete the domain adaptation. While the former provide high-level concepts of the unavailable target-domain data, the latter carry the correlation shared between the two domains in the RT and the IRT. To train CoCoGAN in the absence of target-domain data for the RT, we propose a new supervisory signal, i.e., the alignment between representations across tasks. Extensive experiments demonstrate that our proposed CoCoGAN outperforms the existing state of the art in image classification.
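The abstract's key training signal is the alignment between representations across tasks, which substitutes for the missing target-domain RT data. Below is a minimal NumPy sketch of that idea only; the fixed linear encoder and the toy data are hypothetical stand-ins (the paper's actual model uses learned CoGAN generator/discriminator branches), so this illustrates the shape of the alignment loss rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared encoder weights standing in for the weight-sharing
# layers of the coupled networks; in CoCoGAN these are learned, not fixed.
W_shared = rng.normal(size=(8, 4))

def encode(x):
    """Map raw samples to a low-dimensional shared representation."""
    return x @ W_shared

def alignment_loss(rep_rt, rep_irt):
    """Mean squared distance between task-wise mean representations.

    A toy version of the cross-task supervisory signal: representations
    of the relevant task (RT) are pulled toward those of the irrelevant
    task (IRT), compensating for the absent target-domain RT data.
    """
    return float(np.mean((rep_rt.mean(axis=0) - rep_irt.mean(axis=0)) ** 2))

# Toy batches: source-domain RT samples and source-domain IRT samples.
x_rt_src = rng.normal(size=(16, 8))
x_irt_src = rng.normal(size=(16, 8))

loss = alignment_loss(encode(x_rt_src), encode(x_irt_src))
```

In the full model this scalar would be minimized jointly with the adversarial losses of both GAN branches; here it only shows that the signal needs no target-domain RT samples to compute.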
