Paper Title

Beyond $\mathcal{H}$-Divergence: Domain Adaptation Theory With Jensen-Shannon Divergence

Paper Authors

Changjian Shui, Qi Chen, Jun Wen, Fan Zhou, Christian Gagné, Boyu Wang

Paper Abstract

We reveal the incoherence between the widely-adopted empirical domain adversarial training and its generally-assumed theoretical counterpart based on $\mathcal{H}$-divergence. Concretely, we find that $\mathcal{H}$-divergence is not equivalent to Jensen-Shannon divergence, the optimization objective in domain adversarial training. To this end, we establish a new theoretical framework by directly proving upper and lower target risk bounds based on the joint distributional Jensen-Shannon divergence. We further derive bi-directional upper bounds for marginal and conditional shifts. Our framework exhibits inherent flexibility for different transfer learning problems, making it applicable to various scenarios where $\mathcal{H}$-divergence-based theory fails to adapt. From an algorithmic perspective, our theory enables a generic guideline unifying the principles of semantic conditional matching, feature marginal matching, and label marginal shift correction. We instantiate algorithms for each principle and empirically validate the benefits of our framework on real datasets.
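
For context, here is a brief sketch (not taken from the paper itself) of why Jensen-Shannon divergence arises as the optimization objective in domain adversarial training, assuming the standard GAN-style discriminator objective. The Jensen-Shannon divergence between distributions $p$ and $q$ is

$$\mathrm{JS}(p \,\|\, q) = \tfrac{1}{2}\,\mathrm{KL}\!\left(p \,\Big\|\, \tfrac{p+q}{2}\right) + \tfrac{1}{2}\,\mathrm{KL}\!\left(q \,\Big\|\, \tfrac{p+q}{2}\right),$$

and the classical identity from the GAN literature states that, at the optimal discriminator,

$$\max_{D} \; \mathbb{E}_{x \sim p}[\log D(x)] + \mathbb{E}_{x \sim q}[\log(1 - D(x))] = 2\,\mathrm{JS}(p \,\|\, q) - 2\log 2.$$

This identity is the sense in which Jensen-Shannon divergence, rather than $\mathcal{H}$-divergence, is the quantity actually optimized by the adversarial game.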
