Paper Title


E-ADDA: Unsupervised Adversarial Domain Adaptation Enhanced by a New Mahalanobis Distance Loss for Smart Computing

Paper Authors

Ye Gao, Brian Baucom, Karen Rose, Kristina Gordon, Hongning Wang, John Stankovic

Paper Abstract


In smart computing, the labels of training samples for a specific task are not always abundant. However, labels may be available for samples in a relevant but different dataset. As a result, researchers have relied on unsupervised domain adaptation (UDA) to leverage the labels in one dataset (the source domain) to perform better classification in a different, unlabeled dataset (the target domain). Existing non-generative adversarial solutions for UDA aim to achieve domain confusion through adversarial training. The ideal scenario is that perfect domain confusion is achieved, but this is not guaranteed to be true. To further enforce domain confusion on top of adversarial training, we propose a novel UDA algorithm, \textit{E-ADDA}, which uses both a novel variation of the Mahalanobis distance loss and an out-of-distribution (OOD) detection subroutine. The Mahalanobis distance loss minimizes the distribution-wise distance between the encoded target samples and the distribution of the source domain, thus enforcing additional domain confusion on top of adversarial training. The OOD subroutine then further eliminates samples on which domain confusion is unsuccessful. We have performed extensive and comprehensive evaluations of E-ADDA in the acoustic and computer vision modalities. In the acoustic modality, E-ADDA outperforms several state-of-the-art UDA algorithms by up to 29.8%, measured in F1 score. In the computer vision modality, the evaluation results suggest that we achieve new state-of-the-art performance on popular UDA benchmarks such as Office-31 and Office-Home, outperforming the second-best-performing algorithm by up to 17.9%.
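The core idea of measuring how far encoded target samples sit from the source-domain feature distribution can be illustrated with a minimal sketch. This is not the paper's implementation; the function name `mahalanobis_loss`, the regularization term `eps`, and the choice to average per-sample distances are all illustrative assumptions.

```python
import numpy as np

def mahalanobis_loss(target_feats, source_feats, eps=1e-6):
    """Mean Mahalanobis distance from each encoded target sample to the
    source-domain feature distribution (illustrative sketch, not the
    paper's exact loss)."""
    mu = source_feats.mean(axis=0)
    # Regularize the covariance so its inverse is well defined.
    sigma = np.cov(source_feats, rowvar=False) + eps * np.eye(source_feats.shape[1])
    sigma_inv = np.linalg.inv(sigma)
    diff = target_feats - mu
    # Per-sample squared distance: d_i = (x_i - mu)^T Sigma^{-1} (x_i - mu)
    d2 = np.einsum("ij,jk,ik->i", diff, sigma_inv, diff)
    return np.sqrt(np.maximum(d2, 0.0)).mean()
```

Minimizing such a quantity pulls encoded target features toward the source distribution, which is the "additional domain confusion" the abstract describes; a thresholded per-sample distance could likewise serve as an OOD score for filtering samples where confusion fails.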
