Paper Title

Redundancy Reduction Twins Network: A Training Framework for Multi-output Emotion Regression

Paper Authors

Xin Jing, Meishu Song, Andreas Triantafyllopoulos, Zijiang Yang, Björn W. Schuller

Paper Abstract

In this paper, we propose the Redundancy Reduction Twins Network (RRTN), a redundancy-reduction training framework that minimizes redundancy by measuring the cross-correlation matrix between the outputs of the same network fed with distorted versions of a sample and bringing it as close to the identity matrix as possible. RRTN also applies a new loss function, the Barlow Twins loss, to help maximize the similarity of representations obtained from different distorted versions of a sample. However, as the distribution of losses can cause performance fluctuations in the network, we also propose the use of a Restrained Uncertainty Weight Loss (RUWL), or joint training, to identify the best weights for the loss function. Our best approach on CNN14 with the proposed methodology obtains a CCC of 0.678 for emotion regression on the ExVo Multi-task dev set, a 4.8% increase over the vanilla CNN14 CCC of 0.647, which constitutes a significant difference at the 95% confidence interval (two-tailed).
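
For intuition, the sketch below is a rough Python/PyTorch illustration (not the authors' implementation) of the two ingredients the abstract describes: a Barlow Twins style loss that drives the cross-correlation matrix between two distorted views toward the identity matrix, and a generic homoscedastic-uncertainty loss weighting. Function and class names, the lambda coefficient, and the eps constant are illustrative assumptions; the paper's Restrained Uncertainty Weight Loss additionally constrains the learned weights, which is not reproduced here.

```python
# Minimal sketch, assuming a PyTorch setup; not the authors' code.
import torch


def barlow_twins_loss(z_a, z_b, lambd=5e-3, eps=1e-6):
    # z_a, z_b: (batch, dim) outputs of the same network for two distorted
    # versions of the same samples.
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + eps)   # standardize each dimension
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + eps)

    n = z_a.size(0)
    c = (z_a.T @ z_b) / n                            # cross-correlation matrix (dim, dim)

    on_diag = (torch.diagonal(c) - 1).pow(2).sum()                # pull diagonal toward 1 (invariance)
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()   # push off-diagonal toward 0 (redundancy reduction)
    return on_diag + lambd * off_diag


class UncertaintyWeighting(torch.nn.Module):
    # Standard uncertainty-based weighting of multiple losses (Kendall et al., 2018);
    # the paper's RUWL further restrains these weights.
    def __init__(self, n_losses=2):
        super().__init__()
        self.log_vars = torch.nn.Parameter(torch.zeros(n_losses))

    def forward(self, losses):
        total = torch.zeros((), device=self.log_vars.device)
        for i, loss in enumerate(losses):
            total = total + torch.exp(-self.log_vars[i]) * loss + self.log_vars[i]
        return total
```

A typical use would combine a regression loss (e.g. a CCC-based loss on the emotion targets) with the Barlow Twins term through the weighting module, so the trade-off between the two objectives is learned rather than hand-tuned.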
