Paper Title
Learning to Learn Better for Video Object Segmentation
Paper Authors
Paper Abstract
Recently, the joint learning framework (JOINT) integrated matching-based transductive reasoning and online inductive learning to achieve accurate and robust semi-supervised video object segmentation (SVOS). However, using the mask embedding as the label to guide the generation of target features in the two branches may result in an inadequate target representation and degrade performance. Moreover, how to fuse the target features of the two branches sensibly, rather than simply adding them together, so as to avoid the adverse effect of one dominant branch, has not been investigated. In this paper, we propose a novel framework for SVOS that emphasizes Learning to Learn Better (LLB) target features, in which we design a discriminative label generation module (DLGM) and an adaptive fusion module to address these issues. Technically, the DLGM takes the background-filtered frame instead of the target mask as input and adopts a lightweight encoder to generate the target features, which serve as the label of the online few-shot learner and as the value of the transformer decoder, guiding the two branches to learn a more discriminative target representation. The adaptive fusion module maintains a learnable gate for each branch, which reweighs the feature representation element-wise and allows an adaptive amount of target information from each branch to flow into the fused target feature, thus preventing one branch from dominating and making the target feature more robust to distractors. Extensive experiments on public benchmarks show that our proposed LLB method achieves state-of-the-art performance.
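The abstract does not spell out the exact gating formulation, so the sketch below illustrates one plausible reading of the adaptive fusion module in NumPy: each branch gets an element-wise sigmoid gate computed from its own features, and the fused target feature is the sum of the gated branches instead of a plain addition. The function name `gated_fusion`, the affine gate parameterization, and the toy feature-map shapes are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(f_a, f_b, w_a, b_a, w_b, b_b):
    """Fuse two branch features with element-wise learnable gates.

    Each gate is a sigmoid of an affine map of its own branch's
    features, so the amount of target information each branch
    contributes is modulated per element rather than fixed, which
    keeps either branch from dominating the fused representation.
    (Hypothetical formulation; the paper's gate may differ.)
    """
    g_a = sigmoid(w_a * f_a + b_a)   # gate for branch A, values in (0, 1)
    g_b = sigmoid(w_b * f_b + b_b)   # gate for branch B, values in (0, 1)
    return g_a * f_a + g_b * f_b     # element-wise reweighed fusion

rng = np.random.default_rng(0)
C, H, W = 8, 4, 4                          # toy feature-map size
f_trans = rng.standard_normal((C, H, W))   # transformer (transductive) branch
f_online = rng.standard_normal((C, H, W))  # online few-shot (inductive) branch

# Per-channel gate parameters; zeros give neutral gates of 0.5 each,
# so training can move the gates away from an even 50/50 split.
w = np.zeros((C, 1, 1))
b = np.zeros((C, 1, 1))
fused = gated_fusion(f_trans, f_online, w, b, w, b)

# With neutral gates the fusion reduces to 0.5 * (f_trans + f_online),
# i.e. the simple additive fusion is a special case of the gated one.
assert np.allclose(fused, 0.5 * (f_trans + f_online))
```

With trained, non-zero gate parameters the two sigmoid gates diverge per element, letting the module suppress a branch locally (e.g. around distractors) while still drawing on it elsewhere.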