Paper Title

Attention in Attention: Modeling Context Correlation for Efficient Video Classification

Authors

Yanbin Hao, Shuo Wang, Pei Cao, Xinjian Gao, Tong Xu, Jinmeng Wu, Xiangnan He

Abstract

Attention mechanisms have significantly boosted the performance of video classification neural networks thanks to the utilization of perspective contexts. However, current research on video attention generally focuses on adopting a specific aspect of context (e.g., channel, spatial/temporal, or global context) to refine the features and neglects the underlying correlation between these aspects when computing attention. This leads to incomplete context utilization and hence limited performance improvement. To tackle the problem, this paper proposes an efficient attention-in-attention (AIA) method for element-wise feature refinement, which investigates the feasibility of inserting the channel context into the spatio-temporal attention learning module, referred to as CinST, as well as its reverse variant, referred to as STinC. Specifically, we instantiate the video feature contexts as dynamics aggregated along a specific axis with global average and max pooling operations. The workflow of an AIA module is that the first attention block uses one kind of context information to guide the gating-weight computation of the second attention block, which targets the other context. Moreover, all computational operations in the attention units act on the pooled dimension, which results in a negligible computational cost increase ($<$0.02\%). To verify our method, we densely integrate it into two classical video network backbones and conduct extensive experiments on several standard video classification benchmarks. The source code of our AIA is available at \url{https://github.com/haoyanbin918/Attention-in-Attention}.
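The abstract describes the mechanism only at a high level, so the following is a minimal, hypothetical PyTorch sketch of what a CinST-style attention-in-attention block could look like: an inner channel attention (contexts pooled over time and space with global average and max pooling) whose output guides an outer spatio-temporal attention, with both gates applied element-wise. The reduction ratio, the specific linear/convolutional gate layers, and the way the two gates are combined are assumptions not taken from the paper; the authors' actual implementation is in the linked repository.

```python
# A minimal, hypothetical sketch of a CinST-style attention-in-attention block,
# reconstructed only from the abstract; the official code at
# https://github.com/haoyanbin918/Attention-in-Attention may differ.
import torch
import torch.nn as nn


class CinSTSketch(nn.Module):
    """Channel-in-Spatio-Temporal attention (illustrative only).

    Input:  x of shape (N, C, T, H, W), a video feature map.
    Inner block: channel context (avg + max pooled over T, H, W) -> per-channel gates.
    Outer block: spatio-temporal context (avg + max pooled over C) of the
                 channel-refined feature -> per-position gates.
    Output: element-wise refined feature with the same shape as x.
    """

    def __init__(self, channels, reduction=4):  # reduction ratio is an assumption
        super().__init__()
        # Channel attention acts only on the C axis (T, H, W already pooled away).
        self.channel_fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatio-temporal attention acts on a 2-channel pooled map only.
        self.st_conv = nn.Conv3d(2, 1, kernel_size=3, padding=1)

    def forward(self, x):
        n, c, t, h, w = x.shape
        # Inner (channel) attention: global average and max pooling over T, H, W.
        ch_avg = x.mean(dim=(2, 3, 4))                  # (N, C)
        ch_max = x.amax(dim=(2, 3, 4))                  # (N, C)
        ch_gate = torch.sigmoid(
            self.channel_fc(ch_avg) + self.channel_fc(ch_max)
        ).view(n, c, 1, 1, 1)
        x_ch = x * ch_gate                              # channel-guided feature
        # Outer (spatio-temporal) attention on the channel-refined feature.
        st_avg = x_ch.mean(dim=1, keepdim=True)         # (N, 1, T, H, W)
        st_max = x_ch.amax(dim=1, keepdim=True)         # (N, 1, T, H, W)
        st_gate = torch.sigmoid(
            self.st_conv(torch.cat([st_avg, st_max], dim=1))
        )                                               # (N, 1, T, H, W)
        return x * ch_gate * st_gate                    # element-wise refinement


if __name__ == "__main__":
    feat = torch.randn(2, 64, 8, 14, 14)                # (N, C, T, H, W)
    print(CinSTSketch(64)(feat).shape)                  # torch.Size([2, 64, 8, 14, 14])
```

Because both gates are computed from pooled tensors (a length-C vector and a single-channel T×H×W map) rather than from the full feature, the extra computation is tiny relative to the backbone, which is consistent with the abstract's claim of a $<$0.02\% cost increase.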
