Title

A Coding Framework and Benchmark towards Low-Bitrate Video Understanding

Authors

Yuan Tian, Guo Lu, Yichao Yan, Guangtao Zhai, Li Chen, Zhiyong Gao

Abstract

Video compression is indispensable to most video analysis systems. Despite saving transmission bandwidth, it also deteriorates downstream video understanding tasks, especially at low-bitrate settings. To systematically investigate this problem, we first thoroughly review previous methods, revealing that three principles, i.e., task-decoupled design, label-free optimization, and a data-emerged semantic prior, are critical to a machine-friendly coding framework but have not been fully satisfied so far. In this paper, we propose a traditional-neural mixed coding framework that simultaneously fulfills all these principles by taking advantage of both traditional codecs and neural networks (NNs). On the one hand, traditional codecs can efficiently encode the pixel signal of videos but may distort the semantic information. On the other hand, highly non-linear NNs are proficient at condensing video semantics into a compact representation. The framework is optimized by ensuring that a transmission-efficient semantic representation of the video is preserved through the coding procedure; this representation is learned spontaneously from unlabeled data in a self-supervised manner. Videos collaboratively decoded from the two streams (codec and NN) are semantically rich as well as visually photo-realistic, empirically boosting performance on several mainstream downstream video analysis tasks without any post-adaptation procedure. Furthermore, introducing an attention mechanism and an adaptive modeling scheme strengthens the video semantic modeling ability of our approach. Finally, we build a low-bitrate video understanding benchmark with three downstream tasks on eight datasets, demonstrating the notable superiority of our approach. All code, data, and models will be available at \url{https://github.com/tianyuan168326/VCS-Pytorch}.
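
The abstract describes a two-stream design (a traditional codec for the pixel signal plus an NN stream for a compact semantic code, fused at decoding) but gives no code. Below is a minimal conceptual sketch in PyTorch of how such a pipeline could be wired together. Everything here is an illustrative assumption: `simulate_codec` merely stands in for a real low-bitrate codec such as HEVC, and `SemanticEncoder`/`FusionDecoder` are hypothetical module names, not the authors' API; the actual implementation lives in the repository linked above.

```python
# Conceptual sketch of a traditional-neural mixed coding pipeline.
# All module names and the surrogate loss are illustrative assumptions,
# not the authors' actual code; see the linked repository instead.
import torch
import torch.nn as nn
import torch.nn.functional as F


def simulate_codec(frames: torch.Tensor, downscale: int = 4) -> torch.Tensor:
    """Stand-in for a traditional codec at low bitrate, approximated here
    by aggressive down/up-sampling of the pixel signal."""
    b, c, h, w = frames.shape
    low = F.interpolate(frames, scale_factor=1 / downscale,
                        mode="bilinear", align_corners=False)
    return F.interpolate(low, size=(h, w), mode="bilinear", align_corners=False)


class SemanticEncoder(nn.Module):
    """NN stream: condenses video semantics into a compact representation."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(dim, 8, 3, stride=2, padding=1),  # few channels: cheap to transmit
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class FusionDecoder(nn.Module):
    """Collaboratively decodes the codec stream and the NN semantic stream."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.up = nn.Sequential(
            nn.Conv2d(8, dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False),
        )
        self.fuse = nn.Sequential(
            nn.Conv2d(dim + 3, dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(dim, 3, 3, padding=1),
        )

    def forward(self, codec_frames: torch.Tensor, code: torch.Tensor) -> torch.Tensor:
        sem = self.up(code)
        return self.fuse(torch.cat([codec_frames, sem], dim=1))


if __name__ == "__main__":
    frames = torch.rand(2, 3, 128, 128)   # a toy batch of frames
    enc, dec = SemanticEncoder(), FusionDecoder()
    codec_out = simulate_codec(frames)    # pixel stream (distorts semantics)
    code = enc(frames)                    # compact semantic stream
    recon = dec(codec_out, code)          # two-stream collaborative decoding
    # Self-supervised surrogate objective: a pixel fidelity term plus a
    # consistency term between the reconstruction's semantic code and the
    # original (detached) code, so semantics survive the coding procedure.
    loss = F.mse_loss(recon, frames) + F.mse_loss(enc(recon), code.detach())
    print(recon.shape, float(loss))
```

The last two lines mimic, in spirit, the label-free objective the abstract describes: the reconstruction must stay faithful both in pixel space and in the compact semantic space, with no task annotations involved.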
