Title

Learning to Learn to Compress

Authors

Nannan Zou, Honglei Zhang, Francesco Cricri, Hamed R. Tavakoli, Jani Lainema, Miska Hannuksela, Emre Aksu, Esa Rahtu

Abstract

In this paper we present an end-to-end meta-learned system for image compression. Traditional machine learning based approaches to image compression train one or more neural networks for generalization performance. However, at inference time, the encoder or the latent tensor output by the encoder can be optimized for each test image. This optimization can be regarded as a form of adaptation or benevolent overfitting to the input content. In order to reduce the gap between training and inference conditions, we propose a new training paradigm for learned image compression, which is based on meta-learning. In the first phase, the neural networks are trained normally. In the second phase, the Model-Agnostic Meta-Learning (MAML) approach is adapted to the specific case of image compression, where the inner loop performs latent tensor overfitting, and the outer loop updates both encoder and decoder neural networks based on the overfitting performance. Furthermore, after meta-learning, we propose to overfit and cluster the bias terms of the decoder on training image patches, so that at inference time the optimal content-specific bias terms can be selected at encoder-side. Finally, we propose a new probability model for lossless compression, which combines concepts from both multi-scale and super-resolution probability model approaches. We show the benefits of all our proposed ideas via carefully designed experiments.
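The two-loop scheme described in the abstract can be sketched in miniature. The following is a hypothetical NumPy toy, not the paper's implementation: a linear "decoder" stands in for the neural network, the inner loop overfits a latent vector `z` to one image with the decoder frozen (latent tensor overfitting), and the outer loop updates the decoder based on the reconstruction loss achieved *after* inner-loop adaptation, using a first-order approximation of the meta-gradient (the adapted `z` is treated as a constant).

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.normal(scale=0.1, size=(16, 8))  # toy linear decoder: 8-dim latent -> 16-dim "image"

def decode(D, z):
    return D @ z

def recon_loss(D, z, x):
    r = decode(D, z) - x
    return 0.5 * float(r @ r)

def inner_loop(D, x, steps=50, lr=0.1):
    """Latent-tensor overfitting: adapt z for one image, decoder fixed."""
    z = np.zeros(D.shape[1])
    for _ in range(steps):
        grad_z = D.T @ (decode(D, z) - x)  # gradient of the loss w.r.t. z
        z -= lr * grad_z
    return z

def outer_step(D, batch, lr=0.05):
    """Meta-update: improve the decoder's post-overfitting performance."""
    grad_D = np.zeros_like(D)
    for x in batch:
        z = inner_loop(D, x)                      # adapt first...
        grad_D += np.outer(decode(D, z) - x, z)   # ...then first-order gradient w.r.t. D
    return D - lr * grad_D / len(batch)

batch = [rng.normal(size=16) for _ in range(4)]
before = np.mean([recon_loss(D, inner_loop(D, x), x) for x in batch])
for _ in range(20):
    D = outer_step(D, batch)
after = np.mean([recon_loss(D, inner_loop(D, x), x) for x in batch])
# after < before: the decoder is now better at being overfitted to
```

The first-order shortcut (ignoring how `z` itself depends on `D`) mirrors first-order MAML; the full method would backpropagate through the inner-loop steps. All names and dimensions here are illustrative only.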
