Paper Title
Meta-Learning Sparse Compression Networks
Paper Authors
Paper Abstract
Recent work in Deep Learning has re-imagined the representation of data as functions mapping from a coordinate space to an underlying continuous signal. When such functions are approximated by neural networks, this introduces a compelling alternative to the more common multi-dimensional array representation. Recent work on such Implicit Neural Representations (INRs) has shown that, following careful architecture search, INRs can outperform established compression methods such as JPEG (e.g. Dupont et al., 2021). In this paper, we propose crucial steps towards making such ideas scalable: Firstly, we employ state-of-the-art network sparsification techniques to drastically improve compression. Secondly, we introduce the first method allowing for sparsification to be employed in the inner loop of commonly used Meta-Learning algorithms, drastically improving compression and reducing the computational cost of learning INRs. The generality of this formalism allows us to present results on diverse data modalities such as images, manifolds, signed distance functions, 3D shapes and scenes, several of which establish new state-of-the-art results.
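To make the two ingredients of the abstract concrete, the sketch below fits a small coordinate MLP (an INR in the spirit of SIREN) to a toy image while a fixed binary mask zeroes a fraction of its weights. This is a minimal illustration under assumed hyperparameters (hidden width, sparsity level, sine frequency), not the authors' implementation; in the paper's setting this per-signal fit would run as a few-step inner loop starting from meta-learned initial weights.

```python
# Minimal sketch (not the authors' code): an Implicit Neural Representation
# as an MLP mapping 2-D coordinates to RGB, with a fixed binary mask that
# zeroes a fraction of the weights to mimic sparsification. All names and
# hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

class INR(nn.Module):
    """Coordinate MLP with sine activations, in the spirit of SIREN."""
    def __init__(self, hidden=64, sparsity=0.5):
        super().__init__()
        self.fc1 = nn.Linear(2, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.fc3 = nn.Linear(hidden, 3)
        # Fixed binary masks: only the surviving weights (plus the mask)
        # need to be stored, which is where the compression gain comes from.
        self.masks = [
            (torch.rand_like(l.weight) > sparsity).float()
            for l in (self.fc1, self.fc2, self.fc3)
        ]

    def forward(self, coords):
        w1 = self.fc1.weight * self.masks[0]
        w2 = self.fc2.weight * self.masks[1]
        w3 = self.fc3.weight * self.masks[2]
        h = torch.sin(30.0 * nn.functional.linear(coords, w1, self.fc1.bias))
        h = torch.sin(30.0 * nn.functional.linear(h, w2, self.fc2.bias))
        return nn.functional.linear(h, w3, self.fc3.bias)

# Fit one INR to one "image": a grid of coordinates and target RGB values.
# In the paper's meta-learning setting, this fit would be the inner loop,
# run for only a handful of steps from meta-learned initial weights.
coords = torch.stack(torch.meshgrid(
    torch.linspace(-1, 1, 32), torch.linspace(-1, 1, 32), indexing="ij"
), dim=-1).reshape(-1, 2)
pixels = torch.rand(coords.shape[0], 3)  # stand-in for a real image

model = INR()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    opt.zero_grad()
    loss = ((model(coords) - pixels) ** 2).mean()
    loss.backward()
    opt.step()
```

Because the masked weights receive no gradient through the masked forward pass, only the surviving parameters adapt, so the compressed representation is just the sparse weight set together with the mask.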