Paper Title
Neural network approaches to point lattice decoding
Authors
Abstract
We characterize the complexity of the lattice decoding problem from a neural network perspective. The notion of a Voronoi-reduced basis is introduced to restrict the space of solutions to a binary set. On the one hand, this problem is shown to be equivalent to computing a continuous piecewise linear (CPWL) function restricted to the fundamental parallelotope. On the other hand, it is known that any function computed by a ReLU feed-forward neural network is CPWL. As a result, we count the number of affine pieces in the CPWL decoding function to characterize the complexity of the decoding problem. It is exponential in the space dimension $n$, which induces shallow neural networks of exponential size. For structured lattices we show that folding, a technique equivalent to using a deep neural network, makes it possible to reduce this complexity from exponential in $n$ to polynomial in $n$. Regarding unstructured MIMO lattices, in contrast to dense lattices, many pieces in the CPWL decoding function can be neglected for quasi-optimal decoding on the Gaussian channel. This makes the decoding problem easier and explains why shallow neural networks of reasonable size are more efficient with this category of lattices (in low to moderate dimensions).
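The folding construction in the paper exploits lattice symmetries; as a standalone illustration of the underlying depth-versus-width phenomenon, the well-known hat-function composition (not the paper's construction) shows how a depth-$k$ ReLU network can compute a CPWL function with $2^k$ affine pieces using only $O(k)$ units, whereas a shallow network would need exponentially many. The functions `hat` and `deep_saw` below are illustrative names chosen for this sketch.

```python
import numpy as np

def hat(x):
    # Hat function on [0, 1], expressible with a single ReLU layer:
    # hat(x) = 2*relu(x) - 4*relu(x - 0.5)
    # It rises from 0 to 1 on [0, 0.5] and falls back to 0 on [0.5, 1]:
    # a CPWL function with two affine pieces.
    return 2 * np.maximum(x, 0.0) - 4 * np.maximum(x - 0.5, 0.0)

def deep_saw(x, k):
    # Composing k hat layers doubles the number of affine pieces at
    # each layer, yielding a sawtooth with 2**k pieces on [0, 1],
    # while the network only grows linearly in depth.
    y = x
    for _ in range(k):
        y = hat(y)
    return y
```

For example, `deep_saw(x, 3)` oscillates between 0 and 1 four times on [0, 1], a shape that a depth-1 ReLU network could only match with a number of units proportional to the number of pieces.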