Title
Learning from few examples with nonlinear feature maps
Authors
Abstract
In this work we consider the problem of data classification in post-classical settings where the number of training examples consists of merely a few data points. We explore the phenomenon and reveal key relationships between the dimensionality of an AI model's feature space, non-degeneracy of data distributions, and the model's generalisation capabilities. The main thrust of our present analysis is on the influence of nonlinear feature transformations mapping the original data into higher- and possibly infinite-dimensional spaces on the resulting model's generalisation capabilities. Subject to appropriate assumptions, we establish new relationships between the intrinsic dimension of the transformed data and the probability of learning successfully from few presentations.
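To give a concrete feel for the kind of nonlinear feature transformation the abstract refers to, the sketch below (an illustrative example, not taken from the paper) shows a quadratic feature map lifting the XOR pattern, which no linear classifier separates in 2-D, into a 3-D space where a single hyperplane separates the two classes. The data, the map `phi`, and the hyperplane normal `w` are all assumptions chosen for illustration.

```python
import numpy as np

# Four XOR-labelled points: no line in the plane separates the classes.
X = np.array([[1, 1], [-1, -1], [1, -1], [-1, 1]], dtype=float)
y = np.array([1, 1, -1, -1])

def phi(X):
    """Nonlinear feature map (x1, x2) -> (x1, x2, x1*x2)."""
    return np.column_stack([X, X[:, 0] * X[:, 1]])

Z = phi(X)                      # lifted data in 3-D feature space
w = np.array([0.0, 0.0, 1.0])   # hyperplane normal: sign of the product term
pred = np.sign(Z @ w)
print(pred)                     # matches y: the lifted data is linearly separable
```

With only four training points, the choice of feature space decides whether a linear rule in that space can generalise at all; the paper's analysis quantifies this interplay via the intrinsic dimension of the transformed data.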