Paper Title

Scale-Localized Abstract Reasoning

Paper Authors

Yaniv Benny, Niv Pekar, Lior Wolf

Abstract

We consider the abstract relational reasoning task, which is commonly used as an intelligence test. Since some patterns have spatial rationales, while others are only semantic, we propose a multi-scale architecture that processes each query in multiple resolutions. We show that indeed different rules are solved by different resolutions and a combined multi-scale approach outperforms the existing state of the art in this task on all benchmarks by 5-54%. The success of our method is shown to arise from multiple novelties. First, it searches for relational patterns in multiple resolutions, which allows it to readily detect visual relations, such as location, in higher resolution, while allowing the lower resolution module to focus on semantic relations, such as shape type. Second, we optimize the reasoning network of each resolution proportionally to its performance, hereby we motivate each resolution to specialize on the rules for which it performs better than the others and ignore cases that are already solved by the other resolutions. Third, we propose a new way to pool information along the rows and the columns of the illustration-grid of the query. Our work also analyses the existing benchmarks, demonstrating that the RAVEN dataset selects the negative examples in a way that is easily exploited. We, therefore, propose a modified version of the RAVEN dataset, named RAVEN-FAIR. Our code and pretrained models are available at https://github.com/yanivbenny/MRNet.
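To make the three ideas above concrete, below is a minimal PyTorch sketch of a multi-resolution candidate scorer with pooling along the rows and columns of the 3x3 query grid, trained with performance-proportional loss weighting. This is not the authors' MRNet implementation: the names (ResolutionBranch, MultiScaleReasoner, proportional_loss), the resolutions (80, 40, 20), the tiny encoder, and the 0.1 smoothing constant are all illustrative assumptions; see the linked repository for the actual model.

```python
# A minimal sketch (not the authors' MRNet code) of: per-resolution reasoning
# branches, row/column pooling over the 3x3 grid, and loss weighting
# proportional to each branch's performance. Sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResolutionBranch(nn.Module):
    """Reasoning branch operating at a single input resolution."""
    def __init__(self, resolution: int, embed_dim: int = 64):
        super().__init__()
        self.resolution = resolution
        self.encoder = nn.Sequential(
            nn.Conv2d(1, embed_dim, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.scorer = nn.Linear(embed_dim, 1)

    def forward(self, grid: torch.Tensor) -> torch.Tensor:
        # grid: (B, 9, 1, H, W) -- a 3x3 panel grid with one candidate
        # answer placed in the bottom-right cell.
        b = grid.size(0)
        x = F.interpolate(grid.flatten(0, 1), size=self.resolution)
        e = self.encoder(x).flatten(1).view(b, 3, 3, -1)      # (B, 3, 3, D)
        # Pool relational features along the rows and the columns of the
        # grid (a stand-in for the paper's row/column pooling operator).
        rows = e.mean(dim=2)                                  # (B, 3, D)
        cols = e.mean(dim=1)                                  # (B, 3, D)
        pooled = torch.cat([rows, cols], dim=1).mean(dim=1)   # (B, D)
        return self.scorer(pooled).squeeze(-1)                # (B,)

class MultiScaleReasoner(nn.Module):
    """Scores every candidate answer at several resolutions."""
    def __init__(self, resolutions=(80, 40, 20)):
        super().__init__()
        self.branches = nn.ModuleList(
            [ResolutionBranch(r) for r in resolutions])

    def forward(self, grids: torch.Tensor) -> torch.Tensor:
        # grids: (B, C, 9, 1, H, W) -- C candidate completions per query.
        b, c = grids.shape[:2]
        flat = grids.flatten(0, 1)
        scores = [br(flat).view(b, c) for br in self.branches]
        return torch.stack(scores)                            # (S, B, C)

def proportional_loss(branch_scores, target):
    # branch_scores: (S, B, C); target: (B,). Each branch's loss is weighted
    # by its detached relative correctness, so a branch is pushed hardest on
    # examples it already handles better than the other branches (an assumed
    # reading of the paper's proportional optimization).
    losses = torch.stack(
        [F.cross_entropy(s, target, reduction="none") for s in branch_scores]
    )  # (S, B)
    with torch.no_grad():
        correct = (branch_scores.argmax(dim=-1) == target).float()  # (S, B)
        # +0.1 smoothing keeps every branch training when all are wrong.
        weights = (correct + 0.1) / (correct + 0.1).sum(dim=0, keepdim=True)
    return (weights * losses).sum(dim=0).mean()

# Example: 2 queries, 8 candidate answers each, 80x80 grayscale panels.
model = MultiScaleReasoner()
grids = torch.randn(2, 8, 9, 1, 80, 80)
target = torch.tensor([3, 5])
loss = proportional_loss(model(grids), target)
loss.backward()
```

Detaching the weights is the key design choice in this sketch: the gradient only flows through each branch's own loss, so a branch cannot improve its weight directly, only its accuracy on the rules it is already best at.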
