Paper Title


TIGGER: Scalable Generative Modelling for Temporal Interaction Graphs

Authors

Shubham Gupta, Sahil Manchanda, Srikanta Bedathur, Sayan Ranu

Abstract


There has been a recent surge in learning generative models for graphs. While impressive progress has been made on static graphs, work on generative modeling of temporal graphs is at a nascent stage with significant scope for improvement. First, existing generative models do not scale with either the time horizon or the number of nodes. Second, existing techniques are transductive in nature and thus do not facilitate knowledge transfer. Finally, due to their reliance on a one-to-one node mapping from the source to the generated graph, existing models leak node identity information and do not allow up-scaling/down-scaling the source graph size. In this paper, we bridge these gaps with a novel generative model called TIGGER. TIGGER derives its power through a combination of temporal point processes with auto-regressive modeling, enabling both transductive and inductive variants. Through extensive experiments on real datasets, we establish that TIGGER generates graphs of superior fidelity, while also being up to 3 orders of magnitude faster than the state-of-the-art.
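To make the abstract's central idea concrete, the following is a minimal illustrative sketch of what a temporal interaction graph generator looks like at its simplest: event timestamps are drawn from a temporal point process, and each event is paired with a node pair to form a `(u, v, t)` interaction. This is not TIGGER's actual model; the homogeneous Poisson intensity and uniform node sampling are placeholder assumptions standing in for the paper's learned point-process and autoregressive components, and the function names are hypothetical.

```python
import random

def sample_poisson_times(rate, horizon, rng=None):
    """Sample event times from a homogeneous Poisson process on (0, horizon].

    A stand-in for a learned temporal point process: inter-event gaps
    are exponential with the given (constant) rate.
    """
    rng = rng or random.Random(0)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate)  # exponential inter-event gap
        if t > horizon:
            break
        times.append(t)
    return times

def generate_interactions(nodes, rate, horizon, rng=None):
    """Build a toy temporal interaction graph as (u, v, t) triples.

    Each sampled timestamp is paired with a uniformly random node pair;
    a real model would instead condition node choice on history.
    """
    rng = rng or random.Random(0)
    events = []
    for t in sample_poisson_times(rate, horizon, rng):
        u, v = rng.sample(nodes, 2)  # distinct endpoints
        events.append((u, v, t))
    return events
```

For example, `generate_interactions(list(range(10)), rate=2.0, horizon=5.0)` yields a time-ordered list of roughly `rate * horizon` interactions among 10 nodes.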
