Paper Title

Finding Diverse and Predictable Subgraphs for Graph Domain Generalization

Authors

Junchi Yu, Jian Liang, Ran He

Abstract

This paper focuses on out-of-distribution generalization on graphs, where performance drops due to unseen distribution shifts. Previous graph domain generalization works typically resort to learning an invariant predictor across different source domains. However, they assume sufficient source domains are available during training, which poses a huge challenge for realistic applications. By contrast, we propose a new graph domain generalization framework, dubbed DPS, which constructs multiple populations from the source domains. Specifically, DPS aims to discover multiple \textbf{D}iverse and \textbf{P}redictable \textbf{S}ubgraphs with a set of generators, namely, subgraphs that are different from each other yet all share the same semantics as the input graph. These generated source domains are exploited to learn an \textit{equi-predictive} graph neural network (GNN) across domains, which is expected to generalize well to unseen target domains. Generally, DPS is model-agnostic and can be incorporated with various GNN backbones. Extensive experiments on both node-level and graph-level benchmarks show that the proposed DPS achieves impressive performance on various graph domain generalization tasks.
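To make the abstract concrete, below is a minimal, illustrative PyTorch sketch of the kind of pipeline it describes: several subgraph generators that soft-mask the edges of an input graph to form generated "domains", a shared GNN classifier applied to each generated subgraph, and a loss combining per-subgraph prediction ("predictable"), a mask-dissimilarity term ("diverse"), and cross-domain prediction consistency ("equi-predictive"). All class names, the dense-adjacency formulation, and the specific loss terms are assumptions for illustration only, not the authors' actual DPS implementation or objective.

```python
# Illustrative sketch only: a rough reading of the DPS abstract in plain PyTorch.
# Generator design and loss terms are assumptions, not the paper's formulation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleGNNLayer(nn.Module):
    """One dense graph-convolution step: H' = ReLU((A / deg) @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        return F.relu(self.lin((adj / deg) @ x))


class SubgraphGenerator(nn.Module):
    """Scores each edge; a sigmoid gives a soft edge mask defining one generated domain."""
    def __init__(self, feat_dim, hidden=32):
        super().__init__()
        self.node_enc = nn.Linear(feat_dim, hidden)
        self.edge_score = nn.Linear(2 * hidden, 1)

    def forward(self, x, adj):
        h = torch.tanh(self.node_enc(x))                       # (N, hidden)
        n = h.size(0)
        pair = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                          h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        mask = torch.sigmoid(self.edge_score(pair)).squeeze(-1)  # (N, N)
        return adj * mask                                        # masked adjacency


class SharedGNNClassifier(nn.Module):
    """GNN backbone shared across all generated domains."""
    def __init__(self, feat_dim, hidden, num_classes):
        super().__init__()
        self.layer1 = SimpleGNNLayer(feat_dim, hidden)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x, adj):
        return self.head(self.layer1(x, adj))


def dps_style_loss(generators, classifier, x, adj, y):
    """Predictable: each generated subgraph retains label information.
    Diverse: edge masks differ across generators.
    Equi-predictive: the shared GNN predicts consistently across them."""
    logits, masks = [], []
    for gen in generators:
        sub_adj = gen(x, adj)
        masks.append(sub_adj)
        logits.append(classifier(x, sub_adj))
    pred_loss = sum(F.cross_entropy(l, y) for l in logits) / len(logits)
    probs = torch.stack([F.softmax(l, dim=-1) for l in logits])  # (K, N, C)
    equi_loss = probs.var(dim=0).mean()      # penalize disagreement across domains
    div_loss = -torch.stack(masks).var(dim=0).mean()  # reward mask diversity
    return pred_loss + equi_loss + div_loss
```

A toy usage example on a random graph (10 nodes, 5 features, 3 classes, 4 generators), again purely illustrative:

```python
x, adj = torch.randn(10, 5), (torch.rand(10, 10) > 0.5).float()
y = torch.randint(0, 3, (10,))
gens = nn.ModuleList([SubgraphGenerator(5) for _ in range(4)])
clf = SharedGNNClassifier(5, 16, 3)
loss = dps_style_loss(gens, clf, x, adj, y)
loss.backward()
```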
