Paper Title

Impact of network topology changes on information source localization

Authors

Machura, Piotr, Paluch, Robert

Abstract


Well-established methods of locating the source of information in a complex network are usually derived with the assumption of complete and exact knowledge of network topology. We study the performance of three such algorithms (LPTVA, GMLA and Pearson correlation algorithm) in scenarios that do not fulfill this assumption by modifying the network prior to localization. This is done by adding superfluous new links, hiding existing ones, or reattaching links in accordance with the network's structural Hamiltonian. We find that GMLA is highly resilient to the addition of superfluous edges, as its precision falls by more than statistical uncertainty only when the number of links is approximately doubled. On the other hand, if the edge set is underestimated or reattachment has taken place, the performance of GMLA drops significantly. In such a scenario the Pearson algorithm is preferable, retaining most of its performance when other simulation parameters favor localization (high density of observers, highly deterministic propagation). It is also generally more accurate than LPTVA, as well as orders of magnitude faster. The aforementioned differences between localization algorithms can be intuitively explained, although a need for further theoretical research is noted.
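To make the correlation idea concrete: the Pearson algorithm scores each candidate source by how strongly the observers' arrival times correlate with their hop distances from that candidate. The following is a minimal pure-Python sketch under the simplest assumption of fully deterministic propagation (arrival time equals hop distance from the true source); the function names and the toy graph are illustrative, not the paper's implementation.

```python
from collections import deque

def bfs_distances(adj, start):
    """Hop distances from `start` to every reachable node (BFS)."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def pearson_localize(adj, observers, times):
    """Return the candidate node whose distances to the observers
    correlate best with the observed arrival times."""
    best, best_r = None, -2.0
    for cand in adj:
        d = bfs_distances(adj, cand)
        r = pearson([times[o] for o in observers],
                    [d.get(o, len(adj)) for o in observers])
        if r > best_r:
            best, best_r = cand, r
    return best

# Toy tree: deterministic spread from node 0, so each observer's
# arrival time equals its hop distance from the true source.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 5], 3: [1, 4], 4: [3], 5: [2]}
observers = [4, 5, 2]
times = {4: 3, 5: 2, 2: 1}
print(pearson_localize(adj, observers, times))  # -> 0 (the true source)
```

Because each candidate only requires one BFS and one correlation over the observer set, this scheme is cheap per node, which is consistent with the abstract's observation that the Pearson algorithm runs orders of magnitude faster than LPTVA. Under noisy (less deterministic) propagation or a perturbed edge set, the distance vectors change and the correlation peak flattens, which matches the reported sensitivity to hidden or reattached links.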
