Paper title
Accelerating multimodal gravitational waveforms from precessing compact binaries with artificial neural networks
Paper authors
Paper abstract
Gravitational waves from the coalescences of black holes and neutron stars afford us the unique opportunity to determine the sources' properties, such as their masses and spins, with unprecedented accuracy. To do so, however, theoretical models of the emitted signal that are i) extremely accurate and ii) computationally highly efficient are necessary. The inclusion of more detailed physics, such as higher-order multipoles and relativistic spin-induced orbital precession, increases the complexity and hence also the computational cost of waveform models, which presents a severe bottleneck to the parameter inference problem. A popular method to generate waveforms more efficiently is to build a fast surrogate model of a slower one. In this paper, we show that traditional surrogate modelling methods combined with artificial neural networks can be used to build a computationally highly efficient yet still accurate emulation of multipolar time-domain waveform models of precessing binary black holes. We apply this method to the state-of-the-art waveform model SEOBNRv4PHM and find significant computational improvements: on a traditional CPU, the typical generation of a single waveform using our neural network surrogate SEOBNN_v4PHM_4dq2 takes 18 ms for a binary black hole with a total mass of $44\,M_{\odot}$ when generated from 20 Hz. In comparison to SEOBNRv4PHM itself, this amounts to an improvement in computational efficiency of two orders of magnitude. Utilising additional GPU acceleration, we find that this speed-up can be increased further by generating batches of waveforms simultaneously. Even without additional GPU acceleration, this dramatic decrease in waveform generation cost can reduce the inference timescale from weeks to hours.
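The surrogate strategy the abstract describes can be illustrated with a deliberately minimal sketch: a small neural network maps source parameters to the coefficients of a fixed reduced basis, and the waveform is reconstructed as a linear combination of basis vectors. Everything below (the sinusoidal basis, the network size, the random weights) is a hypothetical stand-in, not the actual SEOBNN_v4PHM_4dq2 architecture or its trained basis; it only shows why per-waveform evaluation is cheap and why batched evaluation maps naturally onto GPU-style matrix operations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy reduced basis: 3 basis vectors over 256 time samples, standing in
# for a basis built from training waveforms of the slower model.
n_basis, n_times = 3, 256
t = np.linspace(0.0, 1.0, n_times)
basis = np.stack([np.sin(2 * np.pi * (k + 1) * t) for k in range(n_basis)])

def surrogate_coefficients(params, W1, b1, W2, b2):
    """One-hidden-layer MLP mapping source parameters (e.g. mass ratio
    and spin components) to basis coefficients. Weights are random
    placeholders for trained values."""
    h = np.tanh(params @ W1 + b1)
    return h @ W2 + b2

# Random "trained" weights for a hypothetical 4-dimensional parameter space.
W1 = rng.normal(size=(4, 16)); b1 = rng.normal(size=16)
W2 = rng.normal(size=(16, n_basis)); b2 = rng.normal(size=n_basis)

# Batched evaluation: many parameter sets at once; on a GPU the same
# matrix products run over the whole batch simultaneously.
params = rng.uniform(-1.0, 1.0, size=(100, 4))
coeffs = surrogate_coefficients(params, W1, b1, W2, b2)  # shape (100, 3)
waveforms = coeffs @ basis                               # shape (100, 256)
print(waveforms.shape)
```

Generating a waveform then costs only two small matrix multiplications and one matrix product against the basis, which is the source of the orders-of-magnitude speed-up over re-solving the underlying dynamics for every parameter point.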