Paper Title

A deep learning model for noise prediction on near-term quantum devices

Authors

Alexander Zlokapa, Alexandru Gheorghiu

Abstract

We present an approach for a deep-learning compiler of quantum circuits, designed to reduce the output noise of circuits run on a specific device. We train a convolutional neural network on experimental data from a quantum device to learn a hardware-specific noise model. A compiler then uses the trained network as a noise predictor and inserts sequences of gates in circuits so as to minimize expected noise. We tested this approach on the IBM 5-qubit devices and observed a reduction in output noise of 12.3% (95% CI [11.5%, 13.0%]) compared to the circuits obtained by the Qiskit compiler. Moreover, the trained noise model is hardware-specific: applying a noise model trained on one device to another device yields a noise reduction of only 5.2% (95% CI [4.9%, 5.6%]). These results suggest that device-specific compilers using machine learning may yield higher fidelity operations and provide insights for the design of noise models.
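The sketch below is a minimal illustration of the pipeline the abstract describes: a convolutional network that maps an encoded circuit to a predicted noise value, and a compiler pass that inserts identity-equivalent gate pairs wherever the prediction drops. It is not the authors' implementation; the circuit encoding, network architecture, gate alphabet, and candidate insertions are all assumptions made for illustration only, and the model would need to be trained on measured device data before use.

```python
# Illustrative sketch (not the paper's code): a CNN noise predictor over a
# one-hot circuit encoding, plus a greedy pass that inserts identity-equivalent
# gate pairs when the predicted noise decreases. Encoding, architecture, and
# candidate gates are assumptions.
import torch
import torch.nn as nn

N_QUBITS = 5          # matches the IBM 5-qubit devices mentioned in the abstract
MAX_DEPTH = 16        # assumed fixed circuit depth for the encoding
GATE_VOCAB = ["id", "x", "sx", "rz", "cx_ctrl", "cx_tgt"]  # assumed gate alphabet

class NoisePredictor(nn.Module):
    """CNN mapping an encoded circuit to a scalar noise estimate."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(len(GATE_VOCAB), 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * N_QUBITS * MAX_DEPTH, 1),
        )

    def forward(self, x):  # x: (batch, |vocab|, n_qubits, depth)
        return self.net(x).squeeze(-1)

def encode(circuit):
    """One-hot encode a circuit given as a list of (gate, qubit, layer) tuples."""
    x = torch.zeros(1, len(GATE_VOCAB), N_QUBITS, MAX_DEPTH)
    for gate, qubit, layer in circuit:
        x[0, GATE_VOCAB.index(gate), qubit, layer] = 1.0
    return x

def greedy_insert(circuit, model, candidates=(("x", "x"), ("sx", "sx"))):
    """Insert identity-equivalent gate pairs that lower the predicted noise;
    a stand-in for the device-specific compiler pass described in the abstract."""
    best = circuit
    with torch.no_grad():
        best_noise = model(encode(best)).item()
        for qubit in range(N_QUBITS):
            for layer in range(MAX_DEPTH - 1):
                occupied = {(q, l) for _, q, l in best}
                if (qubit, layer) in occupied or (qubit, layer + 1) in occupied:
                    continue
                for g1, g2 in candidates:
                    trial = best + [(g1, qubit, layer), (g2, qubit, layer + 1)]
                    noise = model(encode(trial)).item()
                    if noise < best_noise:
                        best, best_noise = trial, noise
    return best, best_noise

if __name__ == "__main__":
    model = NoisePredictor()   # in practice, trained on measured device noise
    circuit = [("x", 0, 0), ("cx_ctrl", 0, 1), ("cx_tgt", 1, 1)]
    compiled, predicted = greedy_insert(circuit, model)
    print(len(compiled), predicted)
```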
