Paper Title
Decohering Tensor Network Quantum Machine Learning Models
Paper Authors
Paper Abstract
Tensor network quantum machine learning (QML) models are promising applications for near-term quantum hardware. While decoherence of qubits is expected to decrease the performance of QML models, it is unclear to what extent the diminished performance can be compensated for by adding ancillas to the models and thereby increasing their virtual bond dimension. Here we investigate the competition between decoherence and the addition of ancillas on the classification performance of two models, and analyze the effect of decoherence from the perspective of regression. We present numerical evidence that the fully decohered unitary tree tensor network (TTN) with two ancillas performs at least as well as the non-decohered unitary TTN, suggesting that it is beneficial to add at least two ancillas to the unitary TTN regardless of the amount of decoherence that may consequently be introduced.
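
The two mechanisms named in the abstract can be illustrated with a toy simulation: appending ancillas to a unitary TTN node enlarges the virtual bond carried to the next layer (from 2 to 2^(1+n_anc)), while full decoherence removes all off-diagonal coherences of the propagated density matrix. The sketch below is a minimal illustration, not the paper's implementation; the feature encoding, the Haar-random unitary standing in for a trained block, and the choice of which qubit is traced out are assumptions made only for demonstration.

```python
# Minimal sketch (illustrative assumptions, not the paper's architecture):
# one unitary TTN node with ancillas, followed by full decoherence.
import numpy as np
from scipy.stats import unitary_group

def encode(x):
    """Qubit feature map |phi(x)> = [cos(pi*x/2), sin(pi*x/2)] (a common choice)."""
    v = np.array([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)])
    return np.outer(v, v)  # density matrix of the encoded qubit

def ttn_node(rho_a, rho_b, n_anc=2, seed=0):
    """One node: two input qubits plus n_anc ancillas in |0>, a unitary block,
    then a partial trace keeping 1 + n_anc qubits, i.e. bond dimension 2**(1+n_anc)."""
    anc = np.zeros((2**n_anc, 2**n_anc))
    anc[0, 0] = 1.0                                    # ancillas in |0...0><0...0|
    rho = np.kron(np.kron(rho_a, rho_b), anc)          # joint input state
    dim = rho.shape[0]
    U = unitary_group.rvs(dim, random_state=seed)      # placeholder for a trained unitary
    rho = U @ rho @ U.conj().T
    d_keep = 2**(1 + n_anc)
    d_drop = dim // d_keep
    rho = rho.reshape(d_keep, d_drop, d_keep, d_drop)
    return np.trace(rho, axis1=1, axis2=3)             # trace out the discarded qubit

def fully_decohere(rho):
    """Full decoherence in the computational basis: drop all off-diagonal coherences."""
    return np.diag(np.diag(rho))

rho_out = fully_decohere(ttn_node(encode(0.3), encode(0.8), n_anc=2))
print(rho_out.shape)  # (8, 8): virtual bond dimension 2**(1+2) = 8 instead of 2
```

Setting n_anc=0 in the same sketch recovers the bond dimension 2 of a standard unitary TTN node, which makes the trade-off discussed in the abstract concrete: ancillas enlarge the bond the node can carry, while fully_decohere models the loss of quantum coherence that the added qubits may incur.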