Paper Title
Do Quantum Circuit Born Machines Generalize?
Paper Authors
Paper Abstract
In recent proposals of quantum circuit models for generative tasks, the discussion of their performance has been limited to their ability to reproduce a known target distribution. For example, expressive model families such as Quantum Circuit Born Machines (QCBMs) have been evaluated almost entirely on their capability to learn a given target distribution with high accuracy. While this aspect may be ideal for some tasks, it limits the scope of a generative model's assessment to its ability to memorize data rather than generalize. As a result, there has been little understanding of a model's generalization performance and of the relation between such capability and the resource requirements, e.g., the circuit depth and the amount of training data. In this work, we leverage a recently proposed generalization evaluation framework to begin addressing this knowledge gap. We first investigate the QCBM's learning process on a cardinality-constrained distribution and observe an increase in generalization performance as the circuit depth increases. In the 12-qubit example presented here, we observe that with as few as 30% of the valid data in the training set, the QCBM exhibits the best generalization performance toward generating unseen and valid data. Lastly, we assess the QCBM's ability to generalize not only to valid samples, but also to high-quality bitstrings distributed according to an adequately re-weighted distribution. We find that the QCBM is able to effectively learn the re-weighted dataset and generate unseen samples of higher quality than those in the training set. To the best of our knowledge, this is the first work in the literature that presents the QCBM's generalization performance as an integral evaluation metric for quantum generative models, and demonstrates the QCBM's ability to generalize to high-quality, desired novel samples.
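For concreteness, the following is a minimal, hypothetical sketch (not the authors' code) of the kind of evaluation the abstract describes. A QCBM samples bitstrings x with probability |<x|U(θ)|0>|^2 under the Born rule, and generalization can be quantified as the fraction of generated samples that satisfy the problem constraint yet never appeared in the training set. The cardinality value of 6, the 30% train split, and the uniform mock sampler standing in for a trained QCBM are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch (not the authors' code): the "unseen and valid" generalization
# metric the abstract describes, for a cardinality-constrained problem where a
# bitstring is valid iff it has a fixed Hamming weight.
from itertools import combinations
import random

N_QUBITS = 12         # matches the 12-qubit example in the abstract
CARDINALITY = 6       # assumed constraint: valid bitstrings have exactly 6 ones
TRAIN_FRACTION = 0.3  # 30% of the valid data, as reported in the abstract

def is_valid(bitstring: str) -> bool:
    """A sample satisfies the cardinality constraint iff it has CARDINALITY ones."""
    return bitstring.count("1") == CARDINALITY

# Enumerate the valid space (C(12, 6) = 924 bitstrings) and hold most of it out.
valid_space = ["".join("1" if i in ones else "0" for i in range(N_QUBITS))
               for ones in combinations(range(N_QUBITS), CARDINALITY)]
random.seed(0)
random.shuffle(valid_space)
train_set = set(valid_space[: int(TRAIN_FRACTION * len(valid_space))])

def generalization_rate(samples: list[str]) -> float:
    """Fraction of generated samples that are valid yet absent from the training set."""
    unseen_valid = sum(1 for s in samples if is_valid(s) and s not in train_set)
    return unseen_valid / len(samples)

# A trained QCBM would supply `samples`; a uniform mock sampler stands in here.
mock_samples = ["".join(random.choice("01") for _ in range(N_QUBITS))
                for _ in range(10_000)]
print(f"unseen-and-valid rate: {generalization_rate(mock_samples):.3f}")
```

Under these assumptions, a model that merely memorizes the training set scores zero on this metric, whereas a model that has learned the constraint itself can score up to the held-out fraction of the valid space.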