Superdense Encoding of Classical Data

My previous article, “130,780-point Quantum Classification,” used a circuit with 20 qubits to map all of that data, and it sparked a Twitter thread that got me thinking about how much I could reduce that qubit count. Honestly, I hadn’t given it much thought with that circuit; I was focused on achieving accurate classification results with a decent-sized real-world dataset. However, it is fair to say that the first encoding that works is not necessarily the optimal encoding strategy.

Introduction

One of the many challenges of quantum computing is mapping classical data to quantum bits, aka “qubits.” In my previous article, the dataset had five features (that is, columns of data) and four classes. The easiest approach was to map each feature of each class to one qubit, resulting in 5 x 4 = 20 qubits. I had already considered mapping two features per qubit, but, again, I kept it simple to verify that the algorithm actually worked.

Keep in mind, by the way, that each qubit represented thousands of rows of classical data. Each vector represented the normalized mean (average) of one column from one class.

The Original Circuit

Although you don’t see 20 data qubits below, you do see 5 data qubits (the top 5) being reset and reused 3 times each. This is because the simulator slows down significantly as you add qubits to your circuit, until it eventually hits a runtime error sometime after two hours (on IBM Q Experience, anyway). Resetting and reusing fewer qubits, however, allows far more computation to be done.
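
For the curious, here is a minimal Qiskit sketch of that reset-and-reuse pattern. The register names and loop structure are illustrative, not the article’s actual circuit:

```python
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister

data = QuantumRegister(5, "data")       # 5 physical qubits standing in for 20 logical ones
ancilla = QuantumRegister(1, "ancilla")
creg = ClassicalRegister(4, "c")
circuit = QuantumCircuit(data, ancilla, creg)

for block in range(4):                  # 4 passes over the same 5 qubits (5 x 4 = 20)
    # ... encode one block of class data onto `data` and run a SWAP Test here ...
    if block < 3:
        circuit.reset(data)             # reset so the qubits can be reused (3 resets each)
```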

[Image: the original circuit, with “set” (reset-and-reuse) and “test” (SWAP Test) subroutine boxes, from IBM Q Experience]

A unique piece of feedback on this circuit, compared to all the others I’ve published, was that the use of OpenQASM subroutines compacts the circuit so much that it doesn’t seem to be doing much of anything, especially not anything “quantum.” And while the “set” boxes are merely resetting and reusing qubits, the “test” boxes are concealing very-quantum SWAP Tests. Consequently, even though subroutines make coding simpler and easier, I will forgo them going forward so that future circuits openly show their component gates.

And, for those who haven’t read my previous article, the reason all circuits with SWAP Tests have to be run on simulators is that Fredkin gates transpile into a significant number of gates. This circuit-depth problem is exacerbated by the limited connectivity of real qubits, which forces additional CNOTs on top of an already significant step count. By the time measurements are taken, decoherence has made the results worthless.
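
You can see the problem for yourself by transpiling a single Fredkin (controlled-SWAP) gate into a typical hardware basis and counting the result; the basis-gate list below is illustrative:

```python
from qiskit import QuantumCircuit, transpile

qc = QuantumCircuit(3)
qc.cswap(0, 1, 2)                       # one Fredkin gate

# Decompose into a representative basis and count the resulting gates.
decomposed = transpile(qc, basis_gates=["cx", "u3"])
print(decomposed.count_ops())           # several CNOTs plus single-qubit rotations
print("depth:", decomposed.depth())     # and this is before routing adds SWAPs
```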

Dimensionality Reduction

Each column of the dataset is a dimension of the data, so we start off with five dimensions. A qubit, however, is not 5-dimensional, so we need some kind of dimensionality reduction strategy to map this dataset to fewer than 20 qubits. Using rotations and quantum state tomography, we can map the data to ten qubits, which is a noticeable reduction. However, we can do even better with classical preprocessing.

This algorithm is very loosely inspired by Principal Component Analysis (PCA). You might not see any similarity at all, but thinking about how to reduce a dataset to two dimensions helped me further reduce five-dimensional data to only one dimension.

Double Amplitude Encoding

In the video “Quantum Machine Learning - 24 - Encoding Classical Information,” Dr. Wittek describes a variation of my encoding strategy. I don’t actually encode amplitudes, however, because I’m not measuring the encoded qubits. Furthermore, amplitudes are less sensitive near the tops and bottoms of Bloch spheres and more sensitive near the middles, which would seem to skew the results of the SWAP Tests.

Instead, I use fractions of pi. I want the amount of each rotation to be consistent, regardless of where we are on the Bloch sphere.

Algorithm

I am admittedly not a mathematician, so there may be a shortcut through the dimensionality reduction portion of this algorithm. As my last article, among others, demonstrated, I focus on getting my algorithms to work; if they work, I publish what I have and then work on optimization in subsequent projects.

  1. For each class, normalize the mean (average) of each feature (column) using the formula (( mean - min ) / ( max - min )), where max and min are the global maximum and minimum of that column
  2. For each class, sum the normalized means from step 1
  3. Normalize the sums from step 2 using the formula from step 1; the maximum and minimum are once again global, taken from the entire dataset, so that any test value we might map stays within the range of 0 to 1
  4. Multiply each of the normalized sums by pi (a NumPy sketch of all four steps follows this list)
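
A minimal NumPy sketch of these four steps, assuming `data` maps each class label to a 2-D array of its rows (columns in feature order); my use of row-level sums for the step 3 extremes is one reading of “from the entire dataset,” not necessarily the author’s exact procedure:

```python
import numpy as np

def class_angles(data):
    """Collapse each class of 5-feature rows to one rotation angle in [0, pi]."""
    all_rows = np.vstack(list(data.values()))
    lo, hi = all_rows.min(axis=0), all_rows.max(axis=0)
    norm = lambda x: (x - lo) / (hi - lo)                 # step 1 formula, per column

    # Steps 1-2: normalize each class's column means, then sum them to one number.
    sums = {c: norm(rows.mean(axis=0)).sum() for c, rows in data.items()}

    # Step 3: normalize the sums against global extremes, so any test row we
    # later map also lands within [0, 1].
    row_sums = norm(all_rows).sum(axis=1)
    lo_s, hi_s = row_sums.min(), row_sums.max()

    # Step 4: multiply by pi to turn each value into a rotation angle.
    return {c: np.pi * (s - lo_s) / (hi_s - lo_s) for c, s in sums.items()}
```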

The end result of this algorithm is that each class is represented by a single value, ranging from 0% to 100% of the value of pi.

Problem

With four classes, we can use U3 gates to map two classes per qubit. This is the circuit shown at the very top of this article. Each qubit has a 0-to-pi rotation around the y axis and a 0-to-pi rotation around the z axis.
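
In Qiskit, that mapping might look like the sketch below, where `angles` holds the four per-class values from the algorithm above (the values shown are made up, and `qc.u` is the current name for the U3 gate, taking the polar rotation first and the z rotation second):

```python
from qiskit import QuantumCircuit

angles = [0.3, 1.1, 2.0, 2.7]        # illustrative per-class angles in [0, pi]

qc = QuantumCircuit(2)
qc.u(angles[0], angles[1], 0.0, 0)   # qubit 0: class 0 on theta (y), class 1 on phi (z)
qc.u(angles[2], angles[3], 0.0, 1)   # qubit 1: class 2 on theta, class 3 on phi
```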

The problem is that this data is too compressed to be useful. Or, at least, I haven’t thought of a way to use it yet. On Twitter, I likened it to a ZIP file: you have to extract the contents before they become usable.

However, extracting the contents through quantum state tomography hasn’t been successful; otherwise I would be showing that circuit here. Furthermore, a quantum ZIP file doesn’t make sense, if for nothing else than the added circuit depth. It makes more sense to just start off with the minimum number of useful qubits.

The Best Alternative

For the four classes of data in the given dataset, the minimum number of data qubits needed seems to be four, or one class per data qubit. Instead of using U3 gates, we can use RY gates.

[Image: the revised circuit using RY gates, one class per data qubit, from IBM Q Experience]

The objective of the previous article’s circuit was quantum classification, so here is a revised version of that algorithm. The first four qubits represent the four classes of data, the middle four qubits represent the new data point that we want to classify, and the bottom four qubits are the ancilla qubits for the SWAP Tests.

The test data qubits may seem redundant, but that redundancy is necessary because we are running four different SWAP Tests. We could, in this case, use reset gates on one simulated qubit, but I would rather make it clear that each class comparison requires its own class data qubit, test data qubit, and ancilla qubit.
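
A minimal Qiskit sketch of that twelve-qubit layout, assuming `class_angles` holds the four trained values and `test_angle` the reduced test point (all values illustrative; the simulator import assumes the qiskit-aer package):

```python
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

class_angles = [0.4, 1.2, 2.1, 2.9]   # illustrative per-class angles in [0, pi]
test_angle = 0.5                      # the reduced test data point, same scale

qc = QuantumCircuit(12, 4)
for k in range(4):
    data_q, test_q, anc_q = k, 4 + k, 8 + k
    qc.ry(class_angles[k], data_q)    # class data qubit
    qc.ry(test_angle, test_q)         # test data qubit

    # SWAP Test: H, controlled-SWAP, H, then measure the ancilla.
    qc.h(anc_q)
    qc.cswap(anc_q, data_q, test_q)
    qc.h(anc_q)
    qc.measure(anc_q, k)

sim = AerSimulator()
counts = sim.run(transpile(qc, sim), shots=8192).result().get_counts()
```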

Results

The SWAP Test measures |0> with a probability of 1 when the two quantum states are identical and with a probability of 0.5 when they are maximally opposite (on opposite sides of the Bloch sphere); in general, P(|0>) = 1/2 + 1/2 |<a|b>|^2 for states |a> and |b>. A measurement of 0.9, for example, indicates that the states are relatively close together. Therefore, the class with the highest probability of measuring |0> is the class closest to the test data point.

[Image: measurement results for the four SWAP Tests, from IBM Q Experience]

If you do the classical post-processing, the highest probability of measuring |0> belongs to class 0, which is good, since that’s where I took the test data point from. In the interest of full disclosure, the data, and even the classes, have a lot of overlap, so it is very easy to get incorrect results. However, because it is the data that overlaps, we should see the same incorrect results using scikit-learn or other methods.
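
The post-processing itself is a few lines; this sketch assumes `counts` comes from the circuit above, where classical bit k (the k-th character from the right in each Qiskit bitstring) holds ancilla k:

```python
# Estimate P(ancilla_k = |0>) per class; the nearest class scores highest.
shots = sum(counts.values())
p_zero = [0.0] * 4
for bitstring, n in counts.items():
    for k in range(4):
        if bitstring[-1 - k] == "0":
            p_zero[k] += n / shots

predicted = max(range(4), key=lambda k: p_zero[k])
print(p_zero, "-> closest class:", predicted)
```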

Conclusion

The circuit at the top of this article shows 130,780 data points mapped to only two qubits. And although I don’t currently have an algorithm to work with that level of compression, it’s not necessarily impossible.

Using a little less compression, and mapping the same data to only four qubits, we can still run algorithms such as quantum classification. This is a dramatic reduction from the 20 qubits in my previous implementation.

Future Work

I’m still thinking of ways to use the maximally-compressed data, and I hope to share an epiphany about it in a future article. I also look forward to someday running this algorithm on real hardware. Although I could run it right now on ibmq_16_melbourne, there is no way to optimize the circuit for that device’s qubit connectivity, and the results would be a decoherent (if that’s a real word) mess.

Acknowledgements

Thanks to IBM, as always, for the qubits, whether real or simulated. All images in this article are from IBM Q Experience. Thanks, also, to Quantum Intuition (@explore_quantum) and Quantum Steve (@steve_quantum) for not being impressed with mapping 130,780 data points to 20 qubits.

Source: https://towardsdatascience.com/superdense-encoding-of-classical-data-5a2ef02d09d8
