Quantum Coherence and Quantum Entanglement
My goal here was to build a quantum deep neural network for classification tasks, but all the effort involved in calculating errors, updating weights, training a model, and so forth turned out to be completely unnecessary. The above circuit is much simpler than it may already look, and I am going to fully break it down for you.
Disclaimer
This circuit is intentionally not optimized. Rather, it is intended to be comprehensible. I intend to address optimization as I add complexity to future circuits, which will have their own associated articles.
Background
The origin of this classification task is a very simple neural network that had been written in Python. Long ago, I rewrote this neural network in C to force myself to better understand how it worked. Without NumPy, in particular, I had to write all the functions from scratch (I avoided potentially helpful C libraries as well). Armed with this relatively deep understanding, I selected this same neural network to translate further, from C into OpenQASM.
Registers
This circuit uses four registers. The “a” register consists of two ancilla qubits, each paired up with one qubit from the two-qubit “data” register. The “train” register consists of the training data from the original neural network in Python; the data is mapped to 11 qubits. And, of course, there is a classical register for taking measurements.
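A minimal sketch of how these registers might be declared in OpenQASM 2.0; the register names follow the description above, but the original source is not reproduced here, so treat this as illustrative:

```
OPENQASM 2.0;
include "qelib1.inc";

qreg a[2];      // two ancilla qubits, one per classification (0 and 1)
qreg data[2];   // two test-state qubits, one paired with each ancilla
qreg train[11]; // training data from the original network, mapped to 11 qubits
creg c[2];      // classical register for the ancilla measurements
```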
The reason for the two ancilla qubits and the two data qubits is that the original neural network had only two classifications, represented numerically by 0 and 1. One ancilla-data pair is used to compare the test state to the training data that is classified as 0, and the other ancilla-data pair is used to compare the test state to the training data that is classified as 1.
Initial States
The ancilla qubits are initialized with Hadamard gates, the first operation when performing SWAP Tests, which are used to compare quantum states.
Read more about SWAP Tests:
The data qubits are prepared identically with simple rotations around the y axis. The training data is also mapped with y rotations, except for one qubit which remains in its ground state and one which has a Pauli-X (NOT) gate applied to it.
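A sketch of this preparation step, continuing from the register declarations above. The rotation angles are placeholders of my own rather than the article's actual values, and the choice of which training qubit stays in its ground state or receives the X gate is likewise illustrative:

```
// Hadamards on the ancillas: the opening step of the SWAP Tests
h a[0];
h a[1];

// test state, prepared identically on both data qubits
ry(0.75*pi) data[0];
ry(0.75*pi) data[1];

// training data mapped with y rotations...
ry(0.25*pi) train[0];
ry(0.5*pi) train[1];
// ...except one qubit left in its ground state (no gate on train[2])
// and one qubit flipped with a Pauli-X (NOT) gate
x train[3];
```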
Normalization
The data qubits and training qubits can be mapped with y rotations because the original data contained integer values that had to be normalized between zero and one. If you have values ranging from 0 to 360, then 360 would be normalized to 1, 180 would be normalized to 0.5, 90 would be normalized to 0.25, and so forth. I took the normalized values from my C language implementation and converted them to y-axis rotations.
Calculating Theta
Calculating the angle of rotation around the y axis is normally a matter of trigonometry, but not in this case. I did not want the normalized values to be converted into probabilities of measuring |1> because that would cause states close to |0> and |1> to seem closer together than states near the equator of the Bloch Sphere. For purposes of SWAP Testing, the distance between 0 and 1 has to be the same as the distance between 49 and 50. Therefore, each qubit's rotation around the y axis is merely the classical normalized value multiplied by pi.
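As a worked example with made-up raw values (the qubit indices are arbitrary): 90 on a 0-to-360 scale normalizes to 0.25, so its rotation is 0.25 × pi; 180 normalizes to 0.5, giving 0.5 × pi, the equator of the Bloch Sphere:

```
// raw 90  -> normalized 0.25 -> theta = 0.25 * pi
ry(0.25*pi) train[4];

// raw 180 -> normalized 0.5  -> theta = 0.5 * pi
ry(0.5*pi) train[5];
```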
Controlled-SWAPs
SWAP Tests begin by applying Hadamard gates to the ancilla qubits. These are followed by Fredkin gates, which are controlled-SWAP gates. The ancilla qubits are the control qubits. For additional detail, I refer again to the links I provided earlier.
I went in simple numerical order for viewability. If the training data is classified as 0, the Fredkin gate takes a[0] as its control and compares data[0] to that training qubit. If the training data is classified as 1, the Fredkin gate takes a[1] as its control and compares data[1] to that training qubit. In other words, a[0] and data[0] are used to compare the test state to all the training data that is classified as 0, and a[1] and data[1] are used to compare the test state to all the training data that is classified as 1.
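A sketch of these controlled-SWAPs, assuming for illustration that train[0] holds a sample classified as 0 and train[1] a sample classified as 1. Some versions of qelib1.inc provide a cswap gate directly; if yours does not, each Fredkin gate can be built from cx and ccx as shown:

```
// SWAP Test branch for class 0: a[0] is the control, comparing data[0] to train[0]
// (equivalent to: cswap a[0], data[0], train[0];)
cx  train[0], data[0];
ccx a[0], data[0], train[0];
cx  train[0], data[0];

// SWAP Test branch for class 1: a[1] is the control, comparing data[1] to train[1]
cx  train[1], data[1];
ccx a[1], data[1], train[1];
cx  train[1], data[1];
```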
Finalizing the SWAP Tests
SWAP Tests are finalized by taking x measurements of the ancilla qubits. The x measurements are distinguishable from the usual z measurements by the presence of Hadamard gates that are applied immediately preceding the measurements.
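A sketch of that final step, using the classical register c assumed earlier; the x measurement is realized as a Hadamard followed by a standard z-basis measurement:

```
// closing Hadamards turn the ordinary z measurements into x measurements
h a[0];
h a[1];

measure a[0] -> c[0];
measure a[1] -> c[1];
```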
Measurements
Measuring the ancilla qubits provides the distance between the test data and the training data. You measure |0> with a probability of 1 when states are identical and you measure |0> with a probability of 0.5 when states are maximally different. The a[0] qubit measures the distance between the test data and the training data that is classified as 0, and the a[1] qubit measures the distance between the test data and the training data that is classified as 1.
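The standard SWAP Test relation behind those two endpoints, written for the test state |ψ⟩ and a training state |φ⟩, is:

$$P(|0\rangle) \;=\; \frac{1}{2} + \frac{1}{2}\,\bigl|\langle \psi | \phi \rangle\bigr|^{2}$$

Identical states have an overlap of 1 and therefore P(|0⟩) = 1; orthogonal (maximally different) states have an overlap of 0 and therefore P(|0⟩) = 0.5, matching the description above.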
Classification
For this article, I selected a value for the test data that should result in it being classified as a 1. That is to say, the same value is classified as (probably) a 1 when you run it through the classical network. And, according to the histogram, the ancilla qubit representing the 1 classification did, in fact, have a higher probability of being measured as |0> than the one representing the 0 classification. This means that the test data is closer to the training data that is classified as 1 than it is to the training data that is classified as 0.
Future Work
The original neural network was slightly more complex. It actually used three features to distinguish the two classes, but I only used one of those features here. Therefore, a logical next step would be to perform quantum classification using multiple features. Beyond that, another logical step would be to allow more than just two classes; however, that would require changing the classical model that the circuit is based on. At this stage, it is important to know that the quantum result is aligned with the classical result, especially as the quantum circuit grows in complexity.
Acknowledgment
This circuit was written in OpenQASM using the IBM Q Experience circuit editor, and it ran on the provided 32-qubit simulator.
Translated from: https://medium.com/swlh/quantum-classification-cecbc7831be