Unsupervised Learning, Part 1

FAU LECTURE NOTES ON DEEP LEARNING

These are the lecture notes for FAU’s YouTube Lecture “Deep Learning”. This is a full transcript of the lecture video & matching slides. We hope you enjoy these as much as the videos. Of course, this transcript was created largely automatically with deep learning techniques, and only minor manual modifications were performed. Try it yourself! If you spot mistakes, please let us know!

Navigation

Previous Lecture / Watch this Video / Top Level / Next Lecture

Welcome back to deep learning! So today, we want to talk about unsupervised methods, and in particular we will focus on autoencoders and GANs in the next couple of videos. We will start today with the basics and the motivation, and look into one of the rather historical methods: the restricted Boltzmann machine. We still mention it here because it was important in the development towards unsupervised learning.

Image: CC BY 4.0 from the Deep Learning Lecture.

So, let’s see what I have here for you. The main topic, as I said, is unsupervised learning. Of course, we start with our motivation. You see that the data sets we’ve seen so far were huge: they had up to millions of different training observations, many objects, and in particular only a few modalities. Most of the things we’ve looked at were essentially camera images. There may have been different cameras in use, but typically only one or two modalities within a single dataset. However, this is not generally the case. For example, in medical imaging, you typically have very small data sets, maybe 30 to 100 patients. You have only one complex object, the human body, and many different modalities from MR and X-ray to ultrasound. All of them have a very different appearance, which means that they also have different requirements in terms of their processing. So why is this the case? Well, in Germany, we actually have 65 CT scans per thousand inhabitants. This means that in 2014 alone, we had five million CT scans in Germany. So, there should be plenty of data. Why can’t we use all of this data? Well, these data are, of course, sensitive and they contain patient health information. For example, if you have a CT scan that contains the head, then you can render the surface of the face and you can even use an automatic system to determine the identity of this person. There are also non-obvious cues. For example, the surface of the brain is actually characteristic for a certain person. You can identify persons by the shape of their brain with an accuracy of up to 99 percent. So, you see that this is indeed highly sensitive data. Although you may argue that it is difficult to identify a person from a single slice image, if you share whole volumes, people may be able to identify the person. There are some trends to make data like this available. But still, even if you have the data, you need labels. So, you need experts who look at the data and tell you what kind of disease is present, which anatomical structure is where, and so on. This is also very expensive to obtain.

Image: CC BY 4.0 from the Deep Learning Lecture.

So, it would be great if we had methods that could work with very few annotations or even no annotations. I have some examples here that go in this direction. One trend is weakly supervised learning. Here, you have a label for a related task. The example that we show here is localization from the class label. So let’s say you have images and you have classes like brushing teeth or cutting trees. Then, you can use these plus the associated gradient information, for example using visualization mechanisms, and you can localize the class in that particular image. This is a way to get very cheap labels, for example, for bounding boxes. There are also semi-supervised techniques where you have very little labeled data and you try to apply it to a larger data set. The typical approach here would be something like bootstrapping. You create a weak classifier from a small labeled data set. Then, you apply it to a large data set and you try to estimate which of the data points in that large data set have been classified reliably. Next, you take the reliable ones into a new training set, and with this new training set you start over again, trying to build a new system. Finally, you iterate until you have a better system. A minimal sketch of such a bootstrapping loop is shown below.
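
To make the bootstrapping idea concrete, here is a small self-training sketch in Python. It is only an illustration of the loop described above: the function name, the confidence threshold, and the use of scikit-learn’s LogisticRegression as the “weak classifier” are my own assumptions, not something prescribed by the lecture.

```python
# Illustrative self-training ("bootstrapping") loop for semi-supervised learning.
# All names (self_training, threshold, X_labeled, ...) are placeholders;
# LogisticRegression merely stands in for the weak classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_training(X_labeled, y_labeled, X_unlabeled, n_rounds=5, threshold=0.95):
    X_train, y_train = X_labeled.copy(), y_labeled.copy()
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    for _ in range(n_rounds):
        if len(X_unlabeled) == 0:
            break
        proba = clf.predict_proba(X_unlabeled)
        confident = proba.max(axis=1) >= threshold     # "reliably classified" points
        if not confident.any():
            break
        pseudo_labels = clf.classes_[proba[confident].argmax(axis=1)]
        # move the confident points into the training set and retrain
        X_train = np.vstack([X_train, X_unlabeled[confident]])
        y_train = np.concatenate([y_train, pseudo_labels])
        X_unlabeled = X_unlabeled[~confident]
        clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return clf
```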

Image: CC BY 4.0 from the Deep Learning Lecture.

Of course, there are also unsupervised techniques where you don’t need any labeled data. This will be the main topic of the next couple of videos. So let’s have a look at label-free learning. One typical application here is dimensionality reduction. Here, you have an example where the data lies in a high-dimensional space. We have a 3-D space; actually, we’re just showing you one slice through this 3-D space. You see that the data is rolled up, and we identify similar points by similar color in this image. You can see this 3-D manifold that is often called the Swiss roll. Now, the Swiss roll actually doesn’t need a 3-D representation. So, what you would like to do is automatically unroll it. You see that here on the right-hand side, the dimensionality is reduced, so you only have two dimensions. This has been done automatically using a nonlinear manifold learning or dimensionality reduction technique. With these nonlinear methods, you can break data sets down into a lower dimensionality. This is useful because the lower-dimensional representation is supposed to carry all the information that you need, and you can now use it as a kind of representation.
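
As a small sketch of this kind of unrolling, one could run a nonlinear manifold learning method on the Swiss roll with scikit-learn; the choice of estimator and its parameters here are illustrative assumptions, not the lecture’s specific method.

```python
# Nonlinear dimensionality reduction on the Swiss roll (illustrative sketch).
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

X, color = make_swiss_roll(n_samples=2000, noise=0.05, random_state=0)  # 3-D points
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
X_2d = lle.fit_transform(X)  # shape (2000, 2): the "unrolled" representation
```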

Image: CC BY 4.0 from the Deep Learning Lecture.

What we’ll also see in the next couple of videos is that you can use this, for example, as a network initialization. You already see the first autoencoder structure here. You train such a network with a bottleneck where you have a low-dimensional representation. Later, you take this low-dimensional representation and repurpose it. This means that you essentially remove the right-hand part of the network and replace it with a different one. Here, we use it for classification, and again our example is classifying cats and dogs. So, you can already see that if we are able to do such a dimensionality reduction and preserve the original information in a low-dimensional space, then we potentially have fewer weights to work with to approach a classification task. By the way, this is very similar to what we have already discussed when talking about transfer learning techniques.
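
A minimal PyTorch sketch of this reuse pattern is shown below, assuming made-up layer sizes and a two-class head for the cats-vs-dogs example; it is not the architecture from the slides, just an illustration of swapping the decoder for a classifier.

```python
# Sketch: pretrain a bottleneck autoencoder, then reuse the encoder as the
# initialization for a classifier (layer sizes and the 2-class head are illustrative).
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 32))
decoder = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 784))
autoencoder = nn.Sequential(encoder, decoder)

# Unsupervised pretraining: minimize the reconstruction error, e.g.
#   loss = nn.functional.mse_loss(autoencoder(x), x)

# Afterwards, drop the decoder ("the right-hand part") and attach a new head;
# the resulting classifier can be fine-tuned with a cross-entropy loss on labels.
classifier = nn.Sequential(encoder, nn.Linear(32, 2))
```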

Image: CC BY 4.0 from the Deep Learning Lecture.

You can also use this for clustering, and you have already seen that. We used this technique in the chapter on visualization, where we had this very nice dimensionality reduction, zoomed in, and looked at the different regions.

Image: CC BY 4.0 from the Deep Learning Lecture.

You’ve seen that if you have a good learning method that extracts a good representation, then you can also use it to identify similar images in such a low-dimensional space. Well, this can also be used for generative models. So here, the task is to generate realistic images. For example, you can tackle missing-data problems with this. This then leads into semi-supervised learning, where you can also use it, for example, for augmentation. You can also use it for image-to-image translation, which is also a very cool application. We will later see the so-called CycleGAN, where you can really do a domain translation. You can also use this to simulate possible futures in reinforcement learning. So, we have all kinds of interesting domains where we could apply these unsupervised techniques as well. Here are some examples of data generation: you train with the images on the left-hand side and then you generate the images on the right-hand side. This would be an appealing thing to do. You could generate images that look like real observations.

Image: CC BY 4.0 from the Deep Learning Lecture.

So today, we will talk about the restricted Boltzmann machines. As already indicated, they are historically important. But, honestly, nowadays they are not so commonly used anymore. They have been part of the big breakthroughs that we’ve seen earlier, for example, in Google’s deep dream. So, I think you should know about these techniques.

Dreams of MNIST. Image created using gifify. Source: YouTube

Later, we’ll talk about autoencoders, which are essentially an emerging technology and somewhat similar to the restricted Boltzmann machines. You can use them in a feed-forward network context. You can use them for nonlinear dimensionality reduction and even extend this to generative models like the variational autoencoder, which is also a pretty cool trick. Lastly, we will talk about generative adversarial networks, which are currently probably the most widely used generative models. There are many applications of this very general concept. You can use it in image segmentation, reconstruction, semi-supervised learning, and many more.

Image: CC BY 4.0 from the Deep Learning Lecture.

But let’s first look at the historical perspective. Probably, these historical things like restricted Boltzmann machines are not so important if you encounter an exam with me at some point. Still, I think you should know about this technique. Now, the idea is a very simple one. You start with two sets of nodes: one consists of visible units and the other of hidden units, and they are connected. So, you have the visible units v and they represent the observed data. Then, you have the hidden units that capture the dependencies. They are latent variables and they are supposed to be binary, so they take the values zero and one. Now, what can we do with this bipartite graph?

Image: CC BY 4.0 from the Deep Learning Lecture.

Well, you can see that the restricted Boltzmann machine is based on an energy model with a joint probability function p(v, h). It is defined in terms of an energy function, and this energy function is used inside the probability. So, you have 1/Z, which is a normalization constant, times e to the power of -E(v, h). The energy function E(v, h) that we are defining here is essentially the negative sum of an inner product of a bias b with v, an inner product of another bias c with h, and an inner product of v and h that is weighted with the matrix W. So, you can see that the unknowns here are essentially b, c, and the matrix W. This probability density function is called the Boltzmann distribution. It is closely related to the softmax function. Remember that this is not simply a fully connected layer, because it is not feed-forward. You feed into the restricted Boltzmann machine, you determine h, and from h you can then produce v again. So, the hidden layer models the input layer in a stochastic manner and is trained unsupervised.
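
Written out in the standard RBM notation, which matches this description, these definitions read:

```latex
p(\mathbf{v}, \mathbf{h}) = \frac{1}{Z}\, e^{-E(\mathbf{v}, \mathbf{h})}, \qquad
E(\mathbf{v}, \mathbf{h}) = -\mathbf{b}^{\top}\mathbf{v} - \mathbf{c}^{\top}\mathbf{h} - \mathbf{v}^{\top}\mathbf{W}\mathbf{h}, \qquad
Z = \sum_{\mathbf{v}, \mathbf{h}} e^{-E(\mathbf{v}, \mathbf{h})}
```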

Image: CC BY 4.0 from the Deep Learning Lecture.

So let’s look into some details here. The visible and hidden units form this bipartite graph, as I already mentioned. You could argue that our RBMs are Markov random fields with hidden variables. Then, we want to find W such that our probability is high for low energy states and vice versa. The learning is based on gradient descent on the negative log-likelihood. So, we start with the log-likelihood, and you can see there is a small mistake on this slide: we are missing a log in front of the sum over p(v, h). We already fixed that in the next line, where we have the logarithm of 1/Z and the sum of the exponential functions. Now, we can use the definition of Z and expand it. This allows us to write the multiplication as a second logarithmic term: because it is 1/Z, it is minus the log of the definition of Z, which is the sum over v and h of the exponential function of -E(v, h). Now, if we look at the gradient, you can see that the full derivation is given in [5]. What you essentially get are two sums: one is the sum over h of p(h | v) times the negative partial derivative of the energy function with respect to the parameters, minus the sum over v and h of p(v, h) times the negative partial derivative of the energy function with respect to the parameters. You can interpret these two terms as the expected value under the data and the expected value under the model. Generally, the expected value under the model is intractable, but you can approximate it with the so-called contrastive divergence.
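
In the usual notation (see [5] for the full derivation), this gradient of the log-likelihood of a training example v reads:

```latex
\frac{\partial \log p(\mathbf{v})}{\partial \boldsymbol{\theta}}
= \sum_{\mathbf{h}} p(\mathbf{h} \mid \mathbf{v}) \left(-\frac{\partial E(\mathbf{v}, \mathbf{h})}{\partial \boldsymbol{\theta}}\right)
- \sum_{\mathbf{v}, \mathbf{h}} p(\mathbf{v}, \mathbf{h}) \left(-\frac{\partial E(\mathbf{v}, \mathbf{h})}{\partial \boldsymbol{\theta}}\right)
```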

Image: CC BY 4.0 from the Deep Learning Lecture.

Now, contrastive divergence works the following way: You take any training example as v. Then, you set the binary states of the hidden units by computing the sigmoid function of the weighted sum over the visible units plus the bias. So, this gives you essentially the probabilities of your hidden units. Then, you run k Gibbs sampling steps where you sample the reconstruction v tilde by computing the probabilities p(v subscript j = 1 | h), again as the sigmoid function over the weighted sum of the hidden units plus the bias. So, you use the hidden units that you computed in the second step. You can then use this to sample the reconstruction v tilde, which in turn allows you to resample h tilde. You run this for a couple of steps, and if you did so, you can actually compute the gradient updates. The gradient update for the matrix W is given by η times (v h transpose minus v tilde h tilde transpose). The update for the bias b is given as η times (v minus v tilde), and the update for the bias c is given as η times (h minus h tilde). So this allows you to update the weights and biases. The more iterations of Gibbs sampling you run, the less biased the estimate of the gradients will be. In practice, k is simply chosen as one.
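
A minimal numpy sketch of a single CD-1 update, following the steps above, is given below; the variable names, shapes, and the learning rate eta are my own illustrative choices, not code from the lecture.

```python
# One contrastive-divergence (CD-1) update for an RBM (illustrative sketch).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b, c, eta=0.01, rng=None):
    """v0: one binary training example, W: (n_visible, n_hidden), b, c: biases."""
    rng = rng or np.random.default_rng()
    # positive phase: hidden probabilities and a binary sample given the data
    p_h0 = sigmoid(c + v0 @ W)                 # p(h_i = 1 | v)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # negative phase: one Gibbs step gives the reconstruction v~ and h~
    p_v1 = sigmoid(b + h0 @ W.T)               # p(v_j = 1 | h)
    v1 = (rng.random(p_v1.shape) < p_v1).astype(float)
    p_h1 = sigmoid(c + v1 @ W)
    # updates: data statistics minus reconstruction ("model") statistics
    W += eta * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
    b += eta * (v0 - v1)
    c += eta * (p_h0 - p_h1)
    return W, b, c
```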

Image: CC BY 4.0 from the Deep Learning Lecture.

You can expand on this into a deep belief network. The idea here is that you stack layers on top again. The idea of deep learning is like layers upon layers. So we need to go deeper, and here we have one restricted Boltzmann machine on top of another restricted Boltzmann machine. You can then use this to create really deep networks. One additional trick is to fine-tune, for example, the last layer for a classification task.

Deep belief networks in action. Image created using gifify. Source: YouTube

This is one of the first successful deep architectures, as you see in [9]. It sparked the deep learning renaissance. Nowadays, RBMs are rarely used, so deep belief networks are not that commonly used anymore.

Image: CC BY 4.0 from the Deep Learning Lecture.

So, this is the reason why we will talk about autoencoders next time. In the next couple of videos, we will then look into more sophisticated methods, for example, generative adversarial networks. So, I hope you liked this video, and if you did, I hope to see you in the next one. Goodbye!

If you liked this post, you can find more essays here, more educational material on Machine Learning here, or have a look at our Deep Learning Lecture. I would also appreciate a follow on YouTube, Twitter, Facebook, or LinkedIn in case you want to be informed about more essays, videos, and research in the future. This article is released under the Creative Commons 4.0 Attribution License and can be reprinted and modified if referenced. If you are interested in generating transcripts from video lectures try AutoBlog.

Links

Link — Variational Autoencoders
Link — NIPS 2016 GAN Tutorial of Goodfellow
Link — How to train a GAN? Tips and tricks to make GANs work (careful, not everything is true anymore!)
Link — Ever wondered about how to name your GAN?

Translated from: https://towardsdatascience.com/unsupervised-learning-part-1-c007f0c35669
