TensorFlow API


http://blog.csdn.net/u014114990/article/details/51125776
Multi-channel (e.g. RGB three-channel) convolution:
Convolution is usually introduced on a single-channel image: with 10 convolution kernels you get 10 feature maps. What if the input image has three RGB channels? The number of output feature maps is still the number of kernels.
Each kernel is convolved with every channel, the per-channel results are summed, and the activation function is applied to the sum.
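A minimal sketch (the shapes and random values below are made up) showing that tf.nn.conv2d convolves each input channel with the corresponding slice of the filter and sums the per-channel results into one feature map:

import numpy as np
import tensorflow as tf

x = np.random.rand(1, 5, 5, 3).astype(np.float32)   # one 5x5 RGB image
w = np.random.rand(3, 3, 3, 1).astype(np.float32)   # one 3x3 filter over 3 input channels

full = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding="VALID")
# convolve each channel separately with its slice of the filter, then add the results
per_channel = [tf.nn.conv2d(x[:, :, :, c:c+1], w[:, :, c:c+1, :],
                            strides=[1, 1, 1, 1], padding="VALID") for c in range(3)]
summed = tf.add_n(per_channel)

with tf.Session() as sess:
    a, b = sess.run([full, summed])
    print(np.allclose(a, b))   # True: the per-channel results are summed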


tf.identity

import tensorflow as tf
x = tf.Variable(0.0)
x_plus = tf.assign_add(x, 1)
with tf.control_dependencies([x_plus]):  # y depends on the x_plus op
    y = tf.identity(x)
init = tf.global_variables_initializer()
with tf.Session() as session:
    init.run()
    for i in range(5):
        print(y.eval())  # prints 1.0, 2.0, 3.0, 4.0, 5.0
    print(x.eval())  # 5.0

tf.group

When this op finishes, all ops in inputs have finished. This op has no output.

If you look at the GraphDef, c = tf.group(a, b) produces the same graph as

with tf.control_dependencies([a, b]):
    c = tf.no_op() 

There is no specific order in which the ops will run; TensorFlow tries to execute operations
as soon as it can (i.e. in parallel).
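A small runnable sketch (the two variables below are invented for illustration) showing that running the grouped op runs all of its inputs and returns no value:

import tensorflow as tf

a = tf.Variable(0)
b = tf.Variable(10)
inc_a = tf.assign_add(a, 1)
inc_b = tf.assign_add(b, 1)
update = tf.group(inc_a, inc_b)   # a control op with no output value

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(update))       # None: tf.group has no output
    print(sess.run([a, b]))       # [1, 11]: both increments have run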


tf.nn.softmax_cross_entropy_with_logits

self.loss = tf.nn.softmax_cross_entropy_with_logits(z, self.target)

Suppose z has shape [batchSize, sampleDim]. self.target must have the same shape as z, and each row of target must contain exactly one 1, marking the position of the true label, with all other entries 0.
The function then outputs a tensor of shape (batchSize,); each element of loss is the cross-entropy of one training example.
This function performs both the softmax and the cross-entropy computation.

Example:

with tf.name_scope("output"):
    W = tf.get_variable(
            "W",
            shape=[num_filters_total, num_classes],
            initializer=tf.contrib.layers.xavier_initializer()
        )
    b = tf.Variable(tf.constant(0.1, shape=[num_classes]), name="b")
    l2_loss += tf.nn.l2_loss(W)
    l2_loss += tf.nn.l2_loss(b)
    #tf.nn.xw_plus_b is a convenience wrapper to perform the 
    #Wx + b matrix multiplication
    self.scores = tf.nn.xw_plus_b(self.h_drop, W, b, name="scores")
    self.predictions = tf.argmax(self.scores, 1, name="predictions")

# Calculate mean cross-entropy loss
with tf.name_scope("loss"):
    losses = tf.nn.softmax_cross_entropy_with_logits(self.scores, self.input_y)
    self.loss = tf.reduce_mean(losses) + l2_reg_lambda * l2_loss

self.scores has shape [batch, num_classes]; each element of a row is the (unnormalized) score of the corresponding output class. The values in a row do not sum to 1 until the softmax is applied.
self.predictions has shape [batch,]; it records, for each row, the index of the largest value.
input_y has shape [batch, num_classes], with exactly one 1 in each row.
losses has shape [batch,]; tf.reduce_mean(losses) sums it and divides by its length.
tf.nn.l2_loss(W): Computes half the L2 norm of a tensor without the sqrt: output = sum(W ** 2) / 2, i.e. the sum of the squares of all elements of W divided by 2.
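A standalone sanity check (toy logits and one-hot labels invented here; the named arguments labels=/logits= are the TF 1.x form) confirming that the op returns one loss per example and is equivalent to applying softmax followed by cross-entropy:

import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1],
                      [0.5, 2.5, 0.3]])     # shape [batch=2, num_classes=3]
labels = tf.constant([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])     # one-hot targets

fused = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)
# the same computation done in two explicit steps
manual = -tf.reduce_sum(labels * tf.log(tf.nn.softmax(logits)), axis=1)

with tf.Session() as sess:
    print(sess.run(fused))    # shape (2,): one cross-entropy value per example
    print(sess.run(manual))   # numerically the same values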


tf.argmax

    x = tf.constant([[1,2,3,4],[4,3,2,1]])
    predictions = tf.argmax(x, 0, name="predictions")
    sess = tf.Session()
    result = sess.run(predictions)
    print(result)

Output: [1 1 0 0]
tf.argmax returns, along the specified dimension, the index of the largest value, i.e. the index of the predicted class.

tf.expand_dims(Tensor, dim)

Inserts a new dimension of size 1 at the specified position of a tensor.

import tensorflow as tf
from numpy import *
sess = tf.InteractiveSession()
labels = [1,2,3]
x = tf.expand_dims(labels, 1)
y=sess.run(x)
print(sess.run(x))
print(y.shape)
Output:
[[1]
 [2]
 [3]]
(3, 1)


----------


import tensorflow as tf
from numpy import *
sess = tf.InteractiveSession()
labels = [[1,2,3],[1,2,3]]
x = tf.expand_dims(labels, 0)
y=sess.run(x)
print(sess.run(x))
print(y.shape)
Output:
[[[1 2 3]
  [1 2 3]]]
(1, 2, 3)


----------
import tensorflow as tf
from numpy import *
sess = tf.InteractiveSession()
labels = [[1,2,3],[1,2,3]]
x = tf.expand_dims(labels, 1)
y=sess.run(x)
print(sess.run(x))
print(y.shape)
Output:
[[[1 2 3]]

 [[1 2 3]]]
(2, 1, 3)


----------
import tensorflow as tf
from numpy import *
sess = tf.InteractiveSession()
labels = [[1,2,3],[1,2,3]]
x = tf.expand_dims(labels, -1)
y=sess.run(x)
print(sess.run(x))
print(y.shape)
Output:
[[[1]
  [2]
  [3]]

 [[1]
  [2]
  [3]]]
(2, 3, 1)

tf.matmul

tf.matmul(output, softmax_w)
output has shape [batch_size, size].
softmax_w has shape [size, vocab_size].
The second dimension of output must match the first dimension of softmax_w.
The result has shape [batch_size, vocab_size].
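A quick shape check with made-up sizes (batch_size=4, size=3, vocab_size=5):

import tensorflow as tf

output = tf.ones([4, 3])        # [batch_size, size]
softmax_w = tf.ones([3, 5])     # [size, vocab_size]
logits = tf.matmul(output, softmax_w)

with tf.Session() as sess:
    print(sess.run(tf.shape(logits)))   # [4 5] = [batch_size, vocab_size]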

convolution padding/strides/output_width

Reference: the TensorFlow convolution documentation.
Ignoring channels for the moment, assume that the 4-D input has shape [batch, in_height, in_width, …] and the 4-D filter has shape [filter_height, filter_width, …]. The convolution ops then work as follows:
First, according to the padding scheme chosen ('SAME' or 'VALID'), the output size and the padding pixels are computed.

1. For the 'SAME' padding
the output height and width are computed as:

# Note: with a stride of 1, the height and width of a single-input-channel,
# single-output-channel image are unchanged by the convolution;
# they only change when the stride is greater than 1.
out_height = ceil(float(in_height) / float(strides[1]))
out_width  = ceil(float(in_width) / float(strides[2]))

and the padding on the top and left are computed as:

pad_along_height = ((out_height - 1) * strides[1] +
                    filter_height - in_height)
pad_along_width = ((out_width - 1) * strides[2] +
                   filter_width - in_width)
pad_top = pad_along_height / 2   # padding at the top
pad_left = pad_along_width / 2

Finally, the padding on the top, bottom, left and right are:

pad_top = pad_along_height // 2
pad_bottom = pad_along_height - pad_top
pad_left = pad_along_width // 2
pad_right = pad_along_width - pad_left

Note that the division by 2 means that there might be cases when the padding on both sides (top vs. bottom, left vs. right) is off by one (i.e. if pad_along_height is odd, the bottom gets one more padded row than the top, and likewise the right gets one more padded column than the left). In this case, the bottom and right sides always get the one additional padded pixel. For example, when pad_along_height is 5, we pad 2 pixels at the top and 3 pixels at the bottom.

Note that this is different from existing libraries such as cuDNN and Caffe, which explicitly specify the number of padded pixels and always pad the same number of pixels on both sides.

2. For the 'VALID' padding
No padding is added automatically during the convolution, so any padding that is needed has to be added manually beforehand. The output height and width are computed as:

out_height = ceil(float(in_height - filter_height + 1) / float(strides[1]))
out_width  = ceil(float(in_width - filter_width + 1) / float(strides[2]))

and the padding values are always zero.
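A small sketch (a made-up 7x7 single-channel input, 3x3 filter, stride 2) comparing the formulas above with what tf.nn.conv2d actually produces for both padding schemes:

import math
import numpy as np
import tensorflow as tf

in_height = in_width = 7
filter_height = filter_width = 3
stride = 2

x = np.zeros((1, in_height, in_width, 1), dtype=np.float32)
w = np.zeros((filter_height, filter_width, 1, 1), dtype=np.float32)

same = tf.nn.conv2d(x, w, strides=[1, stride, stride, 1], padding="SAME")
valid = tf.nn.conv2d(x, w, strides=[1, stride, stride, 1], padding="VALID")

print(same.get_shape())    # (1, 4, 4, 1)
print(valid.get_shape())   # (1, 3, 3, 1)
# the formulas give the same numbers
print(math.ceil(float(in_height) / stride))                        # 4 for 'SAME'
print(math.ceil(float(in_height - filter_height + 1) / stride))    # 3 for 'VALID'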

tf.nn.conv2d

There is a good explanation at [What does tf.nn.conv2d do in TensorFlow?](http://stackoverflow.com/questions/34619177/what-does-tf-nn-conv2d-do-in-tensorflow).

filter_shape = [filter_size, embedding_size, 1, num_filters]
W = tf.Variable(tf.truncated_normal(filter_shape, stddev=0.1), name="W")
b = tf.Variable(tf.constant(0.1, shape=[num_filters]), name="b")
conv = tf.nn.conv2d(
                    self.embedded_chars_expanded,
                    W,
                    strides=[1, 1, 1, 1],
                    padding="VALID",
                    name="conv")
#b must have the same length as the last dimension of conv, i.e. the number of filters
h = tf.nn.relu(tf.nn.bias_add(conv, b), name="relu")

The parameters are:
1. self.embedded_chars_expanded is the input to the convolutional network, obtained by passing the sentence through the embedding matrix. Its shape is
[batch, inputHeight, inputWidth, in_channels]

2. W holds the convolution filters; in_channels is the number of input channels and output_channels is the number of filters. W has shape [filter_height, filter_width, in_channels, output_channels]; for text processing this is [filter_size (the number of words the filter covers), embedding_size, in_channels, output_channels].

During the computation W is reshaped to [filter_height * filter_width * in_channels, output_channels]. A patch of filter_height * filter_width * in_channels elements is extracted from the input at each position; the patch is multiplied element-wise with one filter column and summed, giving the value of one output channel, so each patch produces output_channels values in total.

3. strides=[1, 1, 1, 1] is the step size of the sliding window along the four input dimensions. Usually strides[0] = strides[3] = 1, because we do not want to skip examples in the batch or input channels.

4. padding="VALID" means the input is not padded.

The output shape is:
[batch, inputHeight - filter_height + 1, inputWidth - filter_width + 1, output_channels]
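A standalone shape check with assumed toy dimensions (batch=2, a 10x8 input, 1 input channel, 3 filters of shape 5x8), mirroring the VALID output formula above:

import tensorflow as tf

x = tf.zeros([2, 10, 8, 1])   # [batch, inputHeight, inputWidth, in_channels]
W = tf.zeros([5, 8, 1, 3])    # [filter_height, filter_width, in_channels, output_channels]
conv = tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding="VALID")

print(conv.get_shape())       # (2, 6, 1, 3) = [batch, 10-5+1, 8-8+1, output_channels]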

tf.nn.conv1d

tf.nn.conv1d(value, filters, stride, padding, use_cudnn_on_gpu=None, data_format=None, name=None)

Given an input tensor of shape [batch, in_width, in_channels] and a filter / kernel tensor of shape [filter_width, in_channels, out_channels], this op reshapes the arguments and then passes them to conv2d to perform the equivalent convolution operation.

Internally, this op reshapes the input tensors and invokes tf.nn.conv2d. For example, if data is a tensor of shape [batch, in_width, in_channels], it is reshaped to [batch, 1, in_width, in_channels] (i.e. an input of height 1 with in_channels input channels), and the filter is reshaped to [1, filter_width, in_channels, out_channels] (i.e. a filter of height 1 with in_channels input channels). The result is [batch, out_width, out_channels] and is returned to the caller.

In short, conv1d reshapes its input and filter into the forms expected by conv2d and then calls conv2d to do the actual computation; every filter effectively has height 1.
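A minimal sketch (invented sizes: batch 4, width 20, 8 input channels, 16 filters of width 3) showing the input/filter layouts and the resulting shape:

import tensorflow as tf

x = tf.zeros([4, 20, 8])          # [batch, in_width, in_channels]
filters = tf.zeros([3, 8, 16])    # [filter_width, in_channels, out_channels]
y = tf.nn.conv1d(x, filters, stride=1, padding="VALID")

print(y.get_shape())              # (4, 18, 16) = [batch, 20-3+1, out_channels]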


tf.nn.max_pool

pooled = tf.nn.max_pool(
                    h,
                    ksize=[1, sequence_length - filter_size + 1, 1, 1],
                    strides=[1, 1, 1, 1],
                    padding='VALID',
                    name="pool")
pooled_outputs.append(pooled)

The parameters are:
1. h is the input, with shape [batch, height, width, channel].
2. ksize is the size of the max-pooling window along each input dimension; the first and fourth entries are usually 1, the second is the window height and the third is the window width.
3. strides=[1, 1, 1, 1] is the step size of the sliding window along the four input dimensions.
4. padding="VALID" means the input is not padded.

Output shape:
[batch, inputHeight - pool_height + 1, inputWidth - pool_width + 1, channel] (pooling does not change the number of channels)
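A standalone shape check under assumed toy sizes (batch 2, height 7, width 1, 3 channels, pooling window covering the whole height), matching the typical text-CNN pooling above:

import tensorflow as tf

h = tf.zeros([2, 7, 1, 3])        # [batch, height, width, channel]
pooled = tf.nn.max_pool(h, ksize=[1, 7, 1, 1],
                        strides=[1, 1, 1, 1], padding="VALID")

print(pooled.get_shape())         # (2, 1, 1, 3): height collapses to 7-7+1=1, channels unchanged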


tf.stack

The following two ops should be distinguished:
tf.stack(values, axis=0, name='stack') joins tensors along a new axis.
tf.concat(values, axis, name='concat') joins tensors along an existing axis.
Examples:

t1 = [[1, 2, 3], [4, 5, 6]]
t2 = [[7, 8, 9], [10, 11, 12]]
tf.concat([t1, t2], 0)==> [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
tf.concat([t1, t2], 1) ==> [[1, 2, 3, 7, 8, 9], [4, 5, 6, 10, 11, 12]]
tf.stack([t1, t2], 0)  ==> [[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]
tf.stack([t1, t2], 1)  ==> [[[1, 2, 3], [7, 8, 9]], [[4, 5, 6], [10, 11, 12]]]
tf.stack([t1, t2], 2)  ==> [[[1, 7], [2, 8], [3, 9]], [[4, 10], [5, 11], [6, 12]]]

The results above are not very intuitive; viewed in terms of shapes they become clear:

t1 = [[1, 2, 3], [4, 5, 6]]
t2 = [[7, 8, 9], [10, 11, 12]]
tf.concat([t1, t2], 0)  # [2,3] + [2,3] ==> [4, 3]
tf.concat([t1, t2], 1)  # [2,3] + [2,3] ==> [2, 6]
tf.stack([t1, t2], 0)   # [2,3] + [2,3] ==> [2*,2,3]
tf.stack([t1, t2], 1)   # [2,3] + [2,3] ==> [2,2*,3]
tf.stack([t1, t2], 2)   # [2,3] + [2,3] ==> [2,3,2*]

2* marks the newly created axis; if the list had contained 3 tensors, the new axis would have size 3.


tf.concat

self.h_pool = tf.concat(3, pooled_outputs)

pooled_outputs is a list whose elements are 4-D tensors; this op concatenates the list elements along dimension 3 (dimensions are indexed from 0) into a single 4-D tensor.

The signature changed in r1.0: tf.concat(values, axis) concatenates the given tensors along an axis into a single tensor.
t1 = [[1, 2, 3], [4, 5, 6]]
t2 = [[7, 8, 9], [10, 11, 12]]
tf.concat([t1, t2], 0) ==> [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
tf.concat([t1, t2], 1) ==> [[1, 2, 3, 7, 8, 9], [4, 5, 6, 10, 11, 12]]

It can be used to concatenate the forward and backward hidden-state tensors produced at every step by a bidirectional network. For example:

encoder_outputs, encoder_states = \
    tf.nn.bidirectional_dynamic_rnn(
                        cell, cell, encoder_inputs_emb,
                        sequence_length=self.encoder_len, dtype=dtype)

The return values are described as follows:

Bidirectional  return A tuple (outputs, output_states) where:
  outputs: A tuple (output_fw, output_bw) containing the forward and
    the backward rnn output `Tensor`.
    If time_major == False (default),
      output_fw will be a `Tensor` shaped:
      `[batch_size, max_time, cell_fw.output_size]`
      and output_bw will be a `Tensor` shaped:
      `[batch_size, max_time, cell_bw.output_size]`.
    If time_major == True,
      output_fw will be a `Tensor` shaped:
      `[max_time, batch_size, cell_fw.output_size]`
      and output_bw will be a `Tensor` shaped:
      `[max_time, batch_size, cell_bw.output_size]`.
    It returns a tuple instead of a single concatenated `Tensor`, unlike
    in the `bidirectional_rnn`. If the concatenated one is preferred,
    the forward and backward outputs can be concatenated as
    `tf.concat(outputs, 2)`.
  output_states: A tuple (output_state_fw, output_state_bw) containing
    the forward and the backward final states of bidirectional rnn.

encoder_states is the hidden state of the last encoder step; it is a tuple. Concatenating the forward and backward states of the last step produces a tensor of shape [batch, 2*hiddenLen]: the tuple already holds the two state tensors, and they are concatenated along the second dimension:

tf.concat(encoder_states, 1)

The forward and backward hidden states of every step are concatenated with:

tf.concat(encoder_outputs, 2)
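A minimal sketch of both concatenations, assuming GRU cells so that each final state is a single [batch, hidden] tensor (with LSTM cells the states are LSTMStateTuples and each component has to be concatenated separately); all sizes below are invented:

import tensorflow as tf

batch, max_time, emb, hidden = 4, 6, 8, 5
inputs = tf.zeros([batch, max_time, emb])

cell_fw = tf.nn.rnn_cell.GRUCell(hidden)
cell_bw = tf.nn.rnn_cell.GRUCell(hidden)
outputs, states = tf.nn.bidirectional_dynamic_rnn(
    cell_fw, cell_bw, inputs, dtype=tf.float32)

step_concat = tf.concat(outputs, 2)    # [batch, max_time, 2*hidden]
final_concat = tf.concat(states, 1)    # [batch, 2*hidden]

print(step_concat.get_shape())         # (4, 6, 10)
print(final_concat.get_shape())        # (4, 10)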

tf.slice

Signature: tf.slice(inputs, begin, size, name='')
It cuts a slice out of input. begin and size are lists whose lengths equal the number of dimensions of input.
begin = [a, b] means the slice starts at index a along the first dimension and at index b along the second dimension; size gives how many elements to take along each dimension, starting at begin.

import tensorflow as tf
import numpy as np
x = [[1, 2, 3], [4, 5, 6]]

sess = tf.Session()
begin_x = [1, 0]   # the 1 starts the slice at the second row [4,5,6]; the 0 starts it at the element 4 of that row
size_x = [1, 2]    # take 1 row starting from the second row, and within that row take 2 elements starting from 4
out = tf.slice(x, begin_x, size_x)
print(sess.run(out))   # result: [[4 5]]

tf.reshape

self.h_pool_flat = tf.reshape(self.h_pool, [-1, num_filters_total])

Assume self.h_pool is a 4-D tensor of shape [batch, 1, 1, num_filters_total]; this call produces a [batch, num_filters_total] matrix.
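A tiny check with made-up sizes (batch 4, num_filters_total 6); the -1 lets TensorFlow infer the batch dimension:

import tensorflow as tf

h_pool = tf.zeros([4, 1, 1, 6])            # [batch, 1, 1, num_filters_total]
h_pool_flat = tf.reshape(h_pool, [-1, 6])

print(h_pool_flat.get_shape())             # (4, 6) = [batch, num_filters_total]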


tf.nn.dropout

with tf.name_scope("dropout"):
    self.h_drop = tf.nn.dropout(self.h_pool_flat, self.dropout_keep_prob)

Each unit along the last dimension of self.h_pool_flat is kept with probability self.dropout_keep_prob.


tf.train.slice_input_producer(tensor_list, num_epochs=None, shuffle=True, seed=None, capacity=32, name=None)
Reference: http://wuguangbin1230.blog.163.com/blog/static/61529835201722073925205/
Returns:
Produces a slice of each Tensor in tensor_list.
Implemented using a Queue: a QueueRunner for the Queue is added to the current Graph's QUEUE_RUNNER collection.
If a tensor in tensor_list has shape [N, a, b, …, z], then each output tensor will have shape [a, b, …, z].
tensor_list is a list whose elements must all have the same size along the first dimension; each call slices one piece out of every element of the list. It is a queue operation.
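A minimal runnable sketch (the tiny images/labels tensors are invented) showing the queue behaviour; because it is a queue op, queue runners have to be started:

import tensorflow as tf

images = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])   # shape [3, 2]
labels = tf.constant([0, 1, 2])                              # shape [3]

# each output is one slice along the first dimension of every tensor in the list
image, label = tf.train.slice_input_producer([images, labels], shuffle=False)

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    for _ in range(3):
        print(sess.run([image, label]))   # [1. 2.] 0, then [3. 4.] 1, then [5. 6.] 2
    coord.request_stop()
    coord.join(threads)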


tf.train.range_input_producer(limit, num_epochs=None, shuffle=True, seed=None, capacity=32, name=None)
Produces the integers from 0 to limit-1 in a queue.
Returns:

A Queue with the output integers. A QueueRunner for the Queue is added to the current Graph’s QUEUE_RUNNER collection.
It returns a queue whose outputs are integers; a QueueRunner for the queue is added to the current graph's QUEUE_RUNNER collection.
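A small sketch (limit=5 chosen arbitrarily); the returned object is a queue, so values are read with dequeue() after the queue runners are started:

import tensorflow as tf

queue = tf.train.range_input_producer(limit=5, shuffle=False)
index = queue.dequeue()

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    print([sess.run(index) for _ in range(5)])   # [0, 1, 2, 3, 4]
    coord.request_stop()
    coord.join(threads)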

def _calc_final_dist(self, vocab_dists, attn_dists):

      batch_nums = tf.range(0, limit=self._hps.batch_size) # shape (batch_size)
      batch_nums = tf.expand_dims(batch_nums, 1) # shape (batch_size, 1)
      attn_len = tf.shape(self._enc_batch_extend_vocab)[1] # number of states we attend over
      # tf.tile copies a tensor; the second argument gives the number of copies along each dimension
      batch_nums = tf.tile(batch_nums, [1, attn_len]) # shape (batch_size, attn_len)
      # tf.stack packs a list of tensors into a new tensor along the given axis;
      # the result has one more dimension than each tensor in the list.
      # E.g. for a list of three 2x4 tensors: axis=0 gives 3x2x4, axis=1 gives 2x3x4,
      # and axis=2 gives 2x4x3.
      indices = tf.stack( (batch_nums, self._enc_batch_extend_vocab), axis=2) # shape (batch_size, enc_t, 2)
      shape = [self._hps.batch_size, extended_vsize]
      # list length max_dec_steps (batch_size, extended_vsize)
      # scatter the values of copy_dist into a tensor of the given shape at the positions given by indices
      attn_dists_projected = [tf.scatter_nd(indices, copy_dist, shape) for copy_dist in attn_dists]

An example of tf.scatter_nd:

copy_dist=[[1,2,3],[4,5,6]]
indices = [[[0, 0],[0 ,1],[0 ,0]],[[1 ,0],[1 ,1],[1 ,2]]]
shape=[2,5]
indice = tf.scatter_nd(indices, copy_dist, shape)

with tf.Session() as sess: 
  tf.global_variables_initializer().run()
  z = sess.run(indice)
  print(z)

indices has the same leading shape as copy_dist, with one extra inner dimension holding the target coordinates; the elements of copy_dist are scattered into a tensor of the given shape at the positions specified by indices. The structure of indices is:

[[[0 0]    copies copy_dist[0][0] (= 1) to position [0, 0] of the output
  [0 1]    copies copy_dist[0][1] (= 2) to position [0, 1] of the output
  [0 0]]   copies copy_dist[0][2] (= 3) to position [0, 0] of the output; since that
           position already holds a value, the values are added (1 + 3 = 4)

 [[1 0]
  [1 1]
  [1 2]]
]

The output is:

[[4 2 0 0 0]
 [4 5 6 0 0]]

binary_crossentropy

When computing the loss of a VAE there is the following example:
https://wiseodd.github.io/techblog/2016/12/10/variational-autoencoder/

def vae_loss(y_true, y_pred):
    """ Calculate loss = reconstruction loss + KL loss for each data in minibatch """
    # E[log P(X|z)]
    recon = K.sum(K.binary_crossentropy(y_pred, y_true), axis=1)
    # D_KL(Q(z|X) || P(z|X)); calculate in closed form as both dist. are Gaussian
    kl = 0.5 * K.sum(K.exp(log_sigma) + K.square(mu) - 1. - log_sigma, axis=1)

    return recon + kl

Suppose the output layer has 10 units whose values are the parameters of 10 Bernoulli distributions, one per class, and the classes are not mutually exclusive: a sample can have several 1s instead of exactly one. y_true may only take the values 0 and 1, and y_true and y_pred must have the same shape.


tf.multiply

Returns x * y element-wise. Note that this is not matrix multiplication.

x = tf.constant(5.0, shape=[5, 6])
w = tf.constant([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
xw = tf.multiply(x, w)
max_in_rows = tf.reduce_max(xw, 1)

sess = tf.Session()
print(sess.run(xw))
# ==> [[0.0, 5.0, 10.0, 15.0, 20.0, 25.0],
#      [0.0, 5.0, 10.0, 15.0, 20.0, 25.0],
#      [0.0, 5.0, 10.0, 15.0, 20.0, 25.0],
#      [0.0, 5.0, 10.0, 15.0, 20.0, 25.0],
#      [0.0, 5.0, 10.0, 15.0, 20.0, 25.0]]

print(sess.run(max_in_rows))
# ==> [25.0, 25.0, 25.0, 25.0, 25.0]

If both x and y are 2-D matrices, they must have the same shape; the op multiplies the elements at corresponding positions, and the result has the same shape as x (and y).

tf.reduce_max & tf.equal

_actual_aspects_lens = tf.placeholder(tf.float64, [2,2],
		                        name='aspects_lens')
m = tf.reduce_max(_actual_aspects_lens,1,keep_dims=True)
n = tf.equal(_actual_aspects_lens,m)
init = tf.initialize_all_variables()
with tf.Session() as sess:
	sess.run(init)
	x ,y= sess.run([m,n], feed_dict={_actual_aspects_lens:[[1,2],[3,4]]})
	print(x)
	print(y)

Result:

[[ 2.]
 [ 4.]]
[[False  True]
 [False  True]]

tf.contrib.distributions.OneHotCategorical
The categorical distribution is parameterized by the log-probabilities of a set of classes. The difference between OneHotCategorical and Categorical distributions is that OneHotCategorical is a discrete distribution over one-hot bit vectors whereas Categorical is a discrete distribution over positive integers.
That is, when querying the probability of a class, OneHotCategorical takes a one-hot vector as input,
whereas a Categorical distribution takes the class's discrete id, a scalar.

E.g., create a 3-class distribution in which the 3rd class is the most likely to be drawn:

p = [0.1, 0.4, 0.5]
dist = tf.contrib.distributions.OneHotCategorical(probs=p)
dist.prob([0,1,0])  # Shape []
samples = [[0,1,0], [1,0,0],[0,0,1]]
f = dist.prob(samples)  # Shape [3]
init = tf.initialize_all_variables()

with tf.Session() as sess:
	sess.run(init)
	print(sess.run(f))

Output:

[ 0.40000001  0.09999999  0.5       ]

tf.contrib.distributions.RelaxedOneHotCategorical

The RelaxedOneHotCategorical is a distribution over random probability vectors, vectors of positive real values that sum to one, which continuously approximates a OneHotCategorical.

temperature = 0.5
p = [0.1, 0.5, 0.4]
dist = tf.contrib.distributions.RelaxedOneHotCategorical(temperature, probs=p)
f = dist.sample()
g=tf.reduce_sum(f)
init = tf.initialize_all_variables()
with tf.Session() as sess:
	x,y=sess.run([f,g])
	print(x)
	print(y)

Output:

[ 0.09953836  0.8660416   0.03441994]
1.0

cell.zero_state

Used to initialize the initial state of an RNN.

    cell = tf.nn.rnn_cell.BasicRNNCell(state_size)
    init_state = cell.zero_state(batch_size, tf.float32)
    rnn_outputs, final_state = tf.nn.rnn(cell, rnn_inputs, initial_state=init_state)

LSTMStateTuple

An LSTM cell's state has two parts: the cell state c and the hidden state h. This interface is only used with LSTM-type cells.
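A minimal sketch (num_units=128 and batch_size=32 are arbitrary) showing the two components of the tuple:

import tensorflow as tf

cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=128)
state = cell.zero_state(batch_size=32, dtype=tf.float32)   # an LSTMStateTuple(c=..., h=...)

print(type(state).__name__)    # LSTMStateTuple
print(state.c.get_shape())     # (32, 128)  cell state
print(state.h.get_shape())     # (32, 128)  hidden state h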
