Usage of some functions in torch

Contents

torch.rand()/torch.randn()/torch.randn_like()

torch.tensor()/torch.zeros()/torch.zeros_like()/torch.ones()/torch.ones_like()/torch.full()/torch.full_like()

torch.where()

torch.sum/max/min()

torch.argmax()/torch.argmin()

torch.mean()

torch.squeeze/unsqueeze()

torch.scatter_()/torch.scatter()

torch.nn.functional.softmax()

nn.Unfold()/nn.Fold()


torch.rand()/torch.randn()/torch.randn_like()

1. torch.rand(size, names=None, dtype=None, layout=None, device=None, pin_memory=False, requires_grad=False) -> Tensor

Generates values from a uniform distribution on [0, 1).

size: the desired size (shape); can be an integer or a tuple
dtype (optional): the desired data type of the output tensor. Defaults to None
layout (optional): the desired memory layout of the output tensor. Defaults to None
device (optional): the desired device of the output tensor. Defaults to None
pin_memory (optional): whether the output tensor is allocated in pinned (page-locked) memory
requires_grad (optional): whether autograd should record operations on the output tensor for the backward pass. Defaults to False

2. torch.randn(size, names=None, dtype=None, layout=None, device=None, pin_memory=False, requires_grad=False) -> Tensor

Generates values from the standard normal distribution (both positive and negative values appear).

size: the desired size (shape); can be an integer or a tuple

3. torch.randn_like(input_tensor, dtype=None, layout=None, device=None, requires_grad=False) -> Tensor

Generates a random tensor of the same size as input_tensor, also drawn from the standard normal distribution.

4. torch.randint(low=0, high, size, *, generator=None, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

Generates a tensor of integers uniformly distributed in [low, high), with shape defined by size. The default dtype is torch.int64; for another type, specify e.g. dtype=torch.float32 (a short sketch follows the example below).


 

Code example:

import torch

data1 = torch.rand(3, 4)
print("data1:\n", data1)

data2 = torch.randn(3, 4)
print("data2:\n", data2)

data3 = torch.randn_like(data2)
print("data3:\n", data3)

data4 = torch.randint(5, 10, (3, 4))
print("data4:\n", data4)

"""
data1:
 tensor([[0.3252, 0.8275, 0.2902, 0.4311],
        [0.0143, 0.8656, 0.0265, 0.0099],
        [0.6170, 0.2901, 0.9514, 0.6942]])
data2:
 tensor([[ 1.7677e+00, -1.0082e-01,  9.4609e-01, -7.2862e-01],
        [-2.6762e+00, -1.3837e+00,  6.7949e-01, -6.0854e-01],
        [-1.3366e+00, -1.5987e-01, -2.1372e-03,  6.9989e-01]])
data3:
 tensor([[-0.5793, -0.7656, -0.2034,  0.3599],
        [ 0.0625,  1.7307, -0.5347, -1.1164],
        [ 0.3808,  0.0704,  0.5028,  0.4839]])
data4:
 tensor([[5, 8, 7, 6],
        [5, 6, 5, 8],
        [9, 7, 5, 8]])
"""

torch.tensor()/torch.zeros()/torch.zeros_like()/torch.ones()/torch.ones_like()/torch.full()/torch.full_like()

1. torch.tensor(data, dtype=None, device=None, requires_grad=False) -> Tensor

Converts array-like data of other types (lists, NumPy arrays, etc.) into a tensor.

requires_grad: if set to True, gradients will be recorded, as above (demonstrated in a sketch after the example below)

2. torch.zeros(*size, out=None, dtype=None, layout=None, device=None, pin_memory=False, requires_grad=False) -> Tensor

Generates a tensor of zeros with the given size; size can be a tuple or a list.

3. torch.zeros_like(input, memory_format=None, dtype=None, layout=None, device=None, pin_memory=False, requires_grad=False) -> Tensor

Generates a tensor of zeros with the same size as input.

4. torch.ones(*size, out=None, dtype=None, layout=None, device=None, pin_memory=False, requires_grad=False) -> Tensor

Generates a tensor of ones with the given size; size can be a tuple or a list.

5. torch.ones_like(input, memory_format=None, dtype=None, layout=None, device=None, pin_memory=False, requires_grad=False) -> Tensor

Generates a tensor of ones with the same size as input.

6. torch.full(size, fill_value, *, out=None, dtype=None, layout=None, device=None, requires_grad=False) → Tensor

Generates a tensor of the given size with every element set to fill_value.

size: the size of the output tensor
fill_value: the value to fill the output tensor with

7. torch.full_like(input, fill_value, *, dtype=None, layout=None, device=None, requires_grad=False, memory_format=None) → Tensor

Outputs a tensor of the same size as input, with every element set to fill_value.

Code example:

import torch
import numpy as np

data = [[3,3,4,5],[4,4,4,6]]
print("data-type : ", type(data))
data_tensor = torch.tensor(data)
print("data-tensor:\n", data_tensor)

data0 = torch.zeros((3, 4))
print("data0:\n", data0)

data0_like = torch.zeros_like(data0)
print("data0_like:\n", data0_like)

data1 = torch.ones((3,4))
print("data1:\n", data1)

data1_like = torch.ones_like(data1)
print("data1_like:\n", data1_like)

data_full = torch.full((3,4), 5)
print("data_full:\n", data_full)

data_full_like = torch.full_like(data_full, 99)
print("data_full_like:\n", data_full_like)

"""
data-type :  <class 'list'>
data-tensor:
 tensor([[3, 3, 4, 5],
        [4, 4, 4, 6]])
data0:
 tensor([[0., 0., 0., 0.],
        [0., 0., 0., 0.],
        [0., 0., 0., 0.]])
data0_like:
 tensor([[0., 0., 0., 0.],
        [0., 0., 0., 0.],
        [0., 0., 0., 0.]])
data1:
 tensor([[1., 1., 1., 1.],
        [1., 1., 1., 1.],
        [1., 1., 1., 1.]])
data1_like:
 tensor([[1., 1., 1., 1.],
        [1., 1., 1., 1.],
        [1., 1., 1., 1.]])
data_full:
 tensor([[5, 5, 5, 5],
        [5, 5, 5, 5],
        [5, 5, 5, 5]])
data_full_like:
 tensor([[99, 99, 99, 99],
        [99, 99, 99, 99],
        [99, 99, 99, 99]])
"""

torch.where()

torch.where(condition, x, y)

  • condition is a boolean tensor expressing the condition. Where an element of condition is True, the element at the corresponding position of x is selected; where it is False, the element at the corresponding position of y is selected.
  • x and y are two tensors of the same shape, holding the elements to select when the condition is or is not met.

Code example:

import torch

# Define two tensors
x = torch.tensor([1, 2, 3, 4])
y = torch.tensor([10, 20, 30, 40])

# Define a condition tensor
condition = torch.tensor([True, False, True, False])

# Select elements according to the condition
result = torch.where(condition, x, y)

print(result)

# result:tensor([ 1, 20,  3, 40])
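In practice the condition is usually computed from a tensor rather than written by hand; a minimal sketch combining torch.where() with torch.zeros_like() from the previous section:

import torch

x = torch.tensor([-2.0, -1.0, 0.0, 1.0, 2.0])

# keep positive entries, replace the rest with 0 (a ReLU-like selection)
result = torch.where(x > 0, x, torch.zeros_like(x))
print(result)  # tensor([0., 0., 0., 1., 2.])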

torch.sum/max/min()

  • torch.sum(): computes the sum of all elements in a tensor.
  • torch.max(): returns the maximum value in a tensor and its index.
  • torch.min(): returns the minimum value in a tensor and its index.

torch.sum() computes the sum of the elements in a tensor. You can specify the dimension(s) to sum over, or omit dim to sum over the whole tensor.

1. torch.sum(input, dtype=None)
2. torch.sum(input, dim, keepdim=False, dtype=None) → Tensor

input: the input tensor
dim: the dimension(s) to sum over; can be a list (see the sketch after the example below)
keepdim: after summing, the reduced dim has one element and is removed by default; set keepdim=True to keep that dimension

Code example:

import torch

# Define a tensor
tensor = torch.tensor([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])

# Compute the sum of all elements
total_sum = torch.sum(tensor)
sum_1 = torch.sum(tensor, -1)
sum_1_true = torch.sum(tensor, -1, keepdim=True)
sum_2 = torch.sum(tensor, 0)
sum_3 = torch.sum(tensor, 1)

print("Total sum:", total_sum.item())  # 输出总和
print("sum_1:", sum_1)
print("sum_1_true:", sum_1_true)
print("sum_2:", sum_2)
print("sum_3:", sum_3)

"""
Total sum: 36
sum_1: tensor([[ 3,  7],
        [11, 15]])
sum_1_true: tensor([[[ 3],
         [ 7]],

        [[11],
         [15]]])
sum_2: tensor([[ 6,  8],
        [10, 12]])
sum_3: tensor([[ 4,  6],
        [12, 14]])
"""

torch.max/min() returns the maximum/minimum value in a tensor and its index. You can specify the dimension along which to compute the max/min, or omit dim to compute over the whole tensor.

1、torch.max/min(input) → Tensor
2、torch.max/min(input, dim, keepdim=False, *, out=None) -> (Tensor, LongTensor)

input: the input tensor
dim: the dimension along which to compute the max/min
keepdim: whether the reduced dimension (of size 1) is kept in the output; defaults to False

The second form returns a namedtuple (values, indices) holding the max/min values and the indices where they occur.

The essential difference between the two forms: without dim, the maximum of the whole tensor is returned and no indices; with dim, the maximum along that dimension is returned together with its indices.

Code example:

import torch

# Define a tensor
tensor = torch.tensor([[1, 2], [3, 4]])

# Minimum over the whole tensor, and along dim 0 (with indices)
result_1 = torch.min(tensor)
result_1_keep = torch.min(tensor, 0, True)
result_2 = torch.min(tensor, 0)

print("result_1:", result_1)
print("result_1_keep:", result_1_keep)
print("result_2:", result_2)

"""
result_1: tensor(1)
result_1_keep: torch.return_types.min(
values=tensor([[1, 2]]),
indices=tensor([[0, 0]]))
result_2: torch.return_types.min(
values=tensor([1, 2]),
indices=tensor([0, 0]))
"""

torch.argmax()/torch.argmin()

torch.argmax(input) -> Tensor

No matter how many dimensions input has, it is reshaped into a one-dimensional vector, and the index of the maximum value in that vector is returned

torch.argmax(input, dim, keepdim=False) -> Tensor

Returns the indices of the maximum values of the tensor along the given dimension

keepdim: whether to keep the reduced dimension

torch.argmin() works the same way as torch.argmax()

Code example:

import torch

input = torch.randn(3, 4)
print("input:\n", input)
output1 = torch.argmax(input)
print("output1:\n", output1)

output2_0 = torch.argmax(input, dim=0)
print("output2_0:\n", output2_0)

output2_1 = torch.argmax(input, dim=1)
print("output2_1:\n", output2_1)

output3 = torch.argmax(input, dim=1, keepdim=True)
print("output3:\n", output3)

"""
input:
 tensor([[ 0.5610, -1.1299,  1.6538, -1.0580],
        [ 1.0968, -0.5567,  1.8204,  0.6700],
        [-0.5692,  0.3709, -0.1377, -0.5414]])
output1:
 tensor(6)
output2_0:
 tensor([1, 2, 1, 1])
output2_1:
 tensor([2, 2, 1])
output3:
 tensor([[2],
        [2],
        [1]])
"""

torch.mean()

1. torch.mean(Tensor) -> Tensor

    Returns the mean of all elements in the input tensor; the return value is also a tensor.

2. torch.mean(input, dim, keepdim=False, *, out=None) -> Tensor

    Returns the mean of input (a tensor) along the given dimension dim.

    If dim is a tuple, the mean is computed over every dimension in the tuple.

    If keepdim is set to True, the output tensor keeps the same number of dimensions as the input. Defaults to False.

Code example:

import torch

# Create a tensor
x = torch.tensor([[[1.0, 2.0],
                   [3.0, 4.0]],
                  [[5.0, 6.0],
                   [7.0, 8.0]]])

mean_all = torch.mean(x)
mean_tuple = torch.mean(x, dim=(1, 2))
mean_tuple_keepdim = torch.mean(x,dim=(1, 2), keepdim=True)
print("mean_all:", mean_all)
print("mean_tuple:", mean_tuple)
print("mean_tuple_keepdim:", mean_tuple_keepdim)

"""
mean_all: tensor(4.5000)
mean_tuple: tensor([2.5000, 6.5000])
mean_tuple_keepdim: tensor([[[2.5000]],

        [[6.5000]]])
"""

torch.squeeze/unsqueeze()

unsqueeze(dim) inserts a dimension of size 1 at the given position.

squeeze(dim) removes the dimension dim if its size is 1

torch.squeeze(input, dim=None, out=None) 

torch.unsqueeze(input, dim) → Tensor

Code example:

import torch

tensor = torch.randn((2, 1, 2, 1, 1))
print("tensor_size:", tensor.shape)

# Squeezing a dimension whose size is 2 has no effect
print("squeeze dim of size 2:", tensor.squeeze(0).shape)

# Squeezing a dimension whose size is 1 succeeds
print("squeeze dim of size 1:", tensor.squeeze(1).shape)

print("unsqueeze:", tensor.unsqueeze(0).shape)

# tensor_size: torch.Size([2, 1, 2, 1, 1])
# squeeze dim of size 2: torch.Size([2, 1, 2, 1, 1])
# squeeze dim of size 1: torch.Size([2, 2, 1, 1])
# unsqueeze: torch.Size([1, 2, 1, 2, 1, 1])
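When dim is omitted (the dim=None default in the signature above), squeeze() removes every dimension of size 1 at once; a minimal sketch:

import torch

tensor = torch.randn(2, 1, 2, 1, 1)

# with no dim argument, all size-1 dimensions are removed
print(tensor.squeeze().shape)  # torch.Size([2, 2])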

torch.scatter_()/torch.scatter()

See: Pytorch基础 - 8. scatter() / scatter_() 函数_.scatter_()-CSDN博客
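The linked post walks through the semantics in detail. As a minimal sketch of the most common use: self.scatter_(dim, index, value) writes value into self at the positions selected by index along dim, e.g. to build one-hot labels:

import torch

labels = torch.tensor([[2], [0], [1]])   # one class index per row
one_hot = torch.zeros(3, 4)

# dim=1: for each row i, set one_hot[i, labels[i, 0]] = 1.0
one_hot.scatter_(1, labels, 1.0)
print(one_hot)
# tensor([[0., 0., 1., 0.],
#         [1., 0., 0., 0.],
#         [0., 1., 0., 0.]])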

torch.nn.functional.softmax()

F.softmax(x, dim) normalizes the input along the given dim.

x is the input tensor; dim is the dimension along which to normalize

Code example:

import torch
import torch.nn.functional as F

# Create a tensor
x = torch.tensor([[[1.0, 2.0],
                   [3.0, 4.0]],
                  [[5.0, 6.0],
                   [7.0, 8.0]]])
print("x:", x)
print("softmax:", F.softmax(x))
print("softmax_0:", F.softmax(x, 0))
print("softmax_1:", F.softmax(x, 1))

"""
x: tensor([[[1., 2.],
         [3., 4.]],

        [[5., 6.],
         [7., 8.]]])
softmax: tensor([[[0.0180, 0.0180],
         [0.0180, 0.0180]],

        [[0.9820, 0.9820],
         [0.9820, 0.9820]]])
softmax_0: tensor([[[0.0180, 0.0180],
         [0.0180, 0.0180]],

        [[0.9820, 0.9820],
         [0.9820, 0.9820]]])
softmax_1: tensor([[[0.1192, 0.1192],
         [0.8808, 0.8808]],

        [[0.1192, 0.1192],
         [0.8808, 0.8808]]])
"""

nn.Unfold()/nn.Fold()

Reference: Pytroch nn.Unfold() 与 nn.Fold()图码详解_pytorch unfold-CSDN博客

nn.Unfold()

Flattens each sliding window row by row (row vectorization), then transposes into a column vector; a batched form of im2col.

nn.Fold() is the inverse operation of nn.Unfold(). (With the same parameters and non-overlapping sliding windows, the input can be recovered exactly [truly inverse]; with overlapping windows, the input to Unfold cannot be recovered. A non-overlapping round trip is sketched after the example below.)

Note that when the sliding windows overlap, the overlapping parts are summed [hence the multiples]. Also, any part of the image that does not fit into a full window is discarded, and is filled with 0 on recovery.

Code example:

import torch.nn as nn
import torch

# Sliding windows overlap
unfold = nn.Unfold(kernel_size=(2, 3))
input = torch.randn(2, 5, 3, 4)
print(f"input: {input.shape}\n", input)
output = unfold(input)
print(f"output: {output.shape}\n", output)

# With overlapping windows, the overlapping parts are summed,
# so the original cannot be fully recovered; see the reference article for details
fold = nn.Fold(output_size=(3, 4),kernel_size=(2, 3))
res = fold(output)
print(f"res:{res.shape}\n", res)

"""
input: torch.Size([2, 5, 3, 4])
 tensor([[[[ 0.0239,  0.6415,  0.8102,  1.7236],
          [-1.6290, -2.0365,  1.0533,  1.2346],
          [-0.2893, -0.6783, -0.9246, -0.0135]],

         [[ 0.7539,  0.3616,  0.1991,  0.7393],
          [-1.7206, -0.5839, -2.7429, -0.5863],
          [-3.4757,  2.1349,  0.4376,  0.1333]],

         [[ 0.4966,  0.6637, -1.2238, -0.5610],
          [ 0.2959, -0.1720, -0.9315, -1.1274],
          [ 1.1523, -0.5507, -1.5905,  0.4466]],

         [[-0.4267, -1.9897,  0.2291, -1.0678],
          [ 1.7600, -1.0514, -0.5408, -2.6351],
          [-1.1718,  1.4751,  1.4312,  1.4435]],

         [[-0.6652, -1.3365, -0.8197,  2.2232],
          [-0.0697,  0.5550, -0.0673,  1.2622],
          [-1.6710, -0.0534, -0.6028, -0.0198]]],


        [[[-2.7230,  0.2984,  0.6950,  0.1691],
          [ 0.6742,  0.1174,  1.3179,  0.0394],
          [-0.2155, -1.9185,  0.0396,  1.4025]],

         [[ 0.2779,  0.6386, -0.9694,  0.5684],
          [-0.3495,  1.7848,  1.1872,  0.6770],
          [ 0.1883,  0.4069, -0.4572,  0.4487]],

         [[-0.5936, -0.0288,  1.2662,  1.2256],
          [-0.6191, -0.7665,  0.3442,  1.6532],
          [ 0.0925, -0.3383, -0.9031, -1.1401]],

         [[-1.2441, -0.6194, -0.2511, -0.1361],
          [ 0.2182, -0.5965,  0.3446, -0.3277],
          [ 0.5272,  0.4675,  0.9016, -0.1000]],

         [[-0.5134, -1.4385,  1.2815,  0.9284],
          [-1.7550,  2.0624, -0.1498, -0.5115],
          [-1.6451, -0.4464, -0.1084,  0.3052]]]])
output: torch.Size([2, 30, 4])
 tensor([[[ 0.0239,  0.6415, -1.6290, -2.0365],
         [ 0.6415,  0.8102, -2.0365,  1.0533],
         [ 0.8102,  1.7236,  1.0533,  1.2346],
         [-1.6290, -2.0365, -0.2893, -0.6783],
         [-2.0365,  1.0533, -0.6783, -0.9246],
         [ 1.0533,  1.2346, -0.9246, -0.0135],
         [ 0.7539,  0.3616, -1.7206, -0.5839],
         [ 0.3616,  0.1991, -0.5839, -2.7429],
         [ 0.1991,  0.7393, -2.7429, -0.5863],
         [-1.7206, -0.5839, -3.4757,  2.1349],
         [-0.5839, -2.7429,  2.1349,  0.4376],
         [-2.7429, -0.5863,  0.4376,  0.1333],
         [ 0.4966,  0.6637,  0.2959, -0.1720],
         [ 0.6637, -1.2238, -0.1720, -0.9315],
         [-1.2238, -0.5610, -0.9315, -1.1274],
         [ 0.2959, -0.1720,  1.1523, -0.5507],
         [-0.1720, -0.9315, -0.5507, -1.5905],
         [-0.9315, -1.1274, -1.5905,  0.4466],
         [-0.4267, -1.9897,  1.7600, -1.0514],
         [-1.9897,  0.2291, -1.0514, -0.5408],
         [ 0.2291, -1.0678, -0.5408, -2.6351],
         [ 1.7600, -1.0514, -1.1718,  1.4751],
         [-1.0514, -0.5408,  1.4751,  1.4312],
         [-0.5408, -2.6351,  1.4312,  1.4435],
         [-0.6652, -1.3365, -0.0697,  0.5550],
         [-1.3365, -0.8197,  0.5550, -0.0673],
         [-0.8197,  2.2232, -0.0673,  1.2622],
         [-0.0697,  0.5550, -1.6710, -0.0534],
         [ 0.5550, -0.0673, -0.0534, -0.6028],
         [-0.0673,  1.2622, -0.6028, -0.0198]],

        [[-2.7230,  0.2984,  0.6742,  0.1174],
         [ 0.2984,  0.6950,  0.1174,  1.3179],
         [ 0.6950,  0.1691,  1.3179,  0.0394],
         [ 0.6742,  0.1174, -0.2155, -1.9185],
         [ 0.1174,  1.3179, -1.9185,  0.0396],
         [ 1.3179,  0.0394,  0.0396,  1.4025],
         [ 0.2779,  0.6386, -0.3495,  1.7848],
         [ 0.6386, -0.9694,  1.7848,  1.1872],
         [-0.9694,  0.5684,  1.1872,  0.6770],
         [-0.3495,  1.7848,  0.1883,  0.4069],
         [ 1.7848,  1.1872,  0.4069, -0.4572],
         [ 1.1872,  0.6770, -0.4572,  0.4487],
         [-0.5936, -0.0288, -0.6191, -0.7665],
         [-0.0288,  1.2662, -0.7665,  0.3442],
         [ 1.2662,  1.2256,  0.3442,  1.6532],
         [-0.6191, -0.7665,  0.0925, -0.3383],
         [-0.7665,  0.3442, -0.3383, -0.9031],
         [ 0.3442,  1.6532, -0.9031, -1.1401],
         [-1.2441, -0.6194,  0.2182, -0.5965],
         [-0.6194, -0.2511, -0.5965,  0.3446],
         [-0.2511, -0.1361,  0.3446, -0.3277],
         [ 0.2182, -0.5965,  0.5272,  0.4675],
         [-0.5965,  0.3446,  0.4675,  0.9016],
         [ 0.3446, -0.3277,  0.9016, -0.1000],
         [-0.5134, -1.4385, -1.7550,  2.0624],
         [-1.4385,  1.2815,  2.0624, -0.1498],
         [ 1.2815,  0.9284, -0.1498, -0.5115],
         [-1.7550,  2.0624, -1.6451, -0.4464],
         [ 2.0624, -0.1498, -0.4464, -0.1084],
         [-0.1498, -0.5115, -0.1084,  0.3052]]])
res:torch.Size([2, 5, 3, 4])
 tensor([[[[  0.0239,   1.2830,   1.6204,   1.7236],
          [ -3.2580,  -8.1461,   4.2133,   2.4692],
          [ -0.2893,  -1.3565,  -1.8492,  -0.0135]],

         [[  0.7539,   0.7231,   0.3982,   0.7393],
          [ -3.4412,  -2.3357, -10.9717,  -1.1726],
          [ -3.4757,   4.2697,   0.8752,   0.1333]],

         [[  0.4966,   1.3274,  -2.4476,  -0.5610],
          [  0.5917,  -0.6880,  -3.7260,  -2.2548],
          [  1.1523,  -1.1014,  -3.1809,   0.4466]],

         [[ -0.4267,  -3.9794,   0.4583,  -1.0678],
          [  3.5201,  -4.2058,  -2.1630,  -5.2701],
          [ -1.1718,   2.9501,   2.8624,   1.4435]],

         [[ -0.6652,  -2.6730,  -1.6394,   2.2232],
          [ -0.1394,   2.2200,  -0.2691,   2.5244],
          [ -1.6710,  -0.1067,  -1.2056,  -0.0198]]],


        [[[ -2.7230,   0.5968,   1.3900,   0.1691],
          [  1.3485,   0.4695,   5.2716,   0.0788],
          [ -0.2155,  -3.8370,   0.0792,   1.4025]],

         [[  0.2779,   1.2772,  -1.9387,   0.5684],
          [ -0.6990,   7.1394,   4.7488,   1.3541],
          [  0.1883,   0.8137,  -0.9144,   0.4487]],

         [[ -0.5936,  -0.0576,   2.5323,   1.2256],
          [ -1.2381,  -3.0661,   1.3770,   3.3064],
          [  0.0925,  -0.6766,  -1.8062,  -1.1401]],

         [[ -1.2441,  -1.2387,  -0.5022,  -0.1361],
          [  0.4364,  -2.3858,   1.3785,  -0.6554],
          [  0.5272,   0.9350,   1.8032,  -0.1000]],

         [[ -0.5134,  -2.8770,   2.5630,   0.9284],
          [ -3.5101,   8.2497,  -0.5991,  -1.0230],
          [ -1.6451,  -0.8927,  -0.2168,   0.3052]]]])
"""
