I. Tensor Concatenation: cat() and stack()
1. Function prototypes:
torch.cat(tensors, => sequence of tensors
dim = 0, => dimension along which to concatenate
out = None)
torch.stack(tensors, => sequence of tensors
dim = 0, => dimension along which to stack
out = None)
2. Difference:
cat()  : concatenates along an existing dimension
stack(): inserts a new dimension and stacks along it (see the sketch below)
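A minimal sketch of the difference, using two 2x3 tensors chosen for illustration (the shapes in the comments are the expected results):
import torch

a = torch.ones(2, 3)
# cat: the result still has 2 dimensions, dim 0 simply grows
b = torch.cat([a, a], dim=0)     # shape: [4, 3]
# stack: a new dimension is created at dim 0, then the tensors are stacked along it
c = torch.stack([a, a], dim=0)   # shape: [2, 2, 3]
print(b.shape)  # torch.Size([4, 3])
print(c.shape)  # torch.Size([2, 2, 3])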
II. Tensor Splitting: chunk() and split()
1. Function prototypes:
torch.chunk(input, => input tensor
chunks, => number of chunks
dim = 0) => dimension
Function: splits the tensor into chunks pieces of equal size along dimension dim.
Return value: a list of tensors.
Note: if the tensor cannot be split evenly, the last chunk is smaller than the others.
torch.split(input, => input tensor
split_size_or_sections, => when an int, the length of each piece;
when a list, the tensor is split along that dimension according to the list
dim = 0) => dimension
2. Difference:
split() is more flexible than chunk(); choose whichever fits the task at hand (a short example follows).
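A small sketch of both functions on a 2x5 tensor (the shapes in the comments are the expected outputs):
import torch

a = torch.arange(10).reshape(2, 5)
# chunk: 3 chunks along dim=1 -> sizes 2, 2, 1 (the last chunk is smaller)
for t in torch.chunk(a, chunks=3, dim=1):
    print(t.shape)   # [2, 2], [2, 2], [2, 1]
# split with an int: every piece has length 2 (the last piece may be shorter)
for t in torch.split(a, 2, dim=1):
    print(t.shape)   # [2, 2], [2, 2], [2, 1]
# split with a list: explicit sizes that must sum to the length of that dimension
for t in torch.split(a, [1, 4], dim=1):
    print(t.shape)   # [2, 1], [2, 4]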
III. Tensor Indexing: index_select() and masked_select()
1. Function prototypes
torch.index_select(input, => tensor to index
dim, => dimension to index along
index, => indices of the entries to select (a 1-D LongTensor)
out = None)
Return value: a tensor with the same number of dimensions as input
torch.masked_select(input, => tensor to index
mask, => boolean tensor with the same shape as input (or broadcastable to it)
out = None)
Return value: a 1-D tensor
2. Difference
The returned tensors usually differ in dimensionality: index_select() keeps the input's number of dimensions, while masked_select() always flattens the result into a 1-D tensor.
3. Code example
import torch
a = torch.randint(1, 10, [3, 5])
idx = torch.tensor([0, 2], dtype=torch.long)  # indices must be an integer (long) tensor
b = torch.index_select(a, dim=1, index=idx)   # select columns 0 and 2
mask = a.ge(5)                                # boolean mask: True where a >= 5
c = torch.masked_select(a, mask=mask)         # returns the selected elements as a 1-D tensor
print("a = {0} \nshape = {1} \n======".format(a, a.shape))
print("b = {0} \nshape = {1} \n======".format(b, b.shape))
print("c = {0} \nshape = {1} \n======".format(c, c.shape))
Output:
a = tensor([[1, 2, 6, 9, 3],
[9, 6, 9, 7, 1],
[1, 9, 8, 1, 8]])
shape = torch.Size([3, 5])
======
b = tensor([[1, 6],
[9, 9],
[1, 8]])
shape = torch.Size([3, 2])
======
c = tensor([6, 9, 9, 6, 9, 7, 9, 8, 8])
shape = torch.Size([9])
======
IV. Tensor Transformation: reshape(), transpose(), and squeeze()
1. Function prototypes
torch.reshape(input, => tensor to reshape
shape) => shape of the new tensor
Note: when the tensor is contiguous in memory, the new tensor shares its underlying data with input.
torch.transpose(input, => tensor to transform
dim0, => first dimension to swap
dim1) => second dimension to swap
Note: torch.t() transposes a 2-D tensor; for a matrix it is equivalent to torch.transpose(input, 0, 1).
torch.squeeze(input, => tensor to transform
dim = None, => when None, removes all axes of length 1;
when a dimension is given, that axis is removed only if its length is 1.
out = None)
2. Examples
(1) torch.reshape()
import torch
a = torch.randperm(10)
reshape = torch.reshape(a, (2, 5))
print("a = {0}\nres = {1}".format(a, reshape))
a[0] = 999  # modifying a also changes the reshaped tensor, since they share memory
print("a = {0}, memory address = {1}\nres = {2}, memory address = {3}".format(a, id(a), reshape, id(reshape)))
Output:
a = tensor([0, 6, 2, 5, 7, 1, 4, 3, 9, 8])
res = tensor([[0, 6, 2, 5, 7],
[1, 4, 3, 9, 8]])
a = tensor([999, 6, 2, 5, 7, 1, 4, 3, 9, 8]), memory address = 2501396614904
res = tensor([[999, 6, 2, 5, 7],
[ 1, 4, 3, 9, 8]]), memory address = 2501396612744
Note: the two id() values differ because they identify distinct Python tensor objects, but the value 999 appears in both tensors, which confirms that the underlying data is shared.
(2) torch.transpose()
import torch
a = torch.randint(1, 10, (2, 2, 3))
b = torch.transpose(a, dim0=1, dim1=2)
print("a.shape = {0}\nb.shape = {1}".format(a.shape, b.shape))
Output:
a.shape = torch.Size([2, 2, 3])
b.shape = torch.Size([2, 3, 2])
(3) torch.squeeze()
import torch
a = torch.rand((1, 2, 3, 1, 2))
b = torch.squeeze(a)
c = torch.squeeze(a, dim=0)
d = torch.squeeze(a, dim=1)
print("a.shape = {0}\nb.shape = {1}\nc.shape = {2}\nd.shape = {3}".format(a.shape, b.shape, c.shape, d.shape))
Output:
a.shape = torch.Size([1, 2, 3, 1, 2])
b.shape = torch.Size([2, 3, 2])
c.shape = torch.Size([2, 3, 1, 2])
d.shape = torch.Size([1, 2, 3, 1, 2])
V. Mathematical Operations on Tensors
1. Functions:
Addition, subtraction, multiplication, division: add, addcdiv, addcmul, sub, div, mul
Trigonometric functions: abs, acos, cosh, cos, asin, atan, atan2
Logarithmic, exponential, and power functions: log, log10, log2, exp, pow
2. Selected prototypes:
torch.add(input, => first tensor
other, => second tensor
alpha = 1, => scaling factor applied to other
out = None)
Output: output = input + alpha * other
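A short sketch of the alpha factor in torch.add (values chosen for illustration; the comment shows the expected result):
import torch

x = torch.tensor([1., 2., 3.])
y = torch.tensor([10., 10., 10.])
# output = input + alpha * other
z = torch.add(x, y, alpha=0.5)
print(z)   # tensor([6., 7., 8.])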
torch.addcmul(input,
tensor1,
tensor2,
value = 1,
out = None)
Output: output = input + value * tensor1 * tensor2
(The companion function torch.addcdiv computes output = input + value * tensor1 / tensor2.)
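A small sketch contrasting addcmul and addcdiv (the inputs are chosen for illustration; the comments show the expected results):
import torch

t  = torch.zeros(3)
t1 = torch.tensor([1., 2., 3.])
t2 = torch.tensor([4., 5., 6.])
# addcmul: output = input + value * tensor1 * tensor2
print(torch.addcmul(t, t1, t2, value=2))  # tensor([ 8., 20., 36.])
# addcdiv: output = input + value * tensor1 / tensor2
print(torch.addcdiv(t, t1, t2, value=2))  # tensor([0.5000, 0.8000, 1.0000])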