3. PyTorch Core Module torch API Explained with Examples: Indexing, Slicing, Joining, Mutating Ops

0. torch

The torch package contains data structures for multi-dimensional tensors and defines mathematical operations over these tensors.

1. Tensors

👉 1. PyTorch Core Module torch API Explained with Examples: Tensors

2. Creation Ops

👉 2. PyTorch Core Module torch API Explained with Examples: Creation Ops

3. Indexing, Slicing, Joining, Mutating Ops

import torch
import numpy as np
3.1 torch.cat()
torch.cat(tensors, dim=0, *, out=None) → Tensor

cat is short for concatenate; this function joins tensors together.

The dim argument selects the dimension to concatenate along: 0 joins along rows, 1 joins along columns.

Note: since this is tensor concatenation, the tensors involved must be compatible:

e.g. with dim=0 (joining along rows) the tensors must have the same number of columns, and with dim=1 (joining along columns) they must have the same number of rows.

# concatenate along rows; a and b both have 3 columns
a = torch.ones(2, 3)
b = torch.tensor([[1.2, 1.6, 1.8], 
                  [2.4, 2.6, 2.8], 
                  [3, 8, 10]])
c = torch.cat((a, b), 0)
c
tensor([[ 1.0000,  1.0000,  1.0000],
        [ 1.0000,  1.0000,  1.0000],
        [ 1.2000,  1.6000,  1.8000],
        [ 2.4000,  2.6000,  2.8000],
        [ 3.0000,  8.0000, 10.0000]])
# concatenate along columns; a and d both have 2 rows
d = torch.tensor([[1.2, 1.6, 1.8, 2.0], 
                  [2.4, 2.6, 2.8, 3.0]])
e = torch.cat((a, d), 1)
e
tensor([[1.0000, 1.0000, 1.0000, 1.2000, 1.6000, 1.8000, 2.0000],
        [1.0000, 1.0000, 1.0000, 2.4000, 2.6000, 2.8000, 3.0000]])
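The shape requirement above can be checked directly. A small self-contained sketch (the shapes are chosen purely for illustration):

```python
import torch

a = torch.ones(2, 3)
b = torch.ones(3, 3)

# dim=0: every dimension except the concatenated one must match,
# so (2, 3) cat (3, 3) along rows gives (5, 3)
c = torch.cat((a, b), dim=0)
print(c.shape)  # torch.Size([5, 3])

# with mismatched shapes, cat raises a RuntimeError
try:
    torch.cat((a, b), dim=1)  # row counts differ: 2 vs 3
except RuntimeError:
    print("shape mismatch along dim=1")
```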
3.2 torch.chunk()
torch.chunk(input, chunks, dim=0) → List of Tensors

Splits a tensor into the given number of chunks.

If the tensor's size along dim is not divisible by chunks, the last chunk will be smaller.

a = torch.randn(4, 5)
# split into 2 chunks along rows
torch.chunk(a, 2, dim = 0)
(tensor([[-0.5815,  1.3466,  1.8123, -0.2137,  1.0332],
         [ 0.2855, -0.2596, -0.1466,  1.8626, -1.5094]]),
 tensor([[-0.7512,  0.3702,  0.6753,  2.0602,  0.9390],
         [ 1.2926,  0.3564,  0.0266, -1.6133,  0.6239]]))
# split into 3 chunks along columns; 5 is not divisible by 3, so the last chunk has only one column
torch.chunk(a, 3, dim = 1)
(tensor([[-0.5815,  1.3466],
         [ 0.2855, -0.2596],
         [-0.7512,  0.3702],
         [ 1.2926,  0.3564]]),
 tensor([[ 1.8123, -0.2137],
         [-0.1466,  1.8626],
         [ 0.6753,  2.0602],
         [ 0.0266, -1.6133]]),
 tensor([[ 1.0332],
         [-1.5094],
         [ 0.9390],
         [ 0.6239]]))
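One subtlety worth knowing: chunk may also return fewer chunks than requested when the dimension is too small. A quick self-contained sketch:

```python
import torch

t = torch.arange(6)
# with 6 elements and chunks=4, each chunk holds ceil(6/4) = 2 elements,
# so only 3 chunks are actually produced
parts = torch.chunk(t, 4)
print(len(parts))                   # 3
print([p.tolist() for p in parts])  # [[0, 1], [2, 3], [4, 5]]
```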
3.3 torch.dstack()
torch.dstack(tensors, *, out=None) → Tensor

Stacks tensors in sequence depthwise (along the third dimension).

Note: the first two dimensions of the tensors being stacked must match.

# multiple tensors can be stacked at once
a = torch.rand(2, 3, 3)
b = torch.rand(2, 3, 3)
c = torch.rand(2, 3, 3)
d = torch.dstack((a, b, c))
print(d)
print(d.size())
tensor([[[0.6743, 0.6997, 0.5677, 0.9193, 0.4914, 0.4055, 0.0591, 0.0654,
          0.2638],
         [0.1526, 0.2773, 0.7118, 0.5617, 0.3533, 0.0911, 0.4904, 0.1828,
          0.8082],
         [0.7754, 0.7911, 0.2999, 0.6562, 0.2948, 0.9407, 0.8955, 0.2537,
          0.7910]],

        [[0.9034, 0.5289, 0.1748, 0.8016, 0.0661, 0.8339, 0.5822, 0.5961,
          0.8841],
         [0.3712, 0.3345, 0.5839, 0.5781, 0.1134, 0.0874, 0.4147, 0.2677,
          0.4564],
         [0.2621, 0.5943, 0.7056, 0.8602, 0.9650, 0.1913, 0.1075, 0.8469,
          0.9479]]])
torch.Size([2, 3, 9])
# tensors with fewer than three dimensions are first promoted to 3-D, as with torch.atleast_3d()
a = torch.tensor([[1, 2, 3],[2, 4, 6]])
b = torch.tensor([[4, 5, 6],[7, 8, 9]])
print(a.size())
print(b.size())

c = torch.dstack((a, b))
print(c)
print(c.size())
torch.Size([2, 3])
torch.Size([2, 3])
tensor([[[1, 4],
         [2, 5],
         [3, 6]],

        [[2, 7],
         [4, 8],
         [6, 9]]])
torch.Size([2, 3, 2])
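That promotion can be verified: dstack should match concatenating the atleast_3d-promoted inputs along dim 2. A minimal check using the same 2-D tensors as above:

```python
import torch

a = torch.tensor([[1, 2, 3], [2, 4, 6]])
b = torch.tensor([[4, 5, 6], [7, 8, 9]])

# dstack is equivalent to promoting each input with atleast_3d
# and concatenating along dim=2
d1 = torch.dstack((a, b))
d2 = torch.cat((torch.atleast_3d(a), torch.atleast_3d(b)), dim=2)
print(torch.equal(d1, d2))  # True
print(d1.shape)             # torch.Size([2, 3, 2])
```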
3.4 torch.gather()
torch.gather(input, dim, index, *, sparse_grad=False, out=None) → Tensor

Gathers values along the axis dim at the positions given by the index tensor, forming a new tensor.

dim selects the dimension to index along

index gives the concrete index positions

a = torch.randint(0, 30, (2, 3, 5))
a
tensor([[[ 0, 15,  7, 28, 16],
         [12, 22, 23,  4,  4],
         [ 6,  3, 14, 22, 13]],

        [[23,  4, 20, 10,  4],
         [22,  9, 27,  5, 22],
         [24,  3,  9, 11, 15]]])
# for a 3-D tensor of shape (2, 3, 5) with dim = 1, index values may be 0, 1, or 2
index = torch.LongTensor([[[0, 1, 2, 2, 0],
                           [0, 0, 0, 0, 0],
                           [1, 1, 1, 1, 1]],
                          [[1, 0, 2, 1, 2],
                           [0, 0, 0, 0, 0],
                           [2, 2, 2, 2, 2]]])
b = torch.gather(a, 1, index)
b
tensor([[[ 0, 22, 14, 22, 16],
         [ 0, 15,  7, 28, 16],
         [12, 22, 23,  4,  4]],

        [[22,  4,  9,  5, 15],
         [23,  4, 20, 10,  4],
         [24,  3,  9, 11, 15]]])

I still use my usual mental model: for a 3-D tensor [[[]]], dim=0 is the first (outermost) level of brackets, dim=1 the second level, and dim=2 the third (innermost) level.

With dim=1, gathering works within the second level [[ 0, 15, 7, 28, 16], [12, 22, 23, 4, 4], [ 6, 3, 14, 22, 13]]

so the index row [0, 1, 2, 2, 0] yields [ 0, 22, 14, 22, 16].

# for a 3-D tensor of shape (2, 3, 5) with dim = 2, index values may be 0, 1, 2, 3, or 4
index = torch.LongTensor([[[0, 2, 1, 3, 4],
                           [0, 0, 0, 0, 0],
                           [4, 4, 4, 4, 4]],
                          [[4, 0, 3, 1, 2],
                           [0, 0, 0, 0, 0],
                           [2, 2, 2, 2, 2]]])
c = torch.gather(a, 2, index)
c
tensor([[[ 0,  7, 15, 28, 16],
         [12, 12, 12, 12, 12],
         [13, 13, 13, 13, 13]],

        [[ 4, 23, 10,  4, 20],
         [22, 22, 22, 22, 22],
         [ 9,  9,  9,  9,  9]]])

With dim=2, gathering works within the third (innermost) level, e.g. [ 0, 15, 7, 28, 16]

so the index row [0, 2, 1, 3, 4] yields [0, 7, 15, 28, 16].

# for a 3-D tensor of shape (2, 3, 5) with dim = 0, index values may be 0 or 1
index = torch.LongTensor([[[0, 1, 1, 0, 1],
                           [0, 0, 0, 0, 0],
                           [0, 0, 0, 1, 1]],
                          [[1, 0, 0, 1, 1],
                           [0, 0, 0, 0, 0],
                           [1, 1, 1, 1, 1]]])
d = torch.gather(a, 0, index)
d
tensor([[[ 0,  4, 20, 28,  4],
         [12, 22, 23,  4,  4],
         [ 6,  3, 14, 11, 15]],

        [[23, 15,  7, 10,  4],
         [12, 22, 23,  4,  4],
         [24,  3,  9, 11, 15]]])

With dim=0, gathering works at the first (outermost) level, i.e. across the whole tensor a: the upper half of a corresponds to 0 in index, and the lower half to 1.

So the index row [0, 1, 1, 0, 1] yields [0 from the upper half, 4 from the lower half, 20 from the lower half, 28 from the upper half, 4 from the lower half].
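All three cases follow one indexing rule: for a 3-D tensor, out[i][j][k] = input[index[i][j][k]][j][k] when dim=0, input[i][index[i][j][k]][k] when dim=1, and input[i][j][index[i][j][k]] when dim=2. A small self-contained check of the dim=1 case (the tensor values here are arbitrary):

```python
import torch

a = torch.arange(12).reshape(2, 2, 3)
index = torch.tensor([[[1, 0, 1], [0, 1, 0]],
                      [[0, 0, 1], [1, 1, 0]]])
out = torch.gather(a, 1, index)

# verify the rule element by element: out[i][j][k] == a[i][index[i][j][k]][k]
ok = all(out[i, j, k] == a[i, index[i, j, k], k]
         for i in range(2) for j in range(2) for k in range(3))
print(ok)  # True
```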

3.5 torch.hstack()
torch.hstack(tensors, *, out=None) → Tensor

Stacks tensors in sequence horizontally (along the second dimension).

Compare with torch.dstack() in 3.3.

# for 1-D tensors, this is equivalent to appending the second tensor after the first
a = torch.tensor([1, 2, 3])
b = torch.tensor([4, 5, 6])
c = torch.hstack((a, b))
print(c)
print(c.size())
tensor([1, 2, 3, 4, 5, 6])
torch.Size([6])
# for 2-D tensors, stacks horizontally along the second dimension
a = torch.tensor([[1], [2], [3]])
b = torch.tensor([[4], [5], [6]])
print(a.size())
c = torch.hstack((a, b))
print(c)
print(c.size())
torch.Size([3, 1])
tensor([[1, 4],
        [2, 5],
        [3, 6]])
torch.Size([3, 2])
# for tensors with more than two dimensions, also stacks along the second dimension
a = torch.rand(2, 3, 4)
b = torch.rand(2, 3, 4)
print(a)
print(b)
c = torch.hstack((a, b))
print(c)
print(c.size()) # the output size should be (2, 6, 4)
tensor([[[0.9993, 0.3971, 0.3522, 0.6960],
         [0.2972, 0.5581, 0.3707, 0.9967],
         [0.1729, 0.1846, 0.4583, 0.1432]],

        [[0.1392, 0.4144, 0.6231, 0.1775],
         [0.3527, 0.2495, 0.0369, 0.3696],
         [0.0309, 0.1674, 0.9390, 0.6351]]])
tensor([[[0.7380, 0.7130, 0.6342, 0.0902],
         [0.6269, 0.4198, 0.2047, 0.4107],
         [0.7155, 0.6780, 0.1810, 0.7113]],

        [[0.1136, 0.7194, 0.6350, 0.6444],
         [0.6555, 0.4313, 0.1446, 0.3732],
         [0.7456, 0.0401, 0.3066, 0.2646]]])
tensor([[[0.9993, 0.3971, 0.3522, 0.6960],
         [0.2972, 0.5581, 0.3707, 0.9967],
         [0.1729, 0.1846, 0.4583, 0.1432],
         [0.7380, 0.7130, 0.6342, 0.0902],
         [0.6269, 0.4198, 0.2047, 0.4107],
         [0.7155, 0.6780, 0.1810, 0.7113]],

        [[0.1392, 0.4144, 0.6231, 0.1775],
         [0.3527, 0.2495, 0.0369, 0.3696],
         [0.0309, 0.1674, 0.9390, 0.6351],
         [0.1136, 0.7194, 0.6350, 0.6444],
         [0.6555, 0.4313, 0.1446, 0.3732],
         [0.7456, 0.0401, 0.3066, 0.2646]]])
torch.Size([2, 6, 4])
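As the examples suggest, hstack behaves like torch.cat along dim 0 for 1-D inputs and along dim 1 otherwise; a quick self-contained check:

```python
import torch

a = torch.tensor([1, 2, 3])
b = torch.tensor([4, 5, 6])

# for 1-D inputs, hstack concatenates along dim 0 ...
print(torch.equal(torch.hstack((a, b)), torch.cat((a, b), dim=0)))  # True

# ... and for inputs with two or more dims it concatenates along dim 1
c = torch.rand(2, 3, 4)
d = torch.rand(2, 3, 4)
print(torch.equal(torch.hstack((c, d)), torch.cat((c, d), dim=1)))  # True
```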
3.6 torch.index_select()
torch.index_select(input, dim, index, *, out=None) → Tensor

Builds a new tensor by selecting the specified rows or columns (index) from the input along the specified dimension (dim).

The index argument is a 1-D tensor listing the concrete rows or columns to select.

a = torch.rand(3, 4)
print(a)
index = torch.tensor([0, 2]) # 0 selects the first row/column, 2 selects the third row/column
tensor([[0.5075, 0.1276, 0.5497, 0.7082],
        [0.0381, 0.5187, 0.4642, 0.5474],
        [0.6278, 0.9411, 0.7611, 0.8547]])
# dim = 0: select the 1st and 3rd rows
torch.index_select(a, 0, index)
tensor([[0.5075, 0.1276, 0.5497, 0.7082],
        [0.6278, 0.9411, 0.7611, 0.8547]])
# dim = 1: select the 1st and 3rd columns
torch.index_select(a, 1, index)
tensor([[0.5075, 0.5497],
        [0.0381, 0.4642],
        [0.6278, 0.7611]])
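index_select should agree with plain advanced indexing; a minimal self-contained sketch:

```python
import torch

a = torch.arange(12).reshape(3, 4)
index = torch.tensor([0, 2])

# index_select along dim=0 is equivalent to advanced indexing a[index]
r0 = torch.index_select(a, 0, index)
print(torch.equal(r0, a[index]))     # True
# along dim=1 it matches a[:, index]
r1 = torch.index_select(a, 1, index)
print(torch.equal(r1, a[:, index]))  # True
```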
3.7 torch.masked_select()
torch.masked_select(input, mask, *, out=None) → Tensor

Indexes the input tensor according to a boolean mask (a BoolTensor) and returns a new 1-D tensor.

# entries whose mask value is True are kept; those with False are dropped
x = torch.randn(3, 4)
print(x)
mask = x.ge(0.5) # torch.ge() compares each element of x with 0.5: >= 0.5 gives True, otherwise False
print(mask)
torch.masked_select(x, mask)
tensor([[ 0.1202,  1.0083,  1.1095,  0.7824],
        [-0.0164, -1.8309,  1.1025, -0.2821],
        [ 0.2242, -2.3140,  1.6496, -0.5039]])
tensor([[False,  True,  True,  True],
        [False, False,  True, False],
        [False, False,  True, False]])
tensor([1.0083, 1.1095, 0.7824, 1.1025, 1.6496])
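Note that the result is always 1-D regardless of the input's shape, and agrees with boolean indexing x[mask]; a small self-contained check:

```python
import torch

x = torch.tensor([[0.1, 1.0], [2.0, -0.5]])
mask = x.ge(0.5)

# masked_select always returns a 1-D tensor and is
# equivalent to boolean indexing x[mask]
out = torch.masked_select(x, mask)
print(out.dim())                  # 1
print(torch.equal(out, x[mask]))  # True
```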
3.8 torch.movedim()
torch.movedim(input, source, destination) → Tensor

Returns a new tensor whose dimensions are moved from the given original positions to the given target positions in the multi-dimensional input.

Several dimensions can be moved at once.

source specifies the original positions of the dimensions to move; destination specifies their target positions.

# move a single dimension
t = torch.randn(3, 2, 1)
print(t)
s = torch.movedim(t, 1, 0).shape
print(s)
torch.movedim(t, 1, 0)
tensor([[[ 0.4433],
         [-0.5050]],

        [[ 0.4186],
         [-1.3052]],

        [[-0.1917],
         [ 0.1454]]])
torch.Size([2, 3, 1])
tensor([[[ 0.4433],
         [ 0.4186],
         [-0.1917]],

        [[-0.5050],
         [-1.3052],
         [ 0.1454]]])
# move multiple dimensions at once
s = torch.movedim(t, (1, 2), (0, 1)).shape
print(s)
torch.movedim(t, (1, 2), (0, 1))
torch.Size([2, 1, 3])
tensor([[[ 0.4433,  0.4186, -0.1917]],

        [[-0.5050, -1.3052,  0.1454]]])
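Moving a single dimension is the same as the corresponding permute; a quick check with the same (3, 2, 1) shape used above:

```python
import torch

t = torch.randn(3, 2, 1)

# movedim(t, 1, 0) puts dim 1 first and keeps the others in order,
# which is exactly permute(1, 0, 2)
a = torch.movedim(t, 1, 0)
b = t.permute(1, 0, 2)
print(a.shape, torch.equal(a, b))  # torch.Size([2, 3, 1]) True
```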
3.9 torch.narrow()
torch.narrow(input, dim, start, length) → Tensor

Returns a narrowed slice of the input tensor as a new tensor.

dim selects the dimension: 0 for rows, 1 for columns

start is the first row or column to keep; length is how many rows or columns to keep

# (x, 0, 0, 2): along rows, keep 2 rows starting from the 1st row
x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
torch.narrow(x, 0, 0, 2)
tensor([[1, 2, 3],
        [4, 5, 6]])
# (x, 1, 1, 2): along columns, keep 2 columns starting from the 2nd column
torch.narrow(x, 1, 1, 2)
tensor([[2, 3],
        [5, 6],
        [8, 9]])
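Worth knowing: narrow returns a view of the input rather than a copy, so writes through the result reach the original. A minimal sketch:

```python
import torch

x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
v = torch.narrow(x, 0, 0, 2)

# narrow returns a view: writing through it modifies the original tensor
v[0, 0] = 100
print(x[0, 0].item())  # 100
```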
3.10 torch.nonzero()
torch.nonzero(input, *, out=None, as_tuple=False) → LongTensor or tuple of LongTensors

Returns the indices of the non-zero elements of the input tensor.

# the elements at indices 0, 1, 2, 4 are non-zero
torch.nonzero(torch.tensor([1, 1, 1, 0, 1]))
tensor([[0],
        [1],
        [2],
        [4]])
# for multi-dimensional input, indices are returned in [row, column] form
# [0, 0] refers to the non-zero element 0.6
torch.nonzero(torch.tensor([[0.6, 0.0, 0.0, 0.0],
                            [0.0, 0.4, 0.0, 0.0],
                            [0.0, 0.0, 1.2, 0.0],
                            [0.0, 0.0, 0.0, -0.4]]))
tensor([[0, 0],
        [1, 1],
        [2, 2],
        [3, 3]])
# as_tuple defaults to False; when True, row indices and column indices are returned separately for multi-dimensional input
# the returned [0, 1, 2, 3] are the row indices of the non-zero elements, and [0, 1, 3, 3] are the column indices
torch.nonzero(torch.tensor([[0.6, 0.0, 0.0, 0.0],
                            [0.0, 0.4, 0.0, 0.0],
                            [0.0, 0.0, 0.0, 1.0],
                            [0.0, 0.0, 0.0,-0.4]]), as_tuple=True)
(tensor([0, 1, 2, 3]), tensor([0, 1, 3, 3]))
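The as_tuple=True form is handy because its result can index the tensor directly; a small self-contained sketch:

```python
import torch

t = torch.tensor([[0.6, 0.0], [0.0, 1.2]])

# with as_tuple=True, the row/column index tensors can be used
# directly to pull out the non-zero values
rows, cols = torch.nonzero(t, as_tuple=True)
print(t[rows, cols])  # tensor([0.6000, 1.2000])
```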
3.11 torch.reshape()
torch.reshape(input, shape) → Tensor

This one comes up all the time: it returns a tensor with the same data and the same number of elements as the input, but with the specified shape.

a = torch.rand(3, 4)
print(a)
tensor([[0.2954, 0.2388, 0.2867, 0.2606],
        [0.0425, 0.2325, 0.8777, 0.1897],
        [0.0761, 0.9029, 0.9126, 0.0192]])
b = torch.reshape(a, (4, 3))
print(b)
tensor([[0.2954, 0.2388, 0.2867],
        [0.2606, 0.0425, 0.2325],
        [0.8777, 0.1897, 0.0761],
        [0.9029, 0.9126, 0.0192]])
# with -1 as the shape, a flattened 1-D tensor is returned
c = torch.tensor([[[0, 1], [2, 3]], 
                  [[4, 5], [6, 7]]])
torch.reshape(c, (-1,))
tensor([0, 1, 2, 3, 4, 5, 6, 7])
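Two details worth knowing: for a contiguous input, reshape returns a view that shares storage with the input, and -1 can also let a single dimension be inferred from the element count. A minimal sketch:

```python
import torch

a = torch.arange(6)
b = torch.reshape(a, (2, 3))

# for a contiguous input, reshape returns a view sharing the same storage
b[0, 0] = 99
print(a[0].item())  # 99

# -1 lets one dimension be inferred from the total element count
print(torch.reshape(a, (3, -1)).shape)  # torch.Size([3, 2])
```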
3.12 torch.split()
torch.split(tensor, split_size_or_sections, dim=0)

Splits the tensor into chunks. Each chunk is a view of the original tensor.

a = torch.arange(10).reshape(5, 2)
a
tensor([[0, 1],
        [2, 3],
        [4, 5],
        [6, 7],
        [8, 9]])
# splitting by an int: each chunk has (up to) that many rows
torch.split(a, 2)
(tensor([[0, 1],
         [2, 3]]),
 tensor([[4, 5],
         [6, 7]]),
 tensor([[8, 9]]))
# splitting by a list: chunk sizes are given explicitly
torch.split(a, (1, 4))
(tensor([[0, 1]]),
 tensor([[2, 3],
         [4, 5],
         [6, 7],
         [8, 9]]))
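An easy point of confusion: the int passed to split is the size of each chunk, while the int passed to chunk (3.2) is the number of chunks. A side-by-side check on the same tensor:

```python
import torch

a = torch.arange(10).reshape(5, 2)

# split(a, 2): 2 is the SIZE of each piece -> pieces of 2, 2, 1 rows
sizes_split = [p.shape[0] for p in torch.split(a, 2)]
# chunk(a, 2): 2 is the NUMBER of pieces -> pieces of 3, 2 rows
sizes_chunk = [p.shape[0] for p in torch.chunk(a, 2)]
print(sizes_split)  # [2, 2, 1]
print(sizes_chunk)  # [3, 2]
```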
3.13 torch.squeeze()
torch.squeeze(input, dim=None, *, out=None) → Tensor

By default, returns a tensor with all dimensions of size 1 removed; if dim is given, only that dimension is removed (when it has size 1).

The companion function torch.unsqueeze() inserts a dimension of size 1 at a specified position.

x = torch.zeros(2, 1, 2, 1, 2)
x.size()
torch.Size([2, 1, 2, 1, 2])
# without dim
y = torch.squeeze(x)
y.size()
torch.Size([2, 2, 2])
# with dim; if the given dimension does not have size 1, nothing is removed
y = torch.squeeze(x, 1)
y.size()
torch.Size([2, 2, 1, 2])
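squeeze and unsqueeze undo each other when applied at the same position; a quick round-trip check:

```python
import torch

x = torch.zeros(2, 1, 2)
# squeeze removes the size-1 dim; unsqueeze inserts one back at the same spot
y = torch.squeeze(x, 1)    # shape (2, 2)
z = torch.unsqueeze(y, 1)  # shape (2, 1, 2)
print(y.shape, z.shape)
print(torch.equal(x, z))  # True
```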
3.14 torch.stack()
torch.stack(tensors, dim=0, *, out=None) → Tensor

Concatenates a sequence of tensors along a new dimension; all tensors in the sequence must have the same shape.

stack adds a new dimension to the result, and its dim parameter gives the (index) position of that new dimension.

a = torch.rand(1, 2, 3)
print(a, a.size())
b = torch.rand(1, 2, 3)
print(b, b.size())
c = torch.stack([a, b])
print(c, c.size())
# the new dimension can be inserted at a chosen position
d = torch.stack([a, b], 1)
print(d, d.size())
tensor([[[0.8866, 0.2823, 0.1084],
         [0.7682, 0.4529, 0.2383]]]) torch.Size([1, 2, 3])
tensor([[[0.4213, 0.3009, 0.1707],
         [0.1252, 0.5748, 0.4927]]]) torch.Size([1, 2, 3])
tensor([[[[0.8866, 0.2823, 0.1084],
          [0.7682, 0.4529, 0.2383]]],


        [[[0.4213, 0.3009, 0.1707],
          [0.1252, 0.5748, 0.4927]]]]) torch.Size([2, 1, 2, 3])
tensor([[[[0.8866, 0.2823, 0.1084],
          [0.7682, 0.4529, 0.2383]],

         [[0.4213, 0.3009, 0.1707],
          [0.1252, 0.5748, 0.4927]]]]) torch.Size([1, 2, 2, 3])
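Compared with torch.cat (3.1), stack inserts a brand-new dimension; it is equivalent to unsqueezing each input at dim and then concatenating. A minimal check:

```python
import torch

a = torch.rand(2, 3)
b = torch.rand(2, 3)

# stack inserts a new dim; it is equivalent to unsqueeze + cat
s = torch.stack([a, b], dim=0)  # shape (2, 2, 3)
c = torch.cat([a.unsqueeze(0), b.unsqueeze(0)], dim=0)
print(s.shape, torch.equal(s, c))  # torch.Size([2, 2, 3]) True
```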
3.15 torch.t()
torch.t(input) → Tensor

In short, this transposes the input tensor (torch.t() expects an input with at most two dimensions).

0-D and 1-D tensors are returned as-is.

a = torch.tensor([1, 2, 3])
torch.t(a)
tensor([1, 2, 3])
b = torch.rand(2, 3)
print(b, b.size())
c = torch.t(b)
print(c, c.size())
tensor([[0.6441, 0.1929, 0.8621],
        [0.8585, 0.8823, 0.6143]]) torch.Size([2, 3])
tensor([[0.6441, 0.8585],
        [0.1929, 0.8823],
        [0.8621, 0.6143]]) torch.Size([3, 2])
3.16 torch.take()
torch.take(input, index) → Tensor

The input tensor is treated as if it were flattened to 1-D; values are picked at the given index positions and returned as a new tensor.

a = torch.tensor([[1, 2, 3],
                  [4, 5, 6]])
index = torch.tensor([0, 2, 5])
# take the 1st, 3rd, and 6th elements (in flattened order) to form a new tensor
torch.take(a, index)
tensor([1, 3, 6])
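The flattening behavior means take is equivalent to indexing the flattened tensor; a quick self-contained check:

```python
import torch

a = torch.tensor([[1, 2, 3],
                  [4, 5, 6]])
index = torch.tensor([0, 2, 5])

# take treats the input as flattened: equivalent to a.reshape(-1)[index]
print(torch.equal(torch.take(a, index), a.reshape(-1)[index]))  # True
```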
3.17 torch.transpose()
torch.transpose(input, dim0, dim1) → Tensor

Transposes the input tensor over the two given dimensions.

Note: the result shares its underlying storage with the input, so modifying the contents of one modifies the other.

x = torch.rand(2, 3)
print(x, x.size())
y = torch.transpose(x, 0, 1)
print(y, y.size())
tensor([[0.9390, 0.0851, 0.1133],
        [0.0745, 0.1473, 0.1422]]) torch.Size([2, 3])
tensor([[0.9390, 0.0745],
        [0.0851, 0.1473],
        [0.1133, 0.1422]]) torch.Size([3, 2])
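That shared-storage note can be demonstrated directly; a minimal sketch:

```python
import torch

x = torch.tensor([[1., 2., 3.], [4., 5., 6.]])
y = torch.transpose(x, 0, 1)

# the result is a view: modifying one tensor changes the other
y[0, 0] = 100.
print(x[0, 0].item())  # 100.0
```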
3.18 torch.unbind()
torch.unbind(input, dim=0) → seq

Removes a tensor dimension, returning a tuple of the original tensor's slices along it.

torch.unbind(torch.tensor([[1, 2, 3],
                           [4, 5, 6],
                           [7, 8, 9]]))
(tensor([1, 2, 3]), tensor([4, 5, 6]), tensor([7, 8, 9]))
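Each element of the returned tuple is simply a slice along the removed dimension; a quick check:

```python
import torch

t = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

# unbind along dim=0 gives one tensor per row, each equal to t[i]
rows = torch.unbind(t)
print(len(rows))  # 3
print(all(torch.equal(rows[i], t[i]) for i in range(3)))  # True
```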
3.19 torch.unsqueeze()
torch.unsqueeze(input, dim) → Tensor

The counterpart of torch.squeeze() in 3.13: inserts a dimension of size 1 at the specified position.

x = torch.tensor([1, 2, 3, 4, 5])
torch.unsqueeze(x, 0)
tensor([[1, 2, 3, 4, 5]])
torch.unsqueeze(x, 1)
tensor([[1],
        [2],
        [3],
        [4],
        [5]])
3.20 torch.vstack()
torch.vstack(tensors, *, out=None) → Tensor

Stacks tensors in sequence vertically (row-wise).

a = torch.tensor([1, 2, 3])
b = torch.tensor([4, 5, 6])
torch.vstack((a, b))
tensor([[1, 2, 3],
        [4, 5, 6]])
a = torch.tensor([[1], [2], [3]])
b = torch.tensor([[4], [5], [6]])
torch.vstack((a, b))
tensor([[1],
        [2],
        [3],
        [4],
        [5],
        [6]])
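vstack first promotes 1-D inputs to 2-D rows (as with torch.atleast_2d) and then concatenates along dim 0; a minimal check:

```python
import torch

a = torch.tensor([1, 2, 3])
b = torch.tensor([4, 5, 6])

# vstack == cat(atleast_2d(inputs), dim=0)
v = torch.vstack((a, b))
w = torch.cat((torch.atleast_2d(a), torch.atleast_2d(b)), dim=0)
print(torch.equal(v, w))  # True
print(v.shape)            # torch.Size([2, 3])
```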
3.21 torch.where()
torch.where(condition, x, y) → Tensor

Returns a tensor whose elements are selected from x or y depending on condition.

condition is the predicate to test

x supplies the value where the condition holds

y supplies the value where it does not

x = torch.randn(2, 2, dtype = torch.double)
x
tensor([[-0.2385, -0.0491],
        [ 1.5753,  0.0219]], dtype=torch.float64)
# set every entry of the tensor that is not greater than 0 to 0
torch.where(x > 0, x, 0.)
tensor([[0.0000, 0.0000],
        [1.5753, 0.0219]], dtype=torch.float64)
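With broadcasting, where can express elementwise selections compactly; the sketch below (values are made up) uses it as a ReLU and checks the result against clamp:

```python
import torch

x = torch.tensor([[-0.5, 2.0], [1.5, -1.0]])

# where broadcasts condition, x, and y; here it acts like a ReLU
out = torch.where(x > 0, x, torch.zeros_like(x))
print(out)
print(torch.equal(out, x.clamp(min=0)))  # True
```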

Bear in mind the soaring ambitions of youth, when I once vowed to be first-rate in this world.