PyTorch Tensor Dimension Transformations

# view()    reshape a tensor
# reshape() reshape a tensor
# permute() permute the axes
# squeeze()/unsqueeze() remove/add a dimension of size 1
# expand()  expand a tensor
# narrow()  narrow a tensor
# resize_() resize in place
# repeat(), unfold() repeat a tensor
# cat(), stack()     concatenate tensors

 

1. tensor.view(n1, n2, ..., ni)

The total number of elements must stay the same across the transformation. If one entry passed to view() is -1, the size of that dimension is inferred from the total element count and the sizes of the other dimensions; note that at most one entry in view() may be -1. view() also requires the tensor's memory layout to be compatible (e.g. contiguous); see reshape() below for the non-contiguous case.

In convolutional neural networks, view is often used in front of the fully connected layer to flatten a feature tensor:

dst_t = src_t.view(src_t.size(0), -1)

Suppose the input features form a 4-D tensor of shape B*C*H*W, where B is the batch size, C the number of channels, and H and W the height and width. Before the features are fed into a fully connected layer, .view converts them into a 2-D tensor of shape B*(C*H*W): the batch dimension is kept, and each feature map is flattened into a one-dimensional vector, as in the sketch below.
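A minimal sketch of this pattern inside a model's forward(); TinyNet and its layer sizes are made-up placeholders that assume 32*32 RGB input:

import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)  # B*3*32*32 -> B*8*32*32
        self.fc = nn.Linear(8 * 32 * 32, 10)

    def forward(self, x):
        x = self.conv(x)
        x = x.view(x.size(0), -1)  # flatten to B*(8*32*32); batch dim kept
        return self.fc(x)

out = TinyNet()(torch.rand(4, 3, 32, 32))
print(out.size())  # torch.Size([4, 10])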

# tensor.view
tensor_01 = (torch.rand([2, 3, 4]) * 10).int()
print('\ntensor_01:\n', tensor_01, '\ntensor size:', tensor_01.size())

# Output
# tensor_01:
# tensor([[[1, 5, 2, 7],
#         [2, 0, 7, 1],
#         [0, 6, 7, 9]],
#
#        [[5, 1, 7, 2],
#         [9, 0, 8, 3],
#         [7, 3, 3, 5]]], dtype=torch.int32) 
# tensor size: torch.Size([2, 3, 4])


# Reshape tensor_01 into a 2*3*2*2 tensor (the -1 is inferred as 2)
tensor_02 = tensor_01.view([2, 3, -1, 2])
print('\ntensor_02:\n', tensor_02, '\ntensor size:', tensor_02.size())

# Output
# tensor_02:
# tensor([[[[1, 5],
#          [2, 7]],
#
#         [[2, 0],
#          [7, 1]],
#
#         [[0, 6],
#          [7, 9]]],
#
#
#        [[[5, 1],
#          [7, 2]],
#
#         [[9, 0],
#          [8, 3]],
#
#         [[7, 3],
#          [3, 5]]]], dtype=torch.int32) 
# tensor size: torch.Size([2, 3, 2, 2])


# Flatten tensor_01 into a 2*12 tensor
tensor_03 = tensor_01.view([tensor_01.size()[0], -1])
print('\ntensor_03:\n', tensor_03, '\ntensor size:', tensor_03.size())

# Output
# tensor_03:
# tensor([[1, 5, 2, 7, 2, 0, 7, 1, 0, 6, 7, 9],
#        [5, 1, 7, 2, 9, 0, 8, 3, 7, 3, 3, 5]], dtype=torch.int32) 
# tensor size: torch.Size([2, 12])

 

2. tensor.reshape(n1, n2, ..., ni)

Used the same way as view(). The practical difference: view() requires a compatible (e.g. contiguous) memory layout and always returns a view, whereas reshape() also accepts non-contiguous tensors and returns a copy when a view is impossible (see the sketch after the example below).

tensor_04 = tensor_01.reshape([tensor_01.size()[0], -1])
print('\ntensor_04:\n', tensor_04, '\ntensor size:', tensor_04.size())

# Output
# tensor_04:
# tensor([[1, 5, 2, 7, 2, 0, 7, 1, 0, 6, 7, 9],
#        [5, 1, 7, 2, 9, 0, 8, 3, 7, 3, 3, 5]], dtype=torch.int32) 
# tensor size: torch.Size([2, 12])
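A minimal sketch of that difference: after a transpose the tensor is no longer contiguous, so view() raises an error while reshape() quietly copies.

import torch

t = torch.arange(6).reshape(2, 3).t()  # transpose -> non-contiguous
print(t.is_contiguous())               # False
try:
    t.view(6)                          # view() needs a compatible layout
except RuntimeError as err:
    print('view failed:', err)
print(t.reshape(6))                    # copies: tensor([0, 3, 1, 4, 2, 5])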

 

3. tensor.squeeze() and tensor.unsqueeze()

1. tensor.squeeze(): remove dimensions

(1) When squeeze() is called without an argument, every dimension of size 1 is removed; e.g. a 2*1*3*1 tensor is reduced to 2*3. If no dimension has size 1, the shape is left unchanged; squeezing a 2*3*4 tensor returns a 2*3*4 tensor.

(2) squeeze(idx) removes only dimension idx; e.g. squeeze(1) on a 2*1*3*1 tensor yields a 2*3*1 tensor. If dimension idx does not have size 1, the shape is left unchanged.

2. tensor.unsqueeze(idx): add a dimension

Inserts a new dimension of size 1 at position idx, turning an n-dimensional tensor into an (n+1)-dimensional one. For example, a 2*3 tensor becomes 2*1*3 after unsqueeze(1).

# tensor.squeeze/unsqueeze
tensor_01 = torch.arange(1, 19).reshape(1, 2, 1, 9)
print('\ntensor_01:\n', tensor_01, '\ntensor size:', tensor_01.size())

# Output
# tensor_01:
# tensor([[[[ 1,  2,  3,  4,  5,  6,  7,  8,  9]],
#
#         [[10, 11, 12, 13, 14, 15, 16, 17, 18]]]]) 
# tensor size: torch.Size([1, 2, 1, 9])

tensor_02 = tensor_01.squeeze(0)
print('\ntensor_02:\n', tensor_02, '\ntensor size:', tensor_02.size())

# Output
# tensor_02:
# tensor([[[ 1,  2,  3,  4,  5,  6,  7,  8,  9]],
#
#        [[10, 11, 12, 13, 14, 15, 16, 17, 18]]]) 
# tensor size: torch.Size([2, 1, 9])

tensor_03 = tensor_01.squeeze(1)
print('\ntensor_03:\n', tensor_03, '\ntensor size:', tensor_03.size())

# Output
# tensor_03:
# tensor([[[[ 1,  2,  3,  4,  5,  6,  7,  8,  9]],
#
#         [[10, 11, 12, 13, 14, 15, 16, 17, 18]]]]) 
# tensor size: torch.Size([1, 2, 1, 9])

tensor_04 = tensor_01.squeeze()
print('\ntensor_04:\n', tensor_04, '\ntensor size:', tensor_04.size())

# Output
# tensor_04:
# tensor([[ 1,  2,  3,  4,  5,  6,  7,  8,  9],
#        [10, 11, 12, 13, 14, 15, 16, 17, 18]]) 
# tensor size: torch.Size([2, 9])

tensor_05 = tensor_04.view([2, 3, -1]).unsqueeze(2)
print('\ntensor_05:\n', tensor_05, '\ntensor size:', tensor_05.size())

# Output
# tensor_05:
# tensor([[[[ 1,  2,  3]],
#
#         [[ 4,  5,  6]],
#
#         [[ 7,  8,  9]]],
#
#
#        [[[10, 11, 12]],
#
#         [[13, 14, 15]],
#
#         [[16, 17, 18]]]]) 
# tensor size: torch.Size([2, 3, 1, 3])

 

4. tensor.permute()

Permutes the axes of a tensor (a generalized transpose); it is used like numpy's transpose. The numbers passed to permute() are the indices of the dimensions in their desired order. This is a frequent trick in deep learning: a B*C*H*W feature tensor is commonly converted to B*H*W*C, moving the channel dimension last, by calling tensor.permute(0, 2, 3, 1).

Although permute and view/reshape can both produce a tensor of a given shape, their principles are completely different, so keep them apart. After view or reshape the elements keep their original order, while after permute the element order changes because the coordinate axes themselves have been rearranged; the sketch below shows the contrast.
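A minimal sketch contrasting the two on the same 2*3 tensor:

import torch

t = torch.tensor([[1, 2, 3],
                  [4, 5, 6]])
print(t.reshape(3, 2))  # keeps element order: [[1, 2], [3, 4], [5, 6]]
print(t.permute(1, 0))  # swaps the axes:      [[1, 4], [2, 5], [3, 6]]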

# tensor.permute
tensor_01 = (torch.rand([2, 3, 2, 4]) * 10).int()
print('\ntensor_01:\n', tensor_01, '\ntensor size:', tensor_01.size())

# Output
# tensor_01:
#  tensor([[[[3, 4, 6, 2],
#           [1, 9, 8, 0]],
# 
#          [[4, 9, 4, 2],
#           [4, 2, 1, 0]],
# 
#          [[7, 2, 9, 5],
#           [5, 1, 9, 2]]],
# 
# 
#         [[[6, 0, 8, 8],
#           [3, 4, 8, 1]],
# 
#          [[7, 6, 4, 5],
#           [1, 4, 9, 7]],
# 
#          [[5, 7, 9, 8],
#           [6, 5, 2, 4]]]], dtype=torch.int32) 
# tensor size: torch.Size([2, 3, 2, 4])

tensor_02 = tensor_01.permute([0, 2, 3, 1])  # [2, 3, 2, 4] -> [2, 2, 4, 3]
print('\ntensor_02:\n', tensor_02, '\ntensor size:', tensor_02.size())

# Output
# tensor_02:
#  tensor([[[[3, 4, 7],
#           [4, 9, 2],
#           [6, 4, 9],
#           [2, 2, 5]],
#
#          [[1, 4, 5],
#           [9, 2, 1],
#           [8, 1, 9],
#           [0, 0, 2]]],
#
#
#         [[[6, 7, 5],
#           [0, 6, 7],
#           [8, 4, 9],
#           [8, 5, 8]],
#
#          [[3, 1, 6],
#           [4, 4, 5],
#           [8, 9, 2],
#           [1, 7, 4]]]], dtype=torch.int32)
# tensor size: torch.Size([2, 2, 4, 3])

 

5. torch.cat((a, b), dim)

Concatenates tensors along dimension dim; all dimensions other than dim must match.

Suppose a is an h1*w1 2-D tensor and b is an h2*w2 2-D tensor. torch.cat((a, b), 0) concatenates along the first dimension, i.e. stacks the tensors vertically, so w1 and w2 must be equal. torch.cat((a, b), 1) concatenates along the second dimension, i.e. joins them horizontally, so h1 and h2 must be equal.

Suppose a is a c1*h1*w1 3-D tensor and b is a c2*h2*w2 3-D tensor. torch.cat((a, b), 0) concatenates along the first dimension, i.e. the channel dimension, so the remaining dimensions must match: w1 = w2 and h1 = h2. torch.cat((a, b), 1) concatenates along the second dimension (vertically), which requires w1 = w2 and c1 = c2. torch.cat((a, b), 2) concatenates along the third dimension (horizontally), which requires h1 = h2 and c1 = c2.

# torch.cat
tensor_01 = (torch.randn(2, 3) * 10).int()
print('\ntensor_01:\n', tensor_01, '\ntensor size:', tensor_01.size())

# Output
# tensor_01:
#  tensor([[ 1, -8, -2],
#         [ 2, 10,  3]], dtype=torch.int32)
# tensor size: torch.Size([2, 3])


tensor_02 = torch.cat((tensor_01, torch.IntTensor([[0, 0, 0], [0, 0, 0], [0, 0, 0]])), 0)  # concatenate vertically (stack rows)
print('\ntensor_02:\n', tensor_02, '\ntensor size:', tensor_02.size())

# Output
# tensor_02:
#  tensor([[ 1, -8, -2],
#         [ 2, 10,  3],
#         [ 0,  0,  0],
#         [ 0,  0,  0],
#         [ 0,  0,  0]], dtype=torch.int32)
# tensor size: torch.Size([5, 3])


tensor_03 = torch.cat((tensor_01, torch.IntTensor([[0, 0], [0, 0]])), 1)  # concatenate horizontally (append columns)
print('\ntensor_03:\n', tensor_03, '\ntensor size:', tensor_03.size())

# Output
# tensor_03:
#  tensor([[ 1, -8, -2,  0,  0],
#         [ 2, 10,  3,  0,  0]], dtype=torch.int32)
# tensor size: torch.Size([2, 5])
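torch.stack(), listed in the overview at the top, differs from cat() in that it joins tensors along a new dimension; a minimal sketch:

import torch

a = torch.tensor([1, 2, 3])
b = torch.tensor([4, 5, 6])
print(torch.cat((a, b), 0))    # tensor([1, 2, 3, 4, 5, 6]), size [6]
print(torch.stack((a, b), 0))  # tensor([[1, 2, 3], [4, 5, 6]]), size [2, 3]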

 

6. tensor.expand()

Expands a tensor. Taking a 2-D tensor as an example:

If tensor has shape 1*n or n*1, calling tensor.expand(s, n) or tensor.expand(n, s) expands it along the row or column direction respectively. Only dimensions of size 1 can be expanded, and expand() returns a view rather than copying data (demonstrated in the sketch after the example below).

# tensor.expand
tensor_01 = torch.IntTensor([[1, 2, 3]])
print('\ntensor_01:\n', tensor_01, '\ntensor size:', tensor_01.size())

# Output
# tensor_01:
#  tensor([[1, 2, 3]], dtype=torch.int32)
# tensor size: torch.Size([1, 3])


tensor_02 = tensor_01.expand([2, 3])
print('\ntensor_02:\n', tensor_02, '\ntensor size:', tensor_02.size())

# Output
# tensor_02:
#  tensor([[1, 2, 3],
#         [1, 2, 3]], dtype=torch.int32)
# tensor size: torch.Size([2, 3])
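A minimal sketch showing that expand() returns a view of the original storage, so an in-place write to the base tensor is visible through the expanded one:

import torch

base = torch.IntTensor([[1, 2, 3]])
exp = base.expand(2, 3)  # no copy: both rows alias the single row of base
base[0, 0] = 9
print(exp)               # tensor([[9, 2, 3], [9, 2, 3]], dtype=torch.int32)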

 

7. tensor.narrow(dim, start, len)

Narrows a tensor: along dimension dim, keeps len elements starting at position start. The result is a view of the original tensor; a slicing equivalent is sketched after the example below.

# tensor.narrow
tensor_01 = torch.IntTensor([[1, 2, 1, 3], [3, 2, 3, 4]])
print('\ntensor_01:\n', tensor_01, '\ntensor size:', tensor_01.size())

# Output
# tensor_01:
#  tensor([[1, 2, 1, 3],
#         [3, 2, 3, 4]], dtype=torch.int32)
# tensor size: torch.Size([2, 4])


tensor_02 = tensor_01.narrow(1, 1, 2)
print('\ntensor_02:\n', tensor_02, '\ntensor size:', tensor_02.size())

# Output
# tensor_02:
#  tensor([[2, 1],
#         [2, 3]], dtype=torch.int32)
# tensor size: torch.Size([2, 2])
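narrow() is equivalent to ordinary slicing along the same dimension, and both return views; a minimal sketch:

import torch

t = torch.IntTensor([[1, 2, 1, 3], [3, 2, 3, 4]])
print(t.narrow(1, 1, 2))  # tensor([[2, 1], [2, 3]], dtype=torch.int32)
print(t[:, 1:3])          # the same result via slicing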

 

8. tensor.resize_()

Resizes the tensor in place (the trailing underscore marks an in-place operation). If the new shape holds fewer elements, the tensor is truncated; if it holds more, the extra elements are uninitialized memory (see the sketch after the example below).

# tensor.resize_
tensor_01 = torch.IntTensor([[1, 2, 1], [3, 2, 3]])
print('\ntensor_01:\n', tensor_01, '\ntensor size:', tensor_01.size())

# Output
# tensor_01:
#  tensor([[1, 2, 1],
#         [3, 2, 3]], dtype=torch.int32)
# tensor size: torch.Size([2, 3])


tensor_02 = tensor_01.resize_(2, 2)
print('\ntensor_02:\n', tensor_02, '\ntensor size:', tensor_02.size())

# Output
# tensor_02:
#  tensor([[1, 2],
#         [1, 3]], dtype=torch.int32)
# tensor size: torch.Size([2, 2])
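A minimal sketch of the enlarging case; the two elements beyond the original six are uninitialized memory, so their values vary from run to run:

import torch

t = torch.IntTensor([[1, 2, 1], [3, 2, 3]])
t.resize_(2, 4)  # in place: the first 6 elements are kept, 2 new ones are uninitialized
print(t.size())  # torch.Size([2, 4])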

 

9. tensor.repeat()

tensor.repeat(a, b) tiles the whole tensor a times along the row direction and b times along the column direction. Unlike expand(), repeat() copies the data; the difference is sketched after the example below.

# tensor.repeat
tensor_01 = torch.IntTensor([[1, 2, 1], [3, 2, 3]])
print('\ntensor_01:\n', tensor_01, '\ntensor size:', tensor_01.size())

# Output
# tensor_01:
#  tensor([[1, 2, 1],
#         [3, 2, 3]], dtype=torch.int32) 
# tensor size: torch.Size([2, 3])


tensor_02 = tensor_01.repeat([2, 3])
print('\ntensor_02:\n', tensor_02, '\ntensor size:', tensor_02.size())

# Output
# tensor_02:
#  tensor([[1, 2, 1, 1, 2, 1, 1, 2, 1],
#         [3, 2, 3, 3, 2, 3, 3, 2, 3],
#         [1, 2, 1, 1, 2, 1, 1, 2, 1],
#         [3, 2, 3, 3, 2, 3, 3, 2, 3]], dtype=torch.int32)
# tensor size: torch.Size([4, 9])
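A minimal sketch of the copy-versus-view difference between repeat() and expand():

import torch

base = torch.IntTensor([[1, 2, 3]])
rep = base.repeat(2, 1)  # copies into new storage
exp = base.expand(2, 3)  # view of the same storage
base[0, 0] = 9
print(rep[0, 0].item())  # 1: the copy is unaffected
print(exp[0, 0].item())  # 9: the view sees the write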

 

10. tensor.unfold(dim, size, step)

Extracts sliding windows of length size, taken every step elements, along dimension dim; each window becomes the last dimension of the result. In the example below, each row of length 3 yields a single window of length 2, since a second window starting at index 2 would run past the end of the row. Overlapping windows are sketched after the example.

tensor_01 = torch.IntTensor([[1, 2, 1], [3, 2, 3]])
print('\ntensor_01:\n', tensor_01, '\ntensor size:', tensor_01.size())

# Output
# tensor_01:
#  tensor([[1, 2, 1],
#         [3, 2, 3]], dtype=torch.int32) 
# tensor size: torch.Size([2, 3])


tensor_02 = tensor_01.unfold(1, 2, 2)
print('\ntensor_02:\n', tensor_02, '\ntensor size:', tensor_02.size())
# Output
# tensor_02:
#  tensor([[[1, 2]],
#
#         [[3, 2]]], dtype=torch.int32)
# tensor size: torch.Size([2, 1, 2])
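With step 1 the windows overlap; a minimal sketch:

import torch

t = torch.arange(5)       # tensor([0, 1, 2, 3, 4])
print(t.unfold(0, 3, 1))  # windows of size 3, stride 1
# tensor([[0, 1, 2],
#         [1, 2, 3],
#         [2, 3, 4]])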

 
