PyTorch cat() and stack() Explained in Detail

torch.cat()

cat is short for concatenate, i.e. to join together; the torch.cat() function is generally used to concatenate tensors.

cat(tensors: Union[Tuple[Tensor, ...], List[Tensor]], dim: _int = 0, *, out: Optional[Tensor] = None) -> Tensor:

Looking at cat()'s parameters, the commonly used ones are: the first argument, a tuple or list containing the tensors to be concatenated, in the desired order; and the second argument dim, which specifies the dimension along which to concatenate.

import torch
import numpy as np

data1 = torch.randint(0, 10, [2, 3, 4])
data2 = torch.randint(0, 10, [2, 3, 4])

print(data1)
print(data2)
print("-" * 20)

print(torch.cat([data1, data2], dim=0))  # result shape: [4, 3, 4]
print(torch.cat([data1, data2], dim=1))  # result shape: [2, 6, 4]
print(torch.cat([data1, data2], dim=2))  # result shape: [2, 3, 8]
# tensor([[[9, 4, 0, 0],
#          [3, 3, 7, 6],
#          [6, 1, 0, 8]],
# 
#         [[9, 1, 1, 2],
#          [1, 0, 6, 4],
#          [7, 9, 3, 9]]])
# tensor([[[3, 2, 6, 3],
#          [8, 3, 1, 1],
#          [0, 9, 2, 5]],
# 
#         [[2, 6, 7, 5],
#          [9, 1, 0, 1],
#          [0, 6, 4, 4]]])
# --------------------
# tensor([[[9, 4, 0, 0],
#          [3, 3, 7, 6],
#          [6, 1, 0, 8]],
# 
#         [[9, 1, 1, 2],
#          [1, 0, 6, 4],
#          [7, 9, 3, 9]],
# 
#         [[3, 2, 6, 3],
#          [8, 3, 1, 1],
#          [0, 9, 2, 5]],
# 
#         [[2, 6, 7, 5],
#          [9, 1, 0, 1],
#          [0, 6, 4, 4]]])
# tensor([[[9, 4, 0, 0],
#          [3, 3, 7, 6],
#          [6, 1, 0, 8],
#          [3, 2, 6, 3],
#          [8, 3, 1, 1],
#          [0, 9, 2, 5]],
# 
#         [[9, 1, 1, 2],
#          [1, 0, 6, 4],
#          [7, 9, 3, 9],
#          [2, 6, 7, 5],
#          [9, 1, 0, 1],
#          [0, 6, 4, 4]]])
# tensor([[[9, 4, 0, 0, 3, 2, 6, 3],
#          [3, 3, 7, 6, 8, 3, 1, 1],
#          [6, 1, 0, 8, 0, 9, 2, 5]],
# 
#         [[9, 1, 1, 2, 2, 6, 7, 5],
#          [1, 0, 6, 4, 9, 1, 0, 1],
#          [7, 9, 3, 9, 0, 6, 4, 4]]])

The code above shows the results of concatenating along dimensions 0, 1, and 2. As you can see, cat() does not change the number of dimensions of the tensors. For the 3-D tensors above, dim=0 concatenates by block (the 2-D tensors formed by the last two dimensions), dim=1 concatenates by row, and dim=2 concatenates by column.
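As a quick sanity check on the shapes (a minimal sketch; the names a and b are just illustrative): only the size of the concatenated dimension changes, and all the other dimensions must already match.

import torch

a = torch.randint(0, 10, [2, 3, 4])
b = torch.randint(0, 10, [2, 3, 4])

# Only the concatenated dimension grows; the remaining dimensions must match.
print(torch.cat([a, b], dim=0).shape)  # torch.Size([4, 3, 4])
print(torch.cat([a, b], dim=1).shape)  # torch.Size([2, 6, 4])
print(torch.cat([a, b], dim=2).shape)  # torch.Size([2, 3, 8])

# Tensors whose non-concatenated dimensions differ cannot be joined:
# torch.cat([a, torch.randint(0, 10, [2, 5, 4])], dim=0)  # raises RuntimeError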

torch.stack()

stack means stacking or piling up, as in a stack.

stack(tensors: Union[Tuple[Tensor, ...], List[Tensor]], dim: _int = 0, *, out: Optional[Tensor] = None) -> Tensor: 

As you can see, stack() is used almost exactly like cat(): it takes a list or tuple of the tensors to stack, plus the dimension dim along which to stack them. Unlike cat(), however, stack() inserts a new dimension.

import torch
import numpy as np

data1 = torch.randint(0, 10, [2, 3, 4])
data2 = torch.randint(0, 10, [2, 3, 4])

print(data1)
print(data2)
print("-" * 20)

data3 = torch.stack([data1, data2], dim=0)
data4 = torch.stack([data1, data2], dim=1)
data5 = torch.stack([data1, data2], dim=2)
data6 = torch.stack([data1, data2], dim=3)
print(data3.shape)
print(data3)
print(data4.shape)
print(data4)
print(data5.shape)
print(data5)
print(data6.shape)
print(data6)

# tensor([[[1, 6, 6, 1],
#          [3, 1, 8, 2],
#          [0, 4, 7, 3]],
# 
#         [[4, 7, 5, 6],
#          [5, 4, 0, 2],
#          [8, 0, 3, 0]]])
# tensor([[[5, 2, 7, 2],
#          [7, 4, 2, 0],
#          [8, 5, 5, 9]],
# 
#         [[7, 1, 5, 6],
#          [3, 5, 4, 7],
#          [1, 0, 8, 8]]])
# --------------------
# torch.Size([2, 2, 3, 4])
# tensor([[[[1, 6, 6, 1],
#           [3, 1, 8, 2],
#           [0, 4, 7, 3]],
# 
#          [[4, 7, 5, 6],
#           [5, 4, 0, 2],
#           [8, 0, 3, 0]]],
# 
# 
#         [[[5, 2, 7, 2],
#           [7, 4, 2, 0],
#           [8, 5, 5, 9]],
# 
#          [[7, 1, 5, 6],
#           [3, 5, 4, 7],
#           [1, 0, 8, 8]]]])
# torch.Size([2, 2, 3, 4])
# tensor([[[[1, 6, 6, 1],
#           [3, 1, 8, 2],
#           [0, 4, 7, 3]],
# 
#          [[5, 2, 7, 2],
#           [7, 4, 2, 0],
#           [8, 5, 5, 9]]],
# 
# 
#         [[[4, 7, 5, 6],
#           [5, 4, 0, 2],
#           [8, 0, 3, 0]],
# 
#          [[7, 1, 5, 6],
#           [3, 5, 4, 7],
#           [1, 0, 8, 8]]]])
# torch.Size([2, 3, 2, 4])
# tensor([[[[1, 6, 6, 1],
#           [5, 2, 7, 2]],
# 
#          [[3, 1, 8, 2],
#           [7, 4, 2, 0]],
# 
#          [[0, 4, 7, 3],
#           [8, 5, 5, 9]]],
# 
# 
#         [[[4, 7, 5, 6],
#           [7, 1, 5, 6]],
# 
#          [[5, 4, 0, 2],
#           [3, 5, 4, 7]],
# 
#          [[8, 0, 3, 0],
#           [1, 0, 8, 8]]]])
# torch.Size([2, 3, 4, 2])
# tensor([[[[1, 5],
#           [6, 2],
#           [6, 7],
#           [1, 2]],
# 
#          [[3, 7],
#           [1, 4],
#           [8, 2],
#           [2, 0]],
# 
#          [[0, 8],
#           [4, 5],
#           [7, 5],
#           [3, 9]]],
# 
# 
#         [[[4, 7],
#           [7, 1],
#           [5, 5],
#           [6, 6]],
# 
#          [[5, 3],
#           [4, 5],
#           [0, 4],
#           [2, 7]],
# 
#          [[8, 1],
#           [0, 0],
#           [3, 8],
#           [0, 8]]]])

Whatever dim is set to, the tensors are stacked along that dimension. With dim=0 the two tensors are stacked as wholes, adding a new leading dimension; with dim=1 the blocks of the two tensors (the 2-D tensors formed by the last two dimensions) are paired up; with dim=2 the corresponding rows of the two tensors are paired; and with dim=3 the corresponding individual values of the two tensors are paired.

As the number of dimensions grows, stack() becomes harder to visualize; see the code and output above for the details.
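One helpful way to reason about stack() is that stacking along dim is equivalent to first inserting a new size-1 dimension at that position with unsqueeze() and then concatenating there with cat(). A minimal sketch to verify this (the names a and b are just illustrative):

import torch

a = torch.randint(0, 10, [2, 3, 4])
b = torch.randint(0, 10, [2, 3, 4])

for d in range(4):  # 3-D inputs can be stacked along dims 0..3
    stacked = torch.stack([a, b], dim=d)
    # Equivalent: add a size-1 dimension at position d, then concatenate there.
    via_cat = torch.cat([a.unsqueeze(d), b.unsqueeze(d)], dim=d)
    print(d, stacked.shape, torch.equal(stacked, via_cat))
# 0 torch.Size([2, 2, 3, 4]) True
# 1 torch.Size([2, 2, 3, 4]) True
# 2 torch.Size([2, 3, 2, 4]) True
# 3 torch.Size([2, 3, 4, 2]) True

Also note that stack(), unlike cat(), requires all input tensors to have exactly the same shape, since each one fills one slot of the newly inserted dimension.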

Note that the dim parameter of cat() and stack() also accepts negative indices, i.e. dimensions can be indexed starting from -1.
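For example (a minimal sketch with the same illustrative a and b as above), dim=-1 refers to the last dimension of the result:

import torch

a = torch.randint(0, 10, [2, 3, 4])
b = torch.randint(0, 10, [2, 3, 4])

# For cat(), dim=-1 is the last existing dimension (dim=2 here).
print(torch.cat([a, b], dim=-1).shape)    # torch.Size([2, 3, 8])
# For stack(), dim=-1 is the last dimension of the result (dim=3 here).
print(torch.stack([a, b], dim=-1).shape)  # torch.Size([2, 3, 4, 2])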
