10. 09_Dimension transformations: view, reshape, unsqueeze, squeeze, expand, repeat, matrix transpose, transpose (dimension swap), permute, Broadcasting

1.9.Tensor dimension transformations
1.9.1.view, reshape
1.9.2.unsqueeze
1.9.3.squeeze
1.9.4.expand
1.9.5.repeat
1.9.6.Matrix transpose
1.9.7.transpose: dimension swap
1.9.8.permute
1.10.Broadcasting

1.9.Tensor dimension transformations

1.9.1.view, reshape

1. The two do essentially the same thing: the data is flattened in order and then reshaped to the target shape. (view additionally requires the tensor's memory to be contiguous, while reshape copies the data when necessary; see the sketch at the end of this subsection.)
2. The total number of elements must be unchanged by the reshape, i.e. the shapes must satisfy a*b*…*f = x*y*…*z.
3. PyTorch's reshape follows numpy's reshape.
4. -1 means that the size of that dimension is inferred from the remaining, explicitly given dimensions.

# -*- coding: UTF-8 -*-

import torch
a = torch.rand(2,3,2,3)
print(a)
"""
Output:
tensor([[[[0.6265, 0.0326, 0.1273],
          [0.3120, 0.4449, 0.5718]],
         [[0.1046, 0.7776, 0.9645],
          [0.1902, 0.7404, 0.0302]],
         [[0.5433, 0.1578, 0.6770],
          [0.5591, 0.2102, 0.6344]]],
        [[[0.5778, 0.3997, 0.9127],
          [0.8123, 0.8689, 0.4470]],
         [[0.2341, 0.0349, 0.2872],
          [0.0609, 0.5397, 0.3100]],
         [[0.4623, 0.0076, 0.3278],
          [0.6248, 0.0155, 0.1211]]]])
"""

# The data is flattened in order and then reshaped to the target shape, so the element
# order is preserved and the total number of elements stays the same.
print(a.view(4, 9))
"""
Output:
tensor([[0.2667, 0.4700, 0.0454, 0.9939, 0.7008, 0.1362, 0.7544, 0.4472, 0.2111],
        [0.5638, 0.8971, 0.3139, 0.8841, 0.4913, 0.2058, 0.8583, 0.1505, 0.0892],
        [0.7533, 0.7021, 0.6080, 0.6848, 0.5243, 0.1918, 0.4089, 0.3315, 0.7276],
        [0.9459, 0.1081, 0.0461, 0.8141, 0.5869, 0.5843, 0.2659, 0.7687, 0.4242]])
"""

# reshape has the same effect as view
print(a.reshape(4, 9))
"""
Output:
tensor([[0.7789, 0.1785, 0.6387, 0.7912, 0.1439, 0.2618, 0.6727, 0.4800, 0.0564],
        [0.3654, 0.0296, 0.6067, 0.3572, 0.0439, 0.6657, 0.4368, 0.2253, 0.2332],
        [0.3263, 0.7157, 0.4796, 0.1004, 0.6954, 0.6626, 0.0379, 0.4982, 0.9791],
        [0.5354, 0.4496, 0.7845, 0.4297, 0.7509, 0.0035, 0.1737, 0.8664, 0.2800]])
"""

# -1 means the size of this dimension is inferred from the remaining ones
# here the total is 2*3*2*3 = 36 elements, and 36 / 4 = 9
print(a.reshape(4, -1))
"""
Output:
tensor([[0.1846, 0.5650, 0.4715, 0.1910, 0.5250, 0.6251, 0.4083, 0.3805, 0.9292],
        [0.7696, 0.2830, 0.0341, 0.3543, 0.8685, 0.4288, 0.2090, 0.5807, 0.1181],
        [0.0733, 0.9524, 0.8143, 0.4773, 0.3792, 0.8397, 0.0287, 0.6018, 0.6265],
        [0.6263, 0.1427, 0.5324, 0.7368, 0.7149, 0.6638, 0.5361, 0.0298, 0.4793]])
"""

print("------------------------------------------")

# 此处为2*3*2*3=36, 36/4=9
print(a.reshape(4, -1))
"""
输出结果:
tensor([[0.3042, 0.5191, 0.7625, 0.9274, 0.1921, 0.0750, 0.1138, 0.5479, 0.3058],
        [0.2298, 0.8897, 0.2379, 0.9171, 0.3238, 0.9612, 0.1647, 0.8832, 0.4885],
        [0.0331, 0.8462, 0.4061, 0.5430, 0.9528, 0.8883, 0.0474, 0.5019, 0.2368],
        [0.7151, 0.5584, 0.2686, 0.6878, 0.0322, 0.3292, 0.9116, 0.4619, 0.3833]])
"""

print(">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>")
# 36 / (2 * 3) = 6
# so here -1 resolves to 6
print(a.reshape(2, 3, -1))
"""
tensor([[[0.1343, 0.1076, 0.6838, 0.4183, 0.6465, 0.7390],
         [0.3154, 0.6980, 0.3021, 0.9203, 0.3105, 0.9567],
         [0.5732, 0.5508, 0.6185, 0.1907, 0.5157, 0.0187]],
        [[0.2193, 0.6670, 0.8597, 0.7905, 0.0997, 0.0762],
         [0.6765, 0.3075, 0.7198, 0.7030, 0.1858, 0.5082],
         [0.4665, 0.3556, 0.0429, 0.2645, 0.3454, 0.7767]]])
"""

print("======================================")

# The product of the explicitly given sizes (everything except -1) must evenly divide the total number of elements
# Here 5 does not divide 36, so this raises an error
# print(a.reshape(5, -1))
"""

输出结果:
Traceback (most recent call last):
  File "D:\installed\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3326, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-2-7ad3dc423fa5>", line 1, in <module>
    runfile('E:/workspace/pytorch-learn/09_维度变换/09_维度变换.py', wdir='E:/workspace/pytorch-learn/09_维度变换')
  File "C:\Program Files\JetBrains\PyCharm 2019.2.4\helpers\pydev\_pydev_bundle\pydev_umd.py", line 197, in runfile
    pydev_imports.execfile(filename, global_vars, local_vars)  # execute the script
  File "C:\Program Files\JetBrains\PyCharm 2019.2.4\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "E:/workspace/pytorch-learn/09_维度变换/09_维度变换.py", line 82, in <module>
    print(a.reshape(5, -1))
RuntimeError: shape '[5, -1]' is invalid for input of size 36
"""

print(a.view(4, 9).view(2, 3, 2, 3))
"""
tensor([[[[0.8306, 0.3289, 0.6837],
          [0.1553, 0.8128, 0.4839]],
          
         [[0.2005, 0.5720, 0.3754],
          [0.0711, 0.2138, 0.6295]],
          
         [[0.3155, 0.8848, 0.3345],
          [0.7782, 0.6487, 0.5341]]],
          
        [[[0.5153, 0.3421, 0.2283],
          [0.1855, 0.2059, 0.1031]],
          
         [[0.1763, 0.3316, 0.6055],
          [0.8860, 0.3544, 0.6888]],
          
         [[0.7657, 0.7608, 0.8756],
          [0.7559, 0.3434, 0.7446]]]])
"""

1.9.2.unsqueeze

1. Function: inserts a new dimension of size 1 at the specified position.
2. The dimension index must be given, otherwise an error is raised.

# -*- coding: UTF-8 -*-

import torch

a = torch.randn(2, 3)
print(a)
"""
Output:
tensor([[ 0.3445,  1.0431,  0.0649],
        [ 2.0011, -1.9437,  1.3459]])
"""

# Insert a new dimension at position 0
print(a.unsqueeze(0))
# Negative indices are also supported; this is equivalent to a.unsqueeze(-3)
"""
Output:
tensor([[[ 0.1839, -0.2338, -0.7837],
         [ 0.5659,  0.1431,  0.6513]]])
"""

print(a.unsqueeze(1))
# Equivalent to a.unsqueeze(-2)
"""
Output:
tensor([[[-0.9794,  1.5992,  1.6226]],
        [[-0.1168, -0.5305,  0.2656]]])
"""

print(a.unsqueeze(2))
# Equivalent to a.unsqueeze(-1)
"""
Output:
tensor([[[-0.9512],
         [-0.0490],
         [ 0.3785]],
        [[-0.6573],
         [ 0.0302],
         [ 0.4737]]])
"""

# Add one dimension at position 0 and another at the last position;
# compared with the output above, there is an extra pair of brackets on the outside
print(a.unsqueeze(0).unsqueeze(-1))
"""
tensor([[[[-0.6981],
          [-0.8682],
          [-0.3138]],

         [[ 0.0167],
          [-0.1493],
          [ 0.1665]]]])
"""

1.9.3.squeeze

squeeze removes dimensions of size 1

1. By default, all dimensions of size 1 are removed.
2. If a dimension index is given, that dimension is removed only when its size is 1 (see the sketch at the end of this subsection).

# -*- coding: UTF-8 -*-

import torch

a = torch.rand(2, 1, 3, 1, 2, 1)
print(a)
"""
tensor([[[[[[0.7556],
            [0.6847]]],
            
          [[[0.3153],
            [0.5923]]],
            
          [[[0.5427],
            [0.5048]]]]],
            
        [[[[[0.3298],
            [0.9076]]],
            
          [[[0.3010],
            [0.2370]]],
            
          [[[0.7194],
            [0.4149]]]]]])
"""

# By default all size-1 dimensions are removed: (2, 1, 3, 1, 2, 1) becomes (2, 3, 2)
print(a.squeeze())
"""
Output:
tensor([[[0.7331, 0.1108],
         [0.8523, 0.4815],
         [0.7817, 0.9866]],
        [[0.2483, 0.1248],
         [0.3179, 0.2502],
         [0.2722, 0.9329]]])
"""

# squeeze does not modify a in place: a keeps its original shape
print(a)
"""
tensor([[[[[[0.7331],
            [0.1108]]],
            
          [[[0.8523],
            [0.4815]]],
            
          [[[0.7817],
            [0.9866]]]]],
            
        [[[[[0.2483],
            [0.1248]]],
            
          [[[0.3179],
            [0.2502]]],
            
          [[[0.2722],
            [0.9329]]]]]])
"""

# Remove the size-1 dimensions one at a time by index; same effect as above
print(a.squeeze(1).squeeze(2).squeeze(3))
"""
Output:
tensor([[[0.2862, 0.6455],
         [0.2383, 0.2630],
         [0.6638, 0.4293]],
         
        [[0.6390, 0.1212],
         [0.9523, 0.7255],
         [0.5178, 0.3208]]])
"""

1.9.4.expand

Expanding a tensor: torch.Tensor.expand(*sizes) -> Tensor
Returns a new view of the tensor in which singleton dimensions are expanded to a larger size. The tensor can also be expanded to a higher number of dimensions, with the new dimensions appended at the front. Expanding does not allocate new memory: it only creates a new view of the tensor in which a size-1 dimension is expanded by setting its stride to 0, so any size-1 dimension can be expanded to an arbitrary size without copying (see the sketch at the end of this subsection).

1. Function: expands the tensor to the given sizes; the expanded positions all read the data of the original size-1 dimension.
2. Only dimensions of size 1 can be expanded.
3. -1 keeps the original size of that dimension.

# -*- coding: UTF-8 -*-

import torch

a = torch.rand(2,3,1)
print(a)
"""
Output:
tensor([[[0.9564],
         [0.0760],
         [0.2326]],
        [[0.9647],
         [0.8662],
         [0.7067]]])
"""

# Only size-1 dimensions can be expanded; here (2, -1, 3) is equivalent to (2, 3, 3)
print(a.expand(2, -1, 3))
"""
Output:
tensor([[[0.9564, 0.9564, 0.9564],
         [0.0760, 0.0760, 0.0760],
         [0.2326, 0.2326, 0.2326]],
         
        [[0.9647, 0.9647, 0.9647],
         [0.8662, 0.8662, 0.8662],
         [0.7067, 0.7067, 0.7067]]])
"""

1.9.5.repeat

torch.Tensor.repeat(*sizes)
Repeats the tensor along each dimension. Unlike expand(), this function copies the tensor's data.

1. Each argument is the number of times the data is repeated along that dimension.
2. 1 keeps that dimension as it is.
3. This method is generally not recommended: because the data is copied, new memory is allocated and more memory is used (see the comparison sketch after the example below).

# -*- coding: UTF-8 -*-

import torch

a = torch.rand(2, 3, 1)

print(a)
"""
tensor([[[0.8991],
         [0.5624],
         [0.0676]],
         
        [[0.9069],
         [0.2048],
         [0.0300]]])
"""

# Repeat the last dimension 3 times: shape (2, 3, 1) -> (2, 3, 3)
print(a.repeat(1, 1, 3))
"""
tensor([[[0.8991, 0.8991, 0.8991],
         [0.5624, 0.5624, 0.5624],
         [0.0676, 0.0676, 0.0676]],
         
        [[0.9069, 0.9069, 0.9069],
         [0.2048, 0.2048, 0.2048],
         [0.0300, 0.0300, 0.0300]]])
"""

1.9.6.Matrix transpose

1. Swaps rows and columns.
2. .t() only supports 2-D tensors (see the sketch at the end of this subsection).

# -*- coding: UTF-8 -*-

import torch

a = torch.rand(5, 2)
print(a)
"""
tensor([[0.2898, 0.7774],
        [0.3639, 0.1799],
        [0.7717, 0.8179],
        [0.9935, 0.7679],
        [0.6835, 0.4301]])
"""

print(a.t())
"""
tensor([[0.2898, 0.3639, 0.7717, 0.9935, 0.6835],
        [0.7774, 0.1799, 0.8179, 0.7679, 0.4301]])
"""

1.9.7.transpose: dimension swap

torch.transpose(input, dim0, dim1) -> Tensor
Returns a transposed view of input with dimensions dim0 and dim1 swapped. The output tensor shares memory with the input tensor, so modifying one also modifies the other (see the sketch at the end of this subsection).

1. Equivalent to transposing the data across the two selected dimensions, i.e. swapping the "rows" and "columns" formed by those dimensions.
2. It can also be pictured as rotating each row clockwise by 90 degrees into a column and appending the columns one after another on the right (at the end).

# -*- coding: UTF-8 -*-

import torch

a = torch.rand(2, 3, 4)
print(a)
"""
tensor([[[0.7630, 0.1244, 0.0539, 0.9472],
         [0.6098, 0.4826, 0.5270, 0.7383],
         [0.7582, 0.1667, 0.1692, 0.1481]],
         
        [[0.1000, 0.7165, 0.1339, 0.3319],
         [0.4485, 0.0099, 0.5476, 0.5260],
         [0.5430, 0.3896, 0.9594, 0.6139]]])
"""

# Swap dimensions 0 and 2: shape (2, 3, 4) -> (4, 3, 2)
print(a.transpose(0, 2))
"""
tensor([[[0.7630, 0.1000],
         [0.6098, 0.4485],
         [0.7582, 0.5430]],
         
        [[0.1244, 0.7165],
         [0.4826, 0.0099],
         [0.1667, 0.3896]],
         
        [[0.0539, 0.1339],
         [0.5270, 0.5476],
         [0.1692, 0.9594]],
         
        [[0.9472, 0.3319],
         [0.7383, 0.5260],
         [0.1481, 0.6139]]])
"""

# transpose returns a non-contiguous view, so contiguous() is required before view()
print(a.transpose(0, 2).contiguous().view(4, 6))
"""
tensor([[0.7630, 0.1000, 0.6098, 0.4485, 0.7582, 0.5430],
        [0.1244, 0.7165, 0.4826, 0.0099, 0.1667, 0.3896],
        [0.0539, 0.1339, 0.5270, 0.5476, 0.1692, 0.9594],
        [0.9472, 0.3319, 0.7383, 0.5260, 0.1481, 0.6139]])
"""

1.9.8.permute

permute(dims) reorders the dimensions of a tensor. The arguments are a sequence of integers giving the new ordering of the original dimensions. For example, if an image img has size (28, 28, 3), img.permute(2, 0, 1) yields a tensor of size (3, 28, 28); see the sketch at the end of this subsection.

Differences from transpose:
1. transpose can only swap two dimensions at a time.
2. permute can reorder any number of dimensions at once; it is effectively a sequence of transpose steps.
3. The arguments are dimension indices.

# -*- coding: UTF-8 -*-

import torch

a = torch.rand(2, 3)
print(a)
# print(a.size(0))  # a.size(0) is 2, a.size(1) is 3
"""
Output:
tensor([[0.5538, 0.7207, 0.9182],
        [0.7058, 0.9794, 0.2337]])
"""

print(a.permute(1, 0))
"""
Output:
tensor([[0.5538, 0.7058],
        [0.7207, 0.9794],
        [0.9182, 0.2337]])
"""

# For this 2-D case a single transpose gives the same result as permute;
# a higher-dimensional permute may require several transpose calls
print(a.transpose(0, 1))
"""
Output:
tensor([[0.5538, 0.7058],
        [0.7207, 0.9794],
        [0.9182, 0.2337]])
"""

1.10.Broadcasting

1. Definition: when two tensors with different shapes are combined (for example, added), the smaller one is automatically expanded to match the shape of the larger one.
2. Feature: the expansion is automatic and does not copy any data, so it saves memory.
3. Prerequisite: aligning shapes from the trailing dimensions, each dimension of the smaller tensor must either be absent, have size 1, or match the corresponding dimension of the larger tensor; otherwise an error is raised (see the sketch after the example below).

# -*- coding: UTF-8 -*-

import torch
a = torch.rand(3, 4, 2)
print(a)

# b's first dimension is 1, while a's is 3
b = torch.rand(1, 4, 2)
print(b)


# When adding, b is automatically broadcast to size 3 along dimension 0 and then added to a
print(a + b)

"""
Output:

tensor([[[0.2051, 0.3002],
         [0.7875, 0.8702],
         [0.8102, 0.1946],
         [0.4178, 0.0421]],
        [[0.5986, 0.1744],
         [0.9141, 0.6217],
         [0.3636, 0.6171],
         [0.5568, 0.9626]],
        [[0.8271, 0.1875],
         [0.8445, 0.5693],
         [0.2277, 0.1845],
         [0.4858, 0.4569]]])
tensor([[[0.5436, 0.3877],
         [0.8810, 0.7022],
         [0.9177, 0.0713],
         [0.8664, 0.9766]]])
tensor([[[0.7487, 0.6879],
         [1.6685, 1.5723],
         [1.7280, 0.2660],
         [1.2842, 1.0187]],

        [[1.1422, 0.5621],
         [1.7951, 1.3239],
         [1.2814, 0.6885],
         [1.4232, 1.9393]],

        [[1.3707, 0.5752],
         [1.7255, 1.2715],
         [1.1454, 0.2558],
         [1.3523, 1.4336]]])
"""
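
A short sketch of rule 3: a trailing dimension broadcasts when the sizes match or one of them is 1, a tensor with fewer dimensions is aligned from the right, and anything else raises an error:

import torch

a = torch.rand(3, 4, 2)

print((a + torch.rand(2)).shape)     # (2,)   -> (1, 1, 2) -> broadcast to (3, 4, 2)
print((a + torch.rand(4, 1)).shape)  # (4, 1) -> (1, 4, 1) -> broadcast to (3, 4, 2)

try:
    a + torch.rand(3)                # trailing sizes 2 vs 3: not broadcastable
except RuntimeError as e:
    print(e)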