PyTorch tensor data types

**1. Scalars: dimension 0**

Typically used for loss values.

In [1]: import torch

In [2]: a = torch.tensor(1.)  # dimension 0

In [3]: b = torch.tensor(1.3)

In [4]: c = torch.tensor(2.2)

In [5]: a.shape,b.shape,c.shape
Out[5]: (torch.Size([]), torch.Size([]), torch.Size([]))

In [6]: len(a.shape),len(b.shape),len(c.shape)
Out[6]: (0, 0, 0)

In [7]: a.size(),b.size(),c.size()
Out[7]: (torch.Size([]), torch.Size([]), torch.Size([]))
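As a concrete case of the loss usage mentioned above, a minimal sketch (the predictions and targets here are made up for illustration):

```python
import torch
import torch.nn.functional as F

pred = torch.tensor([0.5, 0.8])
target = torch.tensor([1.0, 0.0])

# mse_loss reduces over all elements, so the result is a 0-dim (scalar) tensor
loss = F.mse_loss(pred, target)

scalar_dims = loss.dim()   # 0
loss_value = loss.item()   # .item() extracts the scalar as a Python float
```

`.item()` is the usual way to log such a scalar loss, since it detaches the value from the graph.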

**2. Vectors: dimension 1**

Typically used for bias vectors and linear-layer input.
dim/rank: a tensor of shape [2, 3] (2 rows, 3 columns) has dim 2, which equals the length of its shape/size.
shape/size: [2, 3], i.e. 2 rows and 3 columns.

In [8]: torch.tensor([1.1])  # initialize from explicit vector data
Out[8]: tensor([1.1000])

In [9]: torch.tensor([1.1,2.2])  # initialize from explicit vector data
Out[9]: tensor([1.1000, 2.2000])

In [10]: torch.FloatTensor(1)  # specify only the length; memory is uninitialized, so the values are arbitrary (the constructor fixes the dtype)
Out[10]: tensor([-7.6196e+18])

In [11]: torch.FloatTensor(2)  # specify only the length; uninitialized
Out[11]: tensor([  0.0000, 436.8717])
In [12]: import numpy as np

In [13]: data = np.ones(2)  # initialize via NumPy: a length-2 vector of ones

In [14]: data
Out[14]: array([1., 1.])

In [15]: data = torch.from_numpy(data)  # convert from numpy to torch

In [16]: data
Out[16]: tensor([1., 1.], dtype=torch.float64)
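Note the `dtype=torch.float64` in the output: `from_numpy` keeps NumPy's default 64-bit precision rather than PyTorch's default `float32`. A quick sketch of casting it down:

```python
import numpy as np
import torch

data = torch.from_numpy(np.ones(2))  # inherits NumPy's default float64
converted = data.float()             # cast to torch.float32 (FloatTensor)
```

Mixing `float64` inputs with `float32` model weights raises a dtype error, so this cast is a common first step after loading NumPy data.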
In [17]: a = torch.ones(2)

In [18]: a.shape
Out[18]: torch.Size([2])

In [19]: a.size()
Out[19]: torch.Size([2])
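To illustrate the bias usage mentioned above, a small sketch (layer sizes chosen arbitrarily):

```python
import torch
import torch.nn as nn

# the bias of a Linear layer is a 1-D tensor with out_features elements
layer = nn.Linear(in_features=3, out_features=4)

bias_shape = layer.bias.shape  # torch.Size([4])
bias_rank = layer.bias.dim()   # 1
```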

**3. Tensors of dimension 2**

Typically used for batched linear-layer input: [batch, features].

In [20]: a = torch.randn(2,3)  # 2x3 tensor drawn from a standard normal distribution

In [21]: a
Out[21]:
tensor([[ 1.0761, -0.0328,  0.7483],
        [ 1.4165, -0.0385,  0.1386]])

In [22]: a.shape
Out[22]: torch.Size([2, 3])

In [23]: a.size()
Out[23]: torch.Size([2, 3])

In [24]: a.size(1)  # size of a's second dimension
Out[24]: 3

In [25]: a.shape[1]
Out[25]: 3

In [26]: a.type()
Out[26]: 'torch.FloatTensor'
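A sketch of the [batch, features] usage, assuming flattened 28x28 images and a batch size of 4:

```python
import torch
import torch.nn as nn

# a batch of 4 flattened 28x28 images: shape [batch, features]
x = torch.randn(4, 784)
layer = nn.Linear(784, 10)

out = layer(x)  # shape [4, 10]: one 10-way score vector per sample
```

The first dimension is left alone by `nn.Linear`; only the feature dimension changes.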

**4. Tensors of dimension 3**

Typically used for RNN input: [seq_len, batch, feature].

In [27]: a = torch.rand(1,2,3)

In [28]: a
Out[28]:
tensor([[[0.6517, 0.0797, 0.3515],
         [0.7479, 0.3852, 0.1333]]])

In [29]: a.shape
Out[29]: torch.Size([1, 2, 3])

In [30]: a[0]
Out[30]:
tensor([[0.6517, 0.0797, 0.3515],
        [0.7479, 0.3852, 0.1333]])

In [31]: list(a.shape)  # convert to a Python list
Out[31]: [1, 2, 3]

In [32]: len(a.shape)
Out[32]: 3
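A sketch of feeding such a 3-D tensor to an RNN (the sizes 10/3/100/20 are made up for illustration; `nn.RNN` defaults to the [seq_len, batch, input_size] layout):

```python
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=100, hidden_size=20)

x = torch.rand(10, 3, 100)  # 10 time steps, batch of 3, 100 features each
out, h = rnn(x)

out_shape = out.shape  # [10, 3, 20]: hidden state at every time step
h_shape = h.shape      # [1, 3, 20]: final hidden state per layer
```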

**5. Tensors of dimension 4**

Typically used for CNN input: [b, c, h, w] (batch, channels, height, width).

In [33]: a = torch.rand(2,3,28,28)

In [34]: a
Out[34]:
tensor([[[[0.4413, 0.6023, 0.8946,  ..., 0.0058, 0.4728, 0.4770],
          [0.4582, 0.5715, 0.7735,  ..., 0.1879, 0.0363, 0.1729],
          [0.6247, 0.3994, 0.4455,  ..., 0.0498, 0.8086, 0.7647],
          ...,
          [0.5249, 0.6952, 0.2938,  ..., 0.2363, 0.0110, 0.7989],
          [0.5593, 0.9656, 0.5264,  ..., 0.1644, 0.0069, 0.6697],
          [0.3970, 0.5731, 0.5084,  ..., 0.5066, 0.8005, 0.4400]],

         [[0.4275, 0.8033, 0.7818,  ..., 0.0160, 0.7293, 0.3070],
          [0.4083, 0.5342, 0.1377,  ..., 0.9584, 0.4040, 0.9301],
          [0.2077, 0.7207, 0.3162,  ..., 0.4281, 0.8195, 0.7971],
          ...,
          [0.1678, 0.1541, 0.4059,  ..., 0.7927, 0.5030, 0.2784],
          [0.0298, 0.7175, 0.2752,  ..., 0.5578, 0.0180, 0.1535],
          [0.0717, 0.2973, 0.7143,  ..., 0.0634, 0.2705, 0.9997]],

         [[0.5916, 0.1105, 0.1840,  ..., 0.7606, 0.8341, 0.4578],
          [0.3380, 0.2287, 0.9080,  ..., 0.6359, 0.9898, 0.6519],
          [0.5263, 0.1795, 0.7010,  ..., 0.7925, 0.8178, 0.4332],
          ...,
          [0.2794, 0.6113, 0.0250,  ..., 0.2741, 0.3120, 0.7886],
          [0.8941, 0.9090, 0.0788,  ..., 0.2348, 0.7738, 0.4604],
          [0.7085, 0.5492, 0.3585,  ..., 0.3319, 0.1001, 0.3708]]],


        [[[0.0674, 0.2430, 0.3269,  ..., 0.5516, 0.6014, 0.5814],
          [0.9130, 0.3829, 0.4547,  ..., 0.7470, 0.4801, 0.3078],
          [0.8683, 0.5182, 0.5958,  ..., 0.2616, 0.7910, 0.7378],
          ...,
          [0.9177, 0.8683, 0.8245,  ..., 0.3103, 0.4358, 0.6122],
          [0.7937, 0.3543, 0.2006,  ..., 0.5700, 0.2810, 0.5819],
          [0.2532, 0.1863, 0.3045,  ..., 0.2702, 0.1288, 0.8922]],

         [[0.4606, 0.5418, 0.3269,  ..., 0.7471, 0.0283, 0.2007],
          [0.4622, 0.8163, 0.0815,  ..., 0.6473, 0.9529, 0.8228],
          [0.0182, 0.1633, 0.3676,  ..., 0.4639, 0.0091, 0.8002],
          ...,
          [0.8746, 0.4621, 0.2347,  ..., 0.9328, 0.0392, 0.2405],
          [0.8779, 0.0938, 0.9562,  ..., 0.2929, 0.7262, 0.2129],
          [0.1163, 0.9473, 0.4633,  ..., 0.3965, 0.4285, 0.5328]],

         [[0.3419, 0.4119, 0.2530,  ..., 0.3823, 0.4562, 0.0200],
          [0.3054, 0.5046, 0.3681,  ..., 0.5600, 0.3573, 0.3020],
          [0.0272, 0.3559, 0.4984,  ..., 0.7811, 0.0495, 0.4028],
          ...,
          [0.6401, 0.3150, 0.7521,  ..., 0.8772, 0.5639, 0.7079],
          [0.5908, 0.0573, 0.3290,  ..., 0.8969, 0.6228, 0.5157],
          [0.9156, 0.6839, 0.1122,  ..., 0.0912, 0.0572, 0.8246]]]])

In [35]: a.shape
Out[35]: torch.Size([2, 3, 28, 28])
In [36]: a.numel()  # total number of elements: 2*3*28*28 = 4704
Out[36]: 4704

In [37]: a.dim()  # number of dimensions (rank)
Out[37]: 4
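A sketch of the [b, c, h, w] tensor above flowing through a convolution layer (the 16 output channels and padding choice are arbitrary):

```python
import torch
import torch.nn as nn

# [b, c, h, w]: batch of 2 RGB 28x28 images
x = torch.rand(2, 3, 28, 28)

# 3 input channels -> 16 output channels; padding=1 keeps the 28x28 spatial size
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
out = conv(x)  # shape [2, 16, 28, 28]
```

Only the channel dimension changes here; with different stride or padding the spatial dimensions would change too.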

In [38]: a = torch.tensor(1)

In [39]: a.dim()
Out[39]: 0