PyTorch (1): Tensor creation and common operations, summarized

print("If you can't explain it simply, you don't understand it well enough.")

Tensor
I. Creation
1. Create a tensor from data; data can be a list or an ndarray
torch.tensor(data, dtype=None, device=None, requires_grad=False)
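
A minimal sketch of creating tensors from a list and an ndarray (variable names are illustrative):

import numpy as np
import torch

t_list = torch.tensor([[1, 2], [3, 4]], dtype=torch.float32)  # from a Python list
t_nd = torch.tensor(np.ones((2, 2)))                          # from an ndarray (copies the data)
t_grad = torch.tensor([1.0, 2.0], requires_grad=True)         # track gradients for autograd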
2. Special tensors
# all zeros
torch.zeros(*sizes, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False)

# all ones
torch.ones(*sizes, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False)

# ones on the diagonal (identity)
torch.eye(n, m=None, out=None)
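
For example:

>>> torch.zeros(2, 3)
tensor([[0., 0., 0.],
        [0., 0., 0.]])

>>> torch.eye(3)
tensor([[1., 0., 0.],
        [0., 1., 0.],
        [0., 0., 1.]])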

# create a 1-D tensor of evenly spaced values
torch.linspace(start, end, steps=100, out=None) → Tensor

>>> torch.linspace(3, 10, steps=5)
tensor([ 3.0000,  4.7500,  6.5000,  8.2500, 10.0000])
3. Random tensors drawn from probability distributions
  • Uniform distribution on the interval [0, 1)
torch.rand(*sizes, out=None) → Tensor

A = torch.rand(*sizes)
# roughly equivalent, with A preallocated: the result is written into A instead of a new tensor
torch.rand(*sizes, out=A)
  • Standard normal distribution
torch.randn(*sizes, out=None) → Tensor
  • Normal distribution with element-wise means and standard deviations (see the sketch after this list)
torch.normal(mean, std, out=None) → Tensor
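
A minimal sketch of the three generators (shapes are illustrative):

import torch

u = torch.rand(2, 3)    # uniform on [0, 1)
n = torch.randn(2, 3)   # standard normal: mean 0, std 1
# normal(): each output element is drawn from its own N(mean_i, std_i)
m = torch.normal(mean=torch.arange(1., 5.), std=torch.ones(4) * 0.1)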
4. Other
torch.from_numpy(ndarray) → Tensor

Converts a numpy.ndarray into a PyTorch Tensor. The returned tensor and the ndarray share the same memory: modifying one also modifies the other. The returned tensor cannot be resized.
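
A quick sketch of the memory sharing:

import numpy as np
import torch

a = np.array([1, 2, 3])
t = torch.from_numpy(a)
a[0] = -1     # modifying the ndarray...
print(t)      # tensor([-1,  2,  3]) ...is reflected in the tensor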

II. Operations (within a tensor)
1. Inspecting

Dimensions:

tensor.shape

# equivalent to
tensor.size()
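
For example:

>>> x = torch.zeros(4, 3)
>>> x.shape
torch.Size([4, 3])
>>> x.size(0)   # size along a single dimension
4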

Use item() to get a Python number from a one-element tensor:

>>> x = torch.randn(1)

>>> print(x)
tensor([ 0.9422])

>>> print(x.item())
0.9422121644020081
2. Slicing & indexing

Python slicing primer: https://blog.csdn.net/weixin_37641832/article/details/85019378

You can use standard NumPy-style indexing operations:

>>> x = torch.rand(5, 3)
>>> x
tensor([[0.4855, 0.5683, 0.4672],
        [0.2081, 0.9601, 0.1051],
        [0.2781, 0.9928, 0.9806],
        [0.0874, 0.4235, 0.3454],
        [0.9175, 0.4068, 0.1874]])

# all rows, only the 2nd column (slicing keeps the dimension)
>>> x[:, 1: 2]
tensor([[0.5683],
        [0.9601],
        [0.9928],
        [0.4235],
        [0.4068]])

# all rows, only the 2nd column, but the column dimension is dropped (result is 1-D)
>>> x[:, 1]
tensor([0.5683, 0.9601, 0.9928, 0.4235, 0.4068])

# all rows, all columns from the 2nd column onward
>>> x[:, 1:]
tensor([[0.5683, 0.4672],
        [0.9601, 0.1051],
        [0.9928, 0.9806],
        [0.4235, 0.3454],
        [0.4068, 0.1874]])

torch.masked_select(input, mask, out=None) -> Tensor

>>> x = torch.randn(3, 4)
>>> x
tensor([[ 0.3552, -2.3825, -0.8297,  0.3477],
        [-1.2035,  1.2252,  0.5002,  0.6248],
        [ 0.1307, -2.0608,  0.1244,  2.0139]])

# greater than or equal to 0.5
>>> mask = x.ge(0.5)
>>> mask
tensor([[False, False, False, False],
        [False, True, True, True],
        [False, False, False, True]])

>>> torch.masked_select(x, mask)
tensor([ 1.2252,  0.5002,  0.6248,  2.0139])

The result is always a new 1-D tensor, regardless of the input's shape.

torch.index_select(input, dim, index, out=None) -> Tensor

>>> x = torch.randn(3, 4)
>>> x
tensor([[ 0.1427,  0.0231, -0.5414, -1.0009],
        [-0.4664,  0.2647, -0.1228, -1.1068],
        [-1.1734, -0.6571,  0.7230, -0.6004]])

>>> indices = torch.tensor([0, 2])
>>> torch.index_select(x, dim=0, index=indices)
tensor([[ 0.1427,  0.0231, -0.5414, -1.0009],
        [-1.1734, -0.6571,  0.7230, -0.6004]])

>>> torch.index_select(x, dim=1, index=indices)
tensor([[ 0.1427, -0.5414],
        [-0.4664, -0.1228],
        [-1.1734,  0.7230]])
3. Reshaping: view()
x = torch.randn(4, 4)
y = x.view(16)
z = x.view(-1, 8)  # the size -1 is inferred from other dimensions

print(x.size(), y.size(), z.size())

Output:

torch.Size([4, 4]) torch.Size([16]) torch.Size([2, 8])
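
Note that view() requires a memory layout compatible with the new shape; on a non-contiguous tensor it raises an error. A minimal sketch of the pitfall and two fixes:

import torch

x = torch.randn(4, 4)
t = x.t()                    # transposing makes the tensor non-contiguous
# t.view(16)                 # would raise a RuntimeError
y = t.contiguous().view(16)  # fix 1: copy into contiguous memory first
z = t.reshape(16)            # fix 2: reshape copies automatically when needed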

torch.reshape(input, shape) -> Tensor

Note: when the input tensor is contiguous in memory, the new tensor shares its data memory with input (otherwise the data is copied).

>>> a = torch.arange(4.)
>>> a
tensor([0., 1., 2., 3.])

>>> b = torch.reshape(a, (2, 2))
>>> b
tensor([[0., 1.],
        [2., 3.]])

# set the entire first row of b to 1024
>>> b[0] = 1024
>>> a
tensor([1024., 1024.,    2.,    3.])
4. Swapping dimensions: transpose()
>>> a = torch.randn(1, 2, 3, 4)

>>> a.size()
torch.Size([1, 2, 3, 4])

>>> b = a.transpose(1, 2)  # Swaps 2nd and 3rd dimension
>>> b.size()
torch.Size([1, 3, 2, 4])

>>> c = a.view(1, 3, 2, 4)  # Does not change tensor layout in memory
>>> c.size()
torch.Size([1, 3, 2, 4])

>>> torch.equal(b, c)
False

b and c differ because transpose() actually reorders the logical data (by swapping strides), while view() only reinterprets the same contiguous memory with a new shape.
5. Transpose: t()
>>> x = torch.rand(2, 3)

>>> print(x)
tensor([[0.4374, 0.6915, 0.9269],
        [0.9836, 0.3372, 0.6941]])

>>> print(x.t())
tensor([[0.4374, 0.9836],
        [0.6915, 0.3372],
        [0.9269, 0.6941]])
6. squeeze(input, dim=None, out=None) -> Tensor
  • dim: if None, remove all axes of length 1; if a dimension is given, it is removed only when its length is 1
>>> x = torch.zeros(2, 1, 2, 1, 2)
>>> x.size()
torch.Size([2, 1, 2, 1, 2])

>>> y = torch.squeeze(x)
>>> y.size()
torch.Size([2, 2, 2])

>>> y = torch.squeeze(x, 0)
>>> y.size()
torch.Size([2, 1, 2, 1, 2])

>>> y = torch.squeeze(x, 1)
>>> y.size()
torch.Size([2, 2, 1, 2])
7. unsqueeze(input, dim, out=None) -> Tensor
>>> x = torch.tensor([1, 2, 3, 4])

>>> torch.unsqueeze(x, 0)
tensor([[ 1,  2,  3,  4]])

>>> torch.unsqueeze(x, 1)
tensor([[ 1],
        [ 2],
        [ 3],
        [ 4]])
III. Operations (between tensors)
1. Math operations

See the official docs for the full list: https://pytorch.org/docs/stable/torch.html#math-operations

torch.add()
torch.sub()
torch.div()
torch.mul()    # element-wise multiplication, not matrix multiplication
torch.mm()     # matrix multiplication

torch.addcdiv()
torch.addcmul()

torch.abs(input, out=None)
torch.log(input, out=None)
torch.log10(input, out=None)
torch.log2(input, out=None)
torch.exp(input, out=None)
torch.pow()

torch.acos(input, out=None)
torch.cosh(input, out=None)
torch.cos(input, out=None)
torch.asin(input, out=None)
torch.atan(input, out=None)
torch.atan2(input, other, out=None)


a.floor()    # round down
a.ceil()     # round up
a.trunc()    # integer part
a.frac()     # fractional part
a.round()    # round to nearest
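
A small sketch contrasting element-wise and matrix multiplication, plus addcmul (out = input + value * tensor1 * tensor2):

import torch

a = torch.ones(2, 2) * 2
b = torch.ones(2, 2) * 3

print(torch.mul(a, b))   # element-wise: every entry is 2 * 3 = 6
print(torch.mm(a, b))    # matrix product: every entry is 2*3 + 2*3 = 12

print(torch.addcmul(torch.zeros(2, 2), a, b, value=0.5))  # every entry is 0.5 * 6 = 3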

Below, addition is used as a brief illustration.

Addition: method 1

x = torch.rand(5, 3)
y = torch.rand(5, 3)
print(x + y)

Output:

tensor([[-0.1859,  1.3970,  0.5236],
        [ 2.3854,  0.0707,  2.1970],
        [-0.3587,  1.2359,  1.8951],
        [-0.1189, -0.1376,  0.4647],
        [-1.8968,  2.0164,  0.1092]])

Addition: method 2

print(torch.add(x, y))

Output:

tensor([[-0.1859,  1.3970,  0.5236],
        [ 2.3854,  0.0707,  2.1970],
        [-0.3587,  1.2359,  1.8951],
        [-0.1189, -0.1376,  0.4647],
        [-1.8968,  2.0164,  0.1092]])

Addition: supplying an output tensor as an argument

result = torch.empty(5, 3)
torch.add(x, y, out=result)
print(result)

Output:

tensor([[-0.1859,  1.3970,  0.5236],
        [ 2.3854,  0.0707,  2.1970],
        [-0.3587,  1.2359,  1.8951],
        [-0.1189, -0.1376,  0.4647],
        [-1.8968,  2.0164,  0.1092]])

Addition: in-place (any operation that mutates a tensor in place is suffixed with _, e.g. add_(), copy_())

# adds x to y
y.add_(x)
print(y)

Output:

tensor([[-0.1859,  1.3970,  0.5236],
        [ 2.3854,  0.0707,  2.1970],
        [-0.3587,  1.2359,  1.8951],
        [-0.1189, -0.1376,  0.4647],
        [-1.8968,  2.0164,  0.1092]])
2. Clamping

Reference: https://blog.csdn.net/weicao1990/article/details/93738722

Clamping filters a tensor's elements by range: values that fall outside the range are clipped to the boundary. It is commonly used for gradient clipping, i.e., handling vanishing or exploding gradients during training; in practice you can inspect the gradient's L2 norm with w.grad.norm(2) to decide whether clipping is needed.

Example code:

import torch
 
grad = torch.rand(2, 3) * 15  # random values in [0, 15)
print(grad.max(), grad.min(), grad.median())  # max, min, median
 
print(grad)
print(grad.clamp(10))  # lower bound 10: values below 10 become 10
print(grad.clamp(3, 10))  # clamp to [3, 10]: values below 3 become 3, above 10 become 10

Output:

tensor(14.7400) tensor(1.8522) tensor(10.5734)
tensor([[ 1.8522, 14.7400,  8.2445],
        [13.5520, 10.5734, 12.9756]])
tensor([[10.0000, 14.7400, 10.0000],
        [13.5520, 10.5734, 12.9756]])
tensor([[ 3.0000, 10.0000,  8.2445],
        [10.0000, 10.0000, 10.0000]])
3. Concatenation & splitting

torch.cat(tensors, dim=0, out=None) -> Tensor

>>> x = torch.randn(2, 3)
>>> x
tensor([[ 0.6580, -1.0969, -0.4614],
        [-0.1034, -0.5790,  0.1497]])

>>> torch.cat((x, x, x), 0)
tensor([[ 0.6580, -1.0969, -0.4614],
        [-0.1034, -0.5790,  0.1497],
        [ 0.6580, -1.0969, -0.4614],
        [-0.1034, -0.5790,  0.1497],
        [ 0.6580, -1.0969, -0.4614],
        [-0.1034, -0.5790,  0.1497]])

>>> torch.cat((x, x, x), 1)
tensor([[ 0.6580, -1.0969, -0.4614,  0.6580, -1.0969, -0.4614,  0.6580,
         -1.0969, -0.4614],
        [-0.1034, -0.5790,  0.1497, -0.1034, -0.5790,  0.1497, -0.1034,
         -0.5790,  0.1497]])

torch.stack(tensors, dim=0, out=None) -> Tensor

Concatenates a sequence of tensors along a new dimension (all tensors must have the same shape).

>>> x
tensor([[-1.2434, -0.1263, -0.0199, -0.4011],
        [ 1.6301, -0.8156,  1.3553,  0.6736],
        [ 0.0187,  1.4521,  1.3666,  0.8626],
        [ 0.5638,  1.8207, -0.1588,  1.9605]])

>>> torch.stack((x, x), dim=0)
tensor([[[-1.2434, -0.1263, -0.0199, -0.4011],
         [ 1.6301, -0.8156,  1.3553,  0.6736],
         [ 0.0187,  1.4521,  1.3666,  0.8626],
         [ 0.5638,  1.8207, -0.1588,  1.9605]],

        [[-1.2434, -0.1263, -0.0199, -0.4011],
         [ 1.6301, -0.8156,  1.3553,  0.6736],
         [ 0.0187,  1.4521,  1.3666,  0.8626],
         [ 0.5638,  1.8207, -0.1588,  1.9605]]])

torch.split(tensor, split_size_or_sections, dim=0)

>>> x
tensor([[-1.2434, -0.1263, -0.0199, -0.4011],
        [ 1.6301, -0.8156,  1.3553,  0.6736],
        [ 0.0187,  1.4521,  1.3666,  0.8626],
        [ 0.5638,  1.8207, -0.1588,  1.9605]])

>>> torch.split(x, (1, 2, 1), dim=0)
(tensor([[-1.2434, -0.1263, -0.0199, -0.4011]]),
 tensor([[ 1.6301, -0.8156,  1.3553,  0.6736],
         [ 0.0187,  1.4521,  1.3666,  0.8626]]),
 tensor([[ 0.5638,  1.8207, -0.1588,  1.9605]]))

torch.chunk(input, chunks, dim=0) -> List of Tensors

# split evenly into 4 chunks
>>> torch.chunk(x, 4, dim=0)
(tensor([[-1.2434, -0.1263, -0.0199, -0.4011]]),
 tensor([[ 1.6301, -0.8156,  1.3553,  0.6736]]),
 tensor([[0.0187, 1.4521, 1.3666, 0.8626]]),
 tensor([[ 0.5638,  1.8207, -0.1588,  1.9605]]))
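
Note that chunk may return fewer chunks than requested when the dimension size is not evenly divisible. With the same 4-row x, the chunk size becomes ceil(4 / 3) = 2, so only two chunks come back:

>>> torch.chunk(x, 3, dim=0)
(tensor([[-1.2434, -0.1263, -0.0199, -0.4011],
         [ 1.6301, -0.8156,  1.3553,  0.6736]]),
 tensor([[ 0.0187,  1.4521,  1.3666,  0.8626],
         [ 0.5638,  1.8207, -0.1588,  1.9605]]))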
IV. Other common operations
  • Converting between a tensor and a numpy ndarray
ndarray_0 = tensor_0.detach().numpy()

tensor_0 = torch.from_numpy(ndarray_0)
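
A minimal round-trip sketch; detach() removes the tensor from the autograd graph, and a CUDA tensor must be moved to the CPU first:

import numpy as np
import torch

t = torch.ones(2, 2, requires_grad=True)
nd = t.detach().numpy()      # tensor -> ndarray (shares memory on CPU)
back = torch.from_numpy(nd)  # ndarray -> tensor (shares memory)

# for a GPU tensor: t_gpu.detach().cpu().numpy()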
