[Python]-[PyTorch] Study Notes 1: Tensors
Starting today, I'm learning PyTorch.
Preface
My study materials are the PyTorch Chinese documentation for 1.4 and 1.7, plus the official reference documentation.
I. Tensors
Import the packages:
import torch
import numpy as np
1.1 Initialization
- Creating a tensor directly from raw data
torch.tensor(data)
data = [[1,2],[3,4]]
x_data = torch.tensor(data)
- Creating a tensor from NumPy
torch.from_numpy(np_array)
When np_array changes, the tensor changes with it: they share the same memory.
np_array = np.array(data)
x_np = torch.from_numpy(np_array)
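The shared-memory behavior can be checked directly (a minimal sketch; the `np.add(..., out=...)` call is just one way to modify the array in place):

```python
import numpy as np
import torch

data = [[1, 2], [3, 4]]
np_array = np.array(data)
x_np = torch.from_numpy(np_array)

# modify the NumPy array in place: the tensor sees the change,
# because both objects share the same underlying buffer
np.add(np_array, 1, out=np_array)
print(x_np)  # values are now [[2, 3], [4, 5]]
```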
- Creating a tensor from another tensor
These functions keep the shape (and, by default, the dtype) of the original tensor but replace the data.
Fill with ones: torch.ones_like(x_data)
Fill with zeros: torch.zeros_like(x_data) (note: torch.empty_like(x_data) returns uninitialized memory, not zeros)
Fill with uniform random values in [0, 1): torch.rand_like(x_data)
Fill with standard-normal random values: torch.randn_like(x_data)
To change the data type, add the dtype argument: torch.rand_like(x_data, dtype=float) (Python's float maps to torch.float64)
Given a tensor x_data, you can also use:
x_data.new_ones(shape, dtype=float)
x_data.new_empty(shape, dtype=float)
x_data.new_zeros(shape, dtype=float)
x_data.new_full(shape, fill_value=1.2, dtype=float)
rand_like returns a tensor with the same size as the input, filled with random numbers from a uniform distribution on the interval [0, 1). Since the result is necessarily floating point, calling it on an integer-typed input raises an error; you must pass a dtype argument to convert.
new_full fills with a specified value and requires the fill_value argument.
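A quick sketch of the new_* family (the sizes and values below are my own choice); unlike the *_like functions, these take an explicit size rather than copying the source shape:

```python
import torch

x_data = torch.tensor([[1, 2], [3, 4]])

a = x_data.new_ones((2, 3))                      # inherits x_data's int64 dtype
b = x_data.new_zeros((2, 3), dtype=torch.float)  # dtype overridden
c = x_data.new_full((2, 3), fill_value=7)        # requires fill_value

print(a.dtype, b.dtype, c)
```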
x_ones1 = torch.ones_like(x_data)
print(f"Ones Tensor: \n {x_ones1} \n")
x_ones2 = torch.ones_like(x_data, dtype=float)
print(f"Ones Tensor: \n {x_ones2} \n")
x_rand2 = torch.rand_like(x_data, dtype=float)
print(f"Random Tensor: \n {x_rand2} \n")
x_rand1 = torch.rand_like(x_data)
print(f"Random Tensor: \n {x_rand1} \n")
Ones Tensor:
tensor([[1, 1],
[1, 1]])
Ones Tensor:
tensor([[1., 1.],
[1., 1.]], dtype=torch.float64)
Random Tensor:
tensor([[0.1761, 0.5174],
[0.3352, 0.0966]], dtype=torch.float64)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-14-c5eeb1f09fb7> in <module>
7 x_rand2 = torch.rand_like(x_data, dtype=float)
8 print(f"Random Tensor: \n {x_rand2} \n")
----> 9 x_rand1 = torch.rand_like(x_data)
10 print(f"Random Tensor: \n {x_rand1} \n")
RuntimeError: "check_uniform_bounds" not implemented for 'Long'
- Creating a tensor with a given shape
Uniform random in [0, 1): torch.rand(shape)
Standard-normal random: torch.randn(shape)
All ones: torch.ones(shape)
All zeros: torch.zeros(shape)
Uninitialized: torch.empty(shape)
shape is a tuple describing the tensor's dimensions. You can also pass dtype at creation time.
shape = (2,3,)
rand_tensor = torch.rand(shape)
ones_tensor = torch.ones(shape)
zeros_tensor = torch.zeros(shape)
empty_tensor = torch.empty(shape)
print(f"Random Tensor: \n {rand_tensor} \n")
print(f"Ones Tensor: \n {ones_tensor} \n")
print(f"Zeros Tensor: \n {zeros_tensor} \n")
print(f"Empty Tensor: \n {empty_tensor} \n")
rand_tensor = torch.rand(2,3)
ones_tensor = torch.ones(2,3)
zeros_tensor = torch.zeros(2,3)
empty_tensor = torch.empty(2,3)
print(f"Random Tensor: \n {rand_tensor} \n")
print(f"Ones Tensor: \n {ones_tensor} \n")
print(f"Zeros Tensor: \n {zeros_tensor} \n")
print(f"Empty Tensor: \n {empty_tensor} \n")
Random Tensor:
tensor([[0.3225, 0.6561, 0.9825],
[0.1559, 0.5414, 0.1780]])
Ones Tensor:
tensor([[1., 1., 1.],
[1., 1., 1.]])
Zeros Tensor:
tensor([[0., 0., 0.],
[0., 0., 0.]])
Empty Tensor:
tensor([[6.8664e-44, 6.7262e-44, 1.1771e-43],
[6.7262e-44, 7.8473e-44, 8.1275e-44]])
Random Tensor:
tensor([[0.2564, 0.1357, 0.2352],
[0.1781, 0.2608, 0.3306]])
Ones Tensor:
tensor([[1., 1., 1.],
[1., 1., 1.]])
Zeros Tensor:
tensor([[0., 0., 0.],
[0., 0., 0.]])
Empty Tensor:
tensor([[0., 0., 0.],
[0., 0., 0.]])
x = x_data.new_full(shape,fill_value=1.2,dtype=float)
print(x_data)
print(x)
tensor([[1, 2],
[3, 4]])
tensor([[1.2000, 1.2000, 1.2000],
[1.2000, 1.2000, 1.2000]], dtype=torch.float64)
1.2 Tensor Attributes
tensor = torch.rand(3,4)
- Shape
tensor.shape
tensor.size()
print(f"Shape of tensor: {tensor.shape}")
print(f"Size of tensor: {tensor.size()}")
Shape of tensor: torch.Size([3, 4])
Size of tensor: torch.Size([3, 4])
- Data type
tensor.dtype
- Device
tensor.device (cpu or cuda)
Use torch.cuda.is_available() to check whether a GPU is available, then move the tensor onto it:
if torch.cuda.is_available():
    tensor = tensor.to('cuda')
print(f"Device tensor is stored on: {tensor.device}")
Device tensor is stored on: cuda:0
II. Tensor Operations
1. Indexing and Slicing
tensor[row, col]
row/col can be a single number, selecting a specific row or column (indexed from 0).
It can also be the a:b form, meaning the half-open range [a, b); if a or b is omitted, the slice extends to the start or end.
tensor = torch.ones(4, 4)
print(tensor)
tensor[:,1] = 0 # set every value in column 1 (0-indexed) to 0
print(tensor)
tensor[:,2:3] = 2 # set column 2 (up to but not including column 3) to 2
print(tensor)
tensor[3,3:] = 3 # set row 3, from column 3 onward, to 3
print(tensor)
tensor([[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.]])
tensor([[1., 0., 1., 1.],
[1., 0., 1., 1.],
[1., 0., 1., 1.],
[1., 0., 1., 1.]])
tensor([[1., 0., 2., 1.],
[1., 0., 2., 1.],
[1., 0., 2., 1.],
[1., 0., 2., 1.]])
tensor([[1., 0., 2., 1.],
[1., 0., 2., 1.],
[1., 0., 2., 1.],
[1., 0., 2., 3.]])
2. Concatenating and Reshaping
2.1 Concatenation
- torch.cat()
torch.cat(tensors, dim=0, *, out=None) → Tensor
Concatenates the given sequence of tensors along the given dimension. All tensors must either have the same shape (except in the concatenating dimension) or be empty.
tensors: the tensors to concatenate, as a tuple or a list
dim: the dimension to concatenate along; dim=0 stacks vertically, dim=1 side by side
x = torch.randn(2,3)
y = torch.cat((x,x,x),0)
print(y)
z = torch.cat([x,x,x],1)
print(z)
w =torch.cat((x,y,z),0)
tensor([[ 0.0650, -1.2748, -0.1397],
[-0.2959, -1.8324, 2.5596],
[ 0.0650, -1.2748, -0.1397],
[-0.2959, -1.8324, 2.5596],
[ 0.0650, -1.2748, -0.1397],
[-0.2959, -1.8324, 2.5596]])
tensor([[ 0.0650, -1.2748, -0.1397, 0.0650, -1.2748, -0.1397, 0.0650, -1.2748,
-0.1397],
[-0.2959, -1.8324, 2.5596, -0.2959, -1.8324, 2.5596, -0.2959, -1.8324,
2.5596]])
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-43-72a7ca2ca93a> in <module>
4 z = torch.cat([x,x,x],1)
5 print(z)
----> 6 w =torch.cat((x,y,z),0)
RuntimeError: Sizes of tensors must match except in dimension 0. Got 3 and 9 in dimension 1 (The offending index is 2)
- torch.stack()
torch.stack(tensors, dim=0, *, out=None) → Tensor
tensors: the tensors to stack, as a tuple or a list; all must have the same shape
dim: the index of the new dimension to insert, between 0 and the number of dimensions of the input tensors (inclusive)
x = torch.randn(2,2)
print(x)
y = torch.stack((x,x,x),0)
print(y)
z = torch.stack([x,x,x],1)
print(z)
w =torch.stack((x,x,x),2)
print(w)
w =torch.stack((x,x,x),3)
print(w)
tensor([[-0.2376, 0.6719],
[ 1.2335, -0.2181]])
tensor([[[-0.2376, 0.6719],
[ 1.2335, -0.2181]],
[[-0.2376, 0.6719],
[ 1.2335, -0.2181]],
[[-0.2376, 0.6719],
[ 1.2335, -0.2181]]])
tensor([[[-0.2376, 0.6719],
[-0.2376, 0.6719],
[-0.2376, 0.6719]],
[[ 1.2335, -0.2181],
[ 1.2335, -0.2181],
[ 1.2335, -0.2181]]])
tensor([[[-0.2376, -0.2376, -0.2376],
[ 0.6719, 0.6719, 0.6719]],
[[ 1.2335, 1.2335, 1.2335],
[-0.2181, -0.2181, -0.2181]]])
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-54-b38881fbedef> in <module>
7 w =torch.stack((x,x,x),2)
8 print(w)
----> 9 w =torch.stack((x,x,x),3)
10 print(w)
IndexError: Dimension out of range (expected to be in range of [-3, 2], but got 3)
2.2 Reshaping
- tensor.view()
Reshapes the tensor; the total number of elements must stay the same before and after. view does not modify its own data: the returned tensor shares memory with the source, so changing one changes the other.
- tensor.resize_()
resize_(sizes, memory_format=torch.contiguous_format) → Tensor
Resizes the self tensor to the specified size. If the number of elements is larger than the current storage size, the underlying storage is resized to fit the new number of elements. If the number of elements is smaller, the underlying storage is not changed. Existing elements are preserved, but any new memory is uninitialized.
- torch.reshape(input, shape) → Tensor
x = torch.randn(4)
y = torch.reshape(x,(2,2))
print(x)
print(y)
y = torch.reshape(x,(2,3))
tensor([-0.9821, -0.2508, -0.6961, 1.1772])
tensor([[-0.9821, -0.2508],
[-0.6961, 1.1772]])
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-56-003b86123340> in <module>
3 print(x)
4 print(y)
----> 5 y = torch.reshape(x,(2,3))
RuntimeError: shape '[2, 3]' is invalid for input of size 4
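The shared-memory claim for view made above can be verified with a small check (variable names are my own):

```python
import torch

x = torch.arange(6)
v = x.view(2, 3)   # same storage, new shape
v[0, 0] = 100      # writing through the view...
print(x)           # ...also changes the source

# view (like reshape) requires the element count to match exactly
try:
    x.view(2, 4)
except RuntimeError as e:
    print("view failed:", e)
```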
3. Converting to NumPy
tensor.numpy()
If the tensor changes, the NumPy array changes with it (they share memory; this applies to CPU tensors).
t = torch.ones(5)
print(f"t: {t}")
n = t.numpy()
print(f"n: {n}")
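Continuing the snippet above, an in-place change on the tensor is visible through the NumPy array, since they share memory:

```python
import torch

t = torch.ones(5)
n = t.numpy()
t.add_(1)   # in-place add on the tensor
print(n)    # the NumPy view reflects the change: all values are now 2.0
```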
4. Math Operations
4.1 Multiplication
Here, tensor holds the result from the slicing example:
tensor([[1., 0., 2., 1.],
[1., 0., 2., 1.],
[1., 0., 2., 1.],
[1., 0., 2., 3.]])
- Element-wise multiplication
tensor.mul(tensor) 或 tensor * tensor
# element-wise product
print(f"tensor.mul(tensor): \n {tensor.mul(tensor)} \n")
# equivalent:
print(f"tensor * tensor: \n {tensor * tensor}")
tensor.mul(tensor):
tensor([[1., 0., 4., 1.],
[1., 0., 4., 1.],
[1., 0., 4., 1.],
[1., 0., 4., 9.]])
tensor * tensor:
tensor([[1., 0., 4., 1.],
[1., 0., 4., 1.],
[1., 0., 4., 1.],
[1., 0., 4., 9.]])
- Matrix multiplication
tensor.matmul(tensor.T) 或 tensor @ tensor.T
print(f"tensor.matmul(tensor.T): \n {tensor.matmul(tensor.T)} \n")
# equivalent:
print(f"tensor @ tensor.T: \n {tensor @ tensor.T}")
tensor.matmul(tensor.T):
tensor([[ 6., 6., 6., 8.],
[ 6., 6., 6., 8.],
[ 6., 6., 6., 8.],
[ 8., 8., 8., 14.]])
tensor @ tensor.T:
tensor([[ 6., 6., 6., 8.],
[ 6., 6., 6., 8.],
[ 6., 6., 6., 8.],
[ 8., 8., 8., 14.]])
4.2 Addition
torch.add(x, y, out=z) or x + y
x = torch.randn(4)
y = torch.randn(4)
print(x)
print(y)
print(f"torch.add(x , y):\n {torch.add(x , y)} \n")
print(f"x + y:\n {x + y} \n")
z = torch.empty(4)
torch.add(x , y, out = z)
print(z)
tensor([ 0.2084, -0.6861, -0.9305, 1.4040])
tensor([ 0.1397, 1.5095, -1.0400, 0.4442])
torch.add(x , y):
tensor([ 0.3481, 0.8233, -1.9704, 1.8483])
x + y:
tensor([ 0.3481, 0.8233, -1.9704, 1.8483])
tensor([ 0.3481, 0.8233, -1.9704, 1.8483])
5. In-place Operations
In-place operations have a _ suffix on the method name; for example, x.copy_(y) and x.t_() modify x itself.
For example, tensor.add_(5) adds 5 to every element:
print(tensor, "\n")
tensor.add_(5)
print(tensor)
tensor([[1., 0., 2., 1.],
[1., 0., 2., 1.],
[1., 0., 2., 1.],
[1., 0., 2., 3.]])
tensor([[6., 5., 7., 6.],
[6., 5., 7., 6.],
[6., 5., 7., 6.],
[6., 5., 7., 8.]])
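The other in-place methods mentioned above behave the same way; a minimal sketch with made-up values:

```python
import torch

x = torch.zeros(2, 2)
y = torch.tensor([[1., 2.], [3., 4.]])

x.copy_(y)   # copy y's values into x, in place
x.t_()       # transpose x in place
print(x)     # x is now [[1., 3.], [2., 4.]]
```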
Summary
The first step is to understand the basic concept of a tensor. Tensors support many more operations; see the documentation: https://pytorch.org/docs/master/tensors.html
Study Materials and References
Official documentation
1.https://pytorch.apachecn.org/docs/1.7/03.html
2.https://pytorch.apachecn.org/docs/1.4/blitz/tensor_tutorial.html
3.https://pytorch.org/docs/master/generated/torch.rand_like.html
4.https://pytorch.org/docs/master/tensors.html
Blog posts
1.https://blog.csdn.net/weixin_44054487/article/details/94380840
2.https://zhuanlan.zhihu.com/p/86984581