1. Data manipulation (see textbook p. 40 for details):
arange:
x = torch.arange(12)
print(x)
Output:
tensor([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])
shape:
x = torch.arange(12)
x_shape = x.shape
print(x_shape)
x = torch.tensor([[1,2,7],
[2,3,8]])
print(x.shape) # query the shape again; x_shape still holds the old torch.Size([12])
Output:
torch.Size([12])
torch.Size([2, 3])
numel:
x = torch.tensor([[1,2,7],
[2,3,8]])
x_nums = x.numel()
print(x_nums)
Output:
6
reshape:
x = torch.arange(12)
x_reshape = x.reshape([3,4])
print(x_reshape)
Output:
tensor([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
# You can write ([3,-1]) or ([-1,4]) instead; the -1 dimension (rows or columns) is inferred automatically
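The -1 shorthand in the comment can be verified directly; a minimal sketch:

```python
import torch

x = torch.arange(12)
# -1 lets reshape infer that dimension from the total number of elements
a = x.reshape(3, -1)   # inferred as (3, 4)
b = x.reshape(-1, 4)   # inferred as (3, 4)
print(a.shape, b.shape)
print(torch.equal(a, b))
```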
zeros, ones, randn:
x = torch.zeros([3,5])
print(x)
x = torch.ones([3,5])
print(x)
x = torch.randn([3,5])
print(x)
Output:
tensor([[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.]])
tensor([[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.]])
tensor([[ 0.2923, -2.0526, 1.9701, 0.1231, 0.8780],
[ 0.7613, -1.0972, -1.0927, -0.4669, -0.3353],
[-0.9289, 0.6058, -0.8455, 2.6522, -1.6585]])
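randn samples from a standard normal distribution, so the numbers differ on every run; seeding the generator makes them reproducible. A small sketch:

```python
import torch

torch.manual_seed(0)        # fix the RNG state
x1 = torch.randn(3, 5)
torch.manual_seed(0)        # reset to the same state
x2 = torch.randn(3, 5)
print(torch.equal(x1, x2))  # the two draws are identical
```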
tensor:
x = torch.tensor([[1,2,7],[2,3,8]])
Output:
tensor([[1,2,7],
[2,3,8]])
2. Operators (see textbook p. 42 for details):
Elementwise operations between tensors:
x = torch.tensor([1.0, 2, 4, 8])
y = torch.tensor([2, 2, 2, 2])
x + y, x - y, x * y, x / y, x ** y # the ** operator is exponentiation
Output:
(tensor([ 3., 4., 6., 10.]),
tensor([-1., 0., 2., 6.]),
tensor([ 2., 4., 8., 16.]),
tensor([0.5000, 1.0000, 2.0000, 4.0000]),
tensor([ 1., 4., 16., 64.]))
exp():
x = torch.tensor([[1.0,2,3],
[4,5,6]]) # use floats: exp on an integer tensor depends on the PyTorch version
x = torch.exp(x)
print(x)
Output:
tensor([[ 2.7183, 7.3891, 20.0855],
[ 54.5981, 148.4132, 403.4288]])
Concatenating tensors:
x = torch.arange(12,dtype=torch.float32).reshape([3,-1])
y = torch.tensor([[1.0,1,1,1],
[2,2,2,2],
[3,3,3,3]])
cat_1 = torch.cat((x,y),dim=0) # dim=0: concatenate vertically (stack rows)
cat_2 = torch.cat((x,y),dim=1) # dim=1: concatenate horizontally (append columns)
print(cat_1)
print(cat_2)
Output:
tensor([[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.],
[ 1., 1., 1., 1.],
[ 2., 2., 2., 2.],
[ 3., 3., 3., 3.]])
tensor([[ 0., 1., 2., 3., 1., 1., 1., 1.],
[ 4., 5., 6., 7., 2., 2., 2., 2.],
[ 8., 9., 10., 11., 3., 3., 3., 3.]])
Checking whether the elements at corresponding positions of two tensors are equal:
x = torch.arange(12,dtype=torch.float32).reshape([3,-1])
y = torch.tensor([[1.0,1,1,1],
[2,2,2,2],
[3,3,3,3]])
print(x==y)
Output:
tensor([[False, True, False, False],
[False, False, False, False],
[False, False, False, False]])
Summing all elements into a single-element tensor:
x = torch.arange(12,dtype=torch.float32).reshape([3,-1])
print(x.sum())
Output:
tensor(66.)
3. Broadcasting mechanism (p. 44):
When a and b have different shapes, a + b first replicates each tensor appropriately (broadcasting), then adds elementwise:
a = torch.arange(3).reshape((3, 1))
b = torch.arange(2).reshape((1, 2))
a, b
Output:
(tensor([[0],
[1],
[2]]),
tensor([[0, 1]]))
a + b
Output:
tensor([[0, 1],
[1, 2],
[2, 3]])
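The broadcast above can be checked against an explicit expansion of both operands to the common shape (3, 2):

```python
import torch

a = torch.arange(3).reshape(3, 1)
b = torch.arange(2).reshape(1, 2)
# broadcasting behaves as if each size-1 dimension were copied to match
expanded = a.expand(3, 2) + b.expand(3, 2)
print(torch.equal(a + b, expanded))  # True
```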
4. Indexing and slicing (p. 45):
x = torch.arange(12)
print(x[-1])
x = torch.arange(12).reshape([3,-1])
print(x)
print(x[-1]) # when x is 1-D this is the last element; for a matrix it returns the last row
print(x[-1,-1])
print(x[:,-1])
print(x[0:2,2]) # the 3rd-column element of rows 1 and 2
print(x[0:2]) # for a matrix, [0:2] is equivalent to [0:2,:]
Output:
tensor(11)
tensor([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
tensor([ 8, 9, 10, 11])
tensor(11)
tensor([ 3, 7, 11])
tensor([2, 6])
tensor([[0, 1, 2, 3],
[4, 5, 6, 7]])
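Slicing also works on the left-hand side of an assignment to write several elements at once; a short sketch:

```python
import torch

x = torch.arange(12).reshape(3, -1)
x[0:2, :] = 12   # overwrite every element of the first two rows
x[1, 2] = 99     # then overwrite a single element
print(x)
```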
5. Saving memory (p. 46):
x = [1,2,3]
y = [2,3,4]
before = id(x)
x += y # not x = x + y (which would rebind x to a new list); += extends x in place, so the address is unchanged
# x[:] = x + y likewise writes into the existing list
print(x)
print(id(x)==before)
Output:
[1, 2, 3, 2, 3, 4]
True
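The same distinction applies to tensors: `Y = Y + X` allocates a new tensor and rebinds `Y`, while `Y[:] = Y + X` and `Y += X` write into the existing memory. A sketch:

```python
import torch

X = torch.ones(3, 4)
Y = torch.zeros(3, 4)

before = id(Y)
Y = Y + X                 # allocates a new tensor and rebinds Y
print(id(Y) == before)    # False: Y now points at new memory

before = id(Y)
Y[:] = Y + X              # writes the result into Y's existing buffer
Y += X                    # in-place addition, same buffer
print(id(Y) == before)    # True: no new allocation
```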
6. Converting to other Python objects:
X = torch.arange(12)
A = X.numpy()
B = torch.tensor(A)
type(A), type(B)
Output:
(numpy.ndarray, torch.Tensor)
a = torch.tensor([3.5])
a, a.item(), float(a), int(a)
Output:
(tensor([3.5000]), 3.5, 3.5, 3)
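One caveat worth noting: `numpy()` returns an array that shares memory with the tensor, while `torch.tensor(A)` makes a copy. A quick check:

```python
import torch

X = torch.arange(4, dtype=torch.float32)
A = X.numpy()        # A shares the underlying buffer with X
X += 1               # in-place change on the tensor...
print(A)             # ...is visible through the numpy array: [1. 2. 3. 4.]
B = torch.tensor(A)  # torch.tensor copies the data
A[0] = 100
print(B[0].item())   # B is unaffected by the later change: 1.0
```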