Transpose:
A.T
Symmetric matrix check:
A == A.T
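A minimal sketch of the check above; the matrix B is a hypothetical example, built so it is symmetric by construction:

```python
import torch

# Any matrix plus its own transpose is symmetric, so use that to build a test case
A = torch.arange(9).reshape(3, 3)
B = A + A.T
print(B == B.T)             # elementwise comparison: a matrix of all True
print(torch.equal(B, B.T))  # single boolean: True
```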
3-D tensor:
x = torch.arange(24).reshape(2,3,4)
print(x)
Output:
tensor([[[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]],
[[12, 13, 14, 15],
[16, 17, 18, 19],
[20, 21, 22, 23]]])
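The axis lengths of this tensor can be read off with .shape, and indexing the first axis selects one 3x4 block; a quick check:

```python
import torch

x = torch.arange(24).reshape(2, 3, 4)
print(x.shape)  # torch.Size([2, 3, 4])
print(x[1])     # the second 3x4 block, holding the values 12..23
```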
Reduction (summing away axes):
# If A.shape = [2, 5, 4]:
# sum over axis 0   -> shape [5, 4]
# sum over axis 1   -> shape [2, 4]
# sum over axes 1,2 -> shape [2]
# keepdims=True prevents the summed axis from being dropped
A = torch.arange(12).reshape(3,4)
print(A)
A_sum_axis0 = A.sum(axis=0) # axis=0: collapse the rows, giving column sums
print(A_sum_axis0)
A_sum_axis1 = A.sum(axis=1) # axis=1: collapse the columns, giving row sums
print(A_sum_axis1)
A.sum(axis=[0, 1]) # same result as A.sum()
Output:
tensor([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
tensor([12, 15, 18, 21])
tensor([ 6, 22, 38])
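The keepdims=True mentioned in the comments keeps the reduced axis as a size-1 dimension, so the result can still broadcast against A; a minimal sketch:

```python
import torch

A = torch.arange(12).reshape(3, 4)
print(A.sum(axis=1).shape)                 # torch.Size([3])    -- axis dropped
print(A.sum(axis=1, keepdims=True).shape)  # torch.Size([3, 1]) -- axis kept
```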
Mean:
A = torch.arange(20, dtype=torch.float32).reshape(5, 4) # mean requires a float dtype
A.mean(), A.sum() / A.numel()
A.mean(axis=0), A.sum(axis=0) / A.shape[0] # mean of each column
Output:
(tensor(9.5000), tensor(9.5000))
(tensor([ 8., 9., 10., 11.]), tensor([ 8., 9., 10., 11.]))
Non-reducing sum (broadcasting then lets us compute A / sum_A):
sum_A = A.sum(axis=1, keepdims=True)
print(sum_A)
print(A / sum_A)
Output:
tensor([[ 6.],
[22.],
[38.],
[54.],
[70.]])
tensor([[0.0000, 0.1667, 0.3333, 0.5000],
[0.1818, 0.2273, 0.2727, 0.3182],
[0.2105, 0.2368, 0.2632, 0.2895],
[0.2222, 0.2407, 0.2593, 0.2778],
[0.2286, 0.2429, 0.2571, 0.2714]])
Cumulative sum of A along an axis:
A = torch.arange(12).reshape(3,4)
print(A)
print(A.cumsum(axis=0))
Output:
tensor([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
tensor([[ 0, 1, 2, 3],
[ 4, 6, 8, 10],
[12, 15, 18, 21]])
Dot product:
x = torch.arange(4, dtype=torch.float32)
y = torch.ones(4, dtype = torch.float32)
x, y, torch.dot(x, y) # dot only takes two 1-D vectors
torch.sum(x * y) # equivalent: elementwise product, then sum
Output:
(tensor([0., 1., 2., 3.]), tensor([1., 1., 1., 1.]), tensor(6.))
tensor(6.)
Matrix-vector product (Ax = b):
A = torch.arange(12).reshape(3,4)
x = torch.arange(4)
print(A,x)
B = torch.mv(A, x) # m = matrix, v = vector
print(B)
Output:
tensor([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]]) tensor([0, 1, 2, 3])
tensor([14, 38, 62])
Matrix multiplication:
B = torch.ones(4, 3)
torch.mm(A.float(), B) # dtypes must match: A above is integer, B is float
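A self-contained sketch of torch.mm with these shapes; since B is all ones, each row of the product is just that row's sum in A repeated:

```python
import torch

A = torch.arange(12, dtype=torch.float32).reshape(3, 4)
B = torch.ones(4, 3)
C = torch.mm(A, B)  # (3, 4) @ (4, 3) -> (3, 3)
print(C)
# tensor([[ 6.,  6.,  6.],
#         [22., 22., 22.],
#         [38., 38., 38.]])
```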
Norms:
u = torch.tensor([3.0, -4.0])
torch.norm(u) # input must be float; L2 norm
torch.abs(u).sum() # L1 norm
# torch.norm(u, 2) gives the same L2 norm
Output:
tensor(5.)
tensor(7.)
# Frobenius norm
torch.norm(torch.ones((4, 9))) # square root of the sum of squared matrix entries
Output:
tensor(6.)
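The Frobenius norm is just the L2 norm of the flattened matrix, so it can be checked by hand:

```python
import torch

M = torch.ones(4, 9)
print(torch.norm(M))               # sqrt(4 * 9) = 6
print(torch.sqrt((M ** 2).sum()))  # same value, computed explicitly
```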