1. Distinguishing the different kinds of tensor multiplication
This note mainly records what the **@** operator does, since it keeps showing up when reading code.
For NumPy arrays and PyTorch tensors, element-wise multiplication uses `*` or `np.multiply()`, while matrix multiplication uses `np.dot()` or `np.matmul()`. The PyTorch counterparts are `torch.mul` and `*` (element-wise) versus `torch.mm`, `torch.matmul`, and the `@` operator (matrix multiplication). Note that `@` performs matrix multiplication, not a cross product.
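A minimal sketch of the difference between the two operators, using small hand-picked matrices:

```python
import torch

x = torch.tensor([[1., 2.],
                  [3., 4.]])
y = torch.tensor([[5., 6.],
                  [7., 8.]])

# Element-wise product: multiplies matching entries.
print(x * y)   # same as torch.mul(x, y)

# Matrix product: rows of x dotted with columns of y.
print(x @ y)   # same as torch.matmul(x, y) or torch.mm(x, y)
```

Here `x * y` gives `[[5., 12.], [21., 32.]]` while `x @ y` gives `[[19., 22.], [43., 50.]]`, which makes the distinction easy to spot at a glance.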
2. torch.mul
Multiplies two tensors element-wise, broadcasting their shapes if needed:
```python
a = torch.tensor([[ 0.2824],
                  [-0.3715],
                  [ 0.9088],
                  [-1.7601]])                              # shape (4, 1)
b = torch.tensor([[-0.1806, 2.0937, 1.0406, -1.7651]])    # shape (1, 4)

torch.mul(a, b)
# tensor([[-0.0510,  0.5912,  0.2939, -0.4985],
#         [ 0.0671, -0.7778, -0.3866,  0.6557],
#         [-0.1641,  1.9027,  0.9457, -1.6041],
#         [ 0.3179, -3.6851, -1.8316,  3.1069]])

torch.mul(b, a)   # same result: element-wise multiplication is commutative
# tensor([[-0.0510,  0.5912,  0.2939, -0.4985],
#         [ 0.0671, -0.7778, -0.3866,  0.6557],
#         [-0.1641,  1.9027,  0.9457, -1.6041],
#         [ 0.3179, -3.6851, -1.8316,  3.1069]])
```
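The (4, 4) result above comes from broadcasting: the size-1 dimensions of the (4, 1) and (1, 4) inputs are stretched to match before multiplying. A small sketch of that shape behavior, using random inputs rather than the values above:

```python
import torch

a = torch.randn(4, 1)
b = torch.randn(1, 4)

out = torch.mul(a, b)           # broadcasts (4, 1) * (1, 4) -> (4, 4)
print(out.shape)                # torch.Size([4, 4])

# The operator form and the swapped-argument form give the same result.
assert torch.equal(out, a * b)
assert torch.equal(out, torch.mul(b, a))
```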
3. torch.sum
torch.sum() sums the input tensor, either over all elements or along a given dimension; there are two usages (with and without `dim`). Note that to match the three outputs, the three sums should each be printed (the original printed `a` instead of `a3`):

```python
a = torch.ones(2, 3)

a1 = torch.sum(a)           # sum of all elements
a2 = torch.sum(a, dim=0)    # sum along dim 0 (down each column)
a3 = torch.sum(a, dim=1)    # sum along dim 1 (across each row)

print(a1)   # tensor(6.)
print(a2)   # tensor([2., 2., 2.])
print(a3)   # tensor([3., 3.])
```
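Summing along a dimension removes that dimension by default; the `keepdim` parameter keeps it as size 1, which is handy when the result must broadcast back against the original tensor. A minimal sketch:

```python
import torch

a = torch.arange(6.).reshape(2, 3)   # tensor([[0., 1., 2.],
                                     #         [3., 4., 5.]])

s0 = a.sum(dim=0)                 # shape (3,): tensor([3., 5., 7.])
s1 = a.sum(dim=1, keepdim=True)   # shape (2, 1): tensor([[ 3.], [12.]])

# keepdim lets the row sums broadcast back against a, e.g. to normalize rows:
normalized = a / s1
print(normalized.sum(dim=1))      # each row now sums to 1
```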