We all know that torch.nn.Linear is, at its core, a linear transformation, y = xWᵀ + b. But how exactly does PyTorch apply the matrix multiplication?
When the input tensor is 2-D, the output is the input right-multiplied by a weight matrix of effective shape (in_features, out_features).
When the input tensor is 3-D or higher, the leading dimensions are kept unchanged, and the matrix formed by the last two dimensions is right-multiplied by the weight matrix.
Experiment 1
import torch

a = torch.tensor([[1, 2, 5, 3], [7, 6, 4, 8]], dtype=torch.float32)
print(a.shape)

class test(torch.nn.Module):
    def __init__(self):
        super(test, self).__init__()
        self.lin = torch.nn.Linear(4, 3)

    def forward(self, x):
        out = self.lin(x)
        return out

m = test()
b = m(a)
print(b.shape)
This runs normally and prints
torch.Size([2, 4])
torch.Size([2, 3])
If we instead pass the transpose of a, we get
RuntimeError: mat1 and mat2 shapes cannot be multiplied (4x2 and 4x3)
which confirms that the input is right-multiplied by the weight matrix.
Experiment 2
import torch

a = torch.rand([5, 8, 6], dtype=torch.float32)
print(a.shape)

class test(torch.nn.Module):
    def __init__(self):
        super(test, self).__init__()
        self.lin = torch.nn.Linear(6, 3)

    def forward(self, x):
        out = self.lin(x)
        return out

m = test()
b = m(a)
print(b.shape)
This prints
torch.Size([5, 8, 6])
torch.Size([5, 8, 3])
which confirms that the matrix formed by the last two dimensions is right-multiplied by the weight matrix, while the leading dimensions are preserved.