How to define a custom loss in PyTorch
Initializing an empty (zero) loss:
loss = torch.tensor(0).float().to(outs[0].device)
This pattern is also worth trying (e.g. appending a zero loss for samples with no valid targets, then averaging):
regression_losses.append(torch.tensor(0).float().cuda())
reg_loss = torch.stack(regression_losses).mean(dim=0, keepdim=True)
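A minimal runnable sketch of the two snippets above, using a hypothetical list of per-sample losses (the names `regression_losses` and the dummy targets are illustrative; in practice these come from your model). It runs on CPU, so `.cuda()` is replaced with a plain float tensor:

```python
import torch

# Hypothetical per-sample regression losses; None marks a sample with no valid target.
targets = [torch.tensor(1.5), torch.tensor(0.0), None]

regression_losses = []
for t in targets:
    if t is None:
        # No valid target: append a zero loss so torch.stack below still
        # works on a non-empty list of same-shaped tensors.
        regression_losses.append(torch.tensor(0).float())
    else:
        regression_losses.append((t - 1.0) ** 2)

# Stack into one tensor and average; keepdim=True keeps a 1-element tensor.
reg_loss = torch.stack(regression_losses).mean(dim=0, keepdim=True)
print(reg_loss.shape)  # torch.Size([1])
```

The zero tensor must be created on the same device (and dtype) as the real losses, which is exactly what `.to(outs[0].device)` or `.cuda()` achieves in the original snippets.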
Implementing a custom MSE loss:
class My_loss(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x, y):
        return torch.mean(torch.pow((x - y), 2))
Usage:
criterion = My_loss()
loss = criterion(outputs, targets)
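As a sanity check, the custom class above should match PyTorch's built-in `nn.MSELoss` (whose default `reduction='mean'` computes the same elementwise mean of squared errors). A self-contained comparison, with random `outputs`/`targets` standing in for real model outputs:

```python
import torch
import torch.nn as nn

class My_loss(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x, y):
        # Mean of elementwise squared differences.
        return torch.mean(torch.pow((x - y), 2))

outputs = torch.randn(4, 3)  # stand-in for model predictions
targets = torch.randn(4, 3)  # stand-in for ground truth

criterion = My_loss()
loss = criterion(outputs, targets)

# Built-in equivalent (reduction='mean' is the default).
builtin = nn.MSELoss()(outputs, targets)
print(torch.allclose(loss, builtin))  # True
```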
Treat the loss as just another layer: write the loss computation in the forward method, and there is no need to define a backward method, since PyTorch's autograd derives the gradients automatically.
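To illustrate that autograd handles the backward pass, here is a short sketch: calling `.backward()` on the custom loss populates `x.grad` with the analytic gradient of mean squared error, d/dx mean((x - y)^2) = 2(x - y)/n, without any hand-written backward code:

```python
import torch
import torch.nn as nn

class My_loss(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x, y):
        return torch.mean(torch.pow((x - y), 2))

x = torch.randn(5, requires_grad=True)  # stand-in for model output
y = torch.zeros(5)                      # stand-in for targets

loss = My_loss()(x, y)
loss.backward()  # autograd builds the gradient; no backward() was defined

# x.grad now equals 2 * (x - y) / 5, the analytic MSE gradient.
print(x.grad)
```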