Loss: measures the gap between the network's actual output and the target, and provides the signal for updating the parameters via backpropagation (the gradient, grad).
L1Loss
import torch
from torch.nn import L1Loss
inputs = torch.tensor([1, 2, 3], dtype=torch.float32)
targets = torch.tensor([1, 2, 5], dtype=torch.float32)
inputs = torch.reshape(inputs, (1, 1, 1, 3))
targets = torch.reshape(targets, (1, 1, 1, 3))
loss = L1Loss()
result = loss(inputs, targets)
print(result)
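As a sanity check: with the default reduction='mean', L1Loss is the mean absolute error, so for the inputs above it should be (|1-1| + |2-2| + |3-5|) / 3 = 2/3. A minimal sketch verifying this by hand (the reshape from the example is omitted here, since L1Loss accepts the flat tensors directly):

```python
import torch
from torch.nn import L1Loss

inputs = torch.tensor([1, 2, 3], dtype=torch.float32)
targets = torch.tensor([1, 2, 5], dtype=torch.float32)

# Mean absolute error computed manually: (|1-1| + |2-2| + |3-5|) / 3 = 2/3
manual = (inputs - targets).abs().mean()
result = L1Loss()(inputs, targets)
print(manual.item(), result.item())  # both ≈ 0.6667
```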
MSELoss
import torch
from torch import nn
inputs = torch.tensor([1, 2, 3], dtype=torch.float32)
targets = torch.tensor([1, 2, 5], dtype=torch.float32)
inputs = torch.reshape(inputs, (1, 1, 1, 3))
targets = torch.reshape(targets, (1, 1, 1, 3))
loss_mse = nn.MSELoss()
result_mse = loss_mse(inputs, targets)
print(result_mse)
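Likewise, MSELoss with the default reduction='mean' is the mean squared error: ((1-1)² + (2-2)² + (3-5)²) / 3 = 4/3 ≈ 1.3333. A short sketch checking the value by hand:

```python
import torch
from torch import nn

inputs = torch.tensor([1, 2, 3], dtype=torch.float32)
targets = torch.tensor([1, 2, 5], dtype=torch.float32)

# Mean squared error computed manually: (0 + 0 + 2**2) / 3 = 4/3
manual = ((inputs - targets) ** 2).mean()
result_mse = nn.MSELoss()(inputs, targets)
print(manual.item(), result_mse.item())
```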
CrossEntropyLoss
import torch
from torch import nn
x = torch.tensor([0.1, 0.2, 0.3])
y = torch.tensor([1])
# batch_size = 1, 3 classes
x = torch.reshape(x, (1, 3))
loss_cross = nn.CrossEntropyLoss()
result_cross = loss_cross(x, y)
print(result_cross)
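CrossEntropyLoss combines LogSoftmax and NLLLoss in one step: for raw scores x and target class c, the loss is -x[c] + log(Σⱼ exp(x[j])). For x = [0.1, 0.2, 0.3] and target class 1 this gives -0.2 + log(e^0.1 + e^0.2 + e^0.3) ≈ 1.1019. A sketch verifying the formula against the module:

```python
import torch
from torch import nn

x = torch.tensor([[0.1, 0.2, 0.3]])  # shape (batch_size=1, num_classes=3)
y = torch.tensor([1])                # target class index

# CrossEntropyLoss(x, c) = -x[c] + log(sum_j exp(x[j]))
manual = -x[0, 1] + torch.log(torch.exp(x[0]).sum())
result_cross = nn.CrossEntropyLoss()(x, y)
print(manual.item(), result_cross.item())
```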
Backpropagation
Compute the gradients of the parameters, then choose a suitable optimizer that uses those gradients to update the parameters and reduce the loss.
result_loss = loss(outputs, targets)  # outputs come from the model's forward pass
result_loss.backward()                # fills in .grad for every parameter
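Putting the pieces together, one training step is: zero the old gradients, forward, compute the loss, backward, then let the optimizer update the parameters. A minimal sketch of this loop; the model, data shapes, and learning rate here are placeholder assumptions, not from the original notes:

```python
import torch
from torch import nn

# Hypothetical tiny model and batch, just to illustrate the step structure
model = nn.Linear(4, 3)                      # 4 input features, 3 classes
inputs = torch.randn(8, 4)                   # batch of 8 samples
targets = torch.randint(0, 3, (8,))          # random class labels

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

optimizer.zero_grad()                        # clear gradients from the previous step
outputs = model(inputs)                      # forward pass
result_loss = loss_fn(outputs, targets)      # measure the gap to the targets
result_loss.backward()                       # compute parameter gradients
optimizer.step()                             # update parameters to reduce the loss
```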