1. L1Loss
Creates a criterion that measures the mean absolute error (MAE) between each element in the input :math:`x` and target :math:`y`.
The unreduced loss (with :attr:`reduction` set to ``'none'``) is :math:`\ell(x, y) = L = \{l_1, \dots, l_N\}^\top`, :math:`l_n = |x_n - y_n|`,
where :math:`N` is the batch size. If :attr:`reduction` is not ``'none'`` (default ``'mean'``), then :math:`\ell(x, y) = \operatorname{mean}(L)` for ``'mean'`` and :math:`\operatorname{sum}(L)` for ``'sum'``.
import torch
import torch.nn as nn
loss = nn.L1Loss()
input = torch.randn(3, 5, requires_grad=True)
target = torch.randn(3, 5)
output = loss(input, target)
output.backward()
2. NLLLoss
The negative log likelihood loss. It is useful for training a classification problem with C classes.
>>> m = nn.LogSoftmax(dim=1)
>>> loss = nn.NLLLoss()
>>> # input is of size N x C = 3 x 5
>>> input = torch.randn(3, 5, requires_grad=True)
>>> # each element in target has to have 0 <= value < C
>>> target = torch.tensor([1, 0, 4])
>>> output = loss(m(input), target)
>>> output.backward()
>>>
>>>
>>> # 2D loss example (used, for example, with image inputs)
>>> N, C = 5, 4
>>> loss = nn.NLLLoss()
>>> # input is of size N x C x height x width
>>> data = torch.randn(N, 16, 10, 10)
>>> conv = nn.Conv2d(16, C, (3, 3))
>>> m = nn.LogSoftmax(dim=1)
>>> # each element in target has to have 0 <= value < C
>>> target = torch.empty(N, 8, 8, dtype=torch.long).random_(0, C)
>>> output = loss(m(conv(data)), target)
>>> output.backward()
3. PoissonNLLLoss
Negative log likelihood loss with Poisson distribution of target.
>>> loss = nn.PoissonNLLLoss()
>>> log_input = torch.randn(5, 2, requires_grad=True)
>>> target = torch.randn(5, 2)
>>> output = loss(log_input, target)
>>> output.backward()
4. KLDivLoss
The Kullback-Leibler divergence loss. KL divergence is a useful distance measure for continuous distributions, and is often useful when performing direct regression over the space of (discretely sampled) continuous output distributions.
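The original gives no example here, so the following is a minimal sketch in the style of the other sections; the shapes and ``reduction='batchmean'`` are illustrative choices. Note that the input is expected to contain log-probabilities and the target plain probabilities.
>>> loss = nn.KLDivLoss(reduction='batchmean')
>>> input = torch.randn(3, 5, requires_grad=True)
>>> # input must be log-probabilities, target plain probabilities
>>> log_probs = nn.LogSoftmax(dim=1)(input)
>>> target = nn.Softmax(dim=1)(torch.randn(3, 5))
>>> output = loss(log_probs, target)
>>> output.backward()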
5. MSELoss
Creates a criterion that measures the mean squared error (squared L2 norm) between each element in the input :math:`x` and target :math:`y`.
>>> loss = nn.MSELoss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> output = loss(input, target)
>>> output.backward()
6. BCELoss
Creates a criterion that measures the binary cross entropy between the target and the output.
>>> m = nn.Sigmoid()
>>> loss = nn.BCELoss()
>>> input = torch.randn(3, requires_grad=True)
>>> target = torch.empty(3).random_(2)
>>> output = loss(m(input), target)
>>> output.backward()
7. BCEWithLogitsLoss
This loss combines a `Sigmoid` layer and the `BCELoss` in one single class. This version is more numerically stable than using a plain `Sigmoid` followed by a `BCELoss`, because by combining the operations into one layer we take advantage of the log-sum-exp trick for numerical stability.
>>> target = torch.ones([10, 64], dtype=torch.float32)  # 64 classes, batch size = 10
>>> output = torch.full([10, 64], 0.999) # A prediction (logit)
>>> pos_weight = torch.ones([64]) # All weights are equal to 1
>>> criterion = torch.nn.BCEWithLogitsLoss(pos_weight=pos_weight)
>>> criterion(output, target) # -log(sigmoid(0.999))
tensor(0.3135)
>>> loss = nn.BCEWithLogitsLoss()
>>> input = torch.randn(3, requires_grad=True)
>>> target = torch.empty(3).random_(2)
>>> output = loss(input, target)
>>> output.backward()
8. HingeEmbeddingLoss
Measures the loss given an input tensor :math:`x` and a labels tensor :math:`y` (containing 1 or -1).
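No example was given in the original; here is a minimal sketch following the pattern of the other sections, with assumed shapes. The required -1/1 labels are built from a random 0/1 tensor.
>>> loss = nn.HingeEmbeddingLoss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> # map random {0, 1} entries to the required {-1, 1} labels
>>> target = torch.empty(3, 5).random_(2) * 2 - 1
>>> output = loss(input, target)
>>> output.backward()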
9. MultiLabelMarginLoss
Creates a criterion that optimizes a multi-class multi-classification hinge loss (margin-based loss) between input :math:`x` (a 2D mini-batch `Tensor`) and output :math:`y` (a 2D `Tensor` of target class indices).
>>> loss = nn.MultiLabelMarginLoss()
>>> x = torch.FloatTensor([[0.1, 0.2, 0.4, 0.8]])
>>> # for target y, only consider labels 3 and 0, not after label -1
>>> y = torch.LongTensor([[3, 0, -1, 1]])
>>> # 0.25 * ((1-(0.1-0.2)) + (1-(0.1-0.4)) + (1-(0.8-0.2)) + (1-(0.8-0.4)))
>>> loss(x, y)
tensor(0.8500)
10. SmoothL1Loss
Creates a criterion that uses a squared term if the absolute element-wise error falls below 1 and an L1 term otherwise. It is less sensitive to outliers than `MSELoss`, and in some cases prevents exploding gradients (see, e.g., the "Fast R-CNN" paper by Ross Girshick).
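No example appears in the original; a minimal sketch with assumed shapes, mirroring the MSELoss example above:
>>> loss = nn.SmoothL1Loss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> output = loss(input, target)
>>> output.backward()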
11. SoftMarginLoss
Creates a criterion that optimizes a two-class classification logistic loss between input tensor :math:`x` and target tensor :math:`y` (containing 1 or -1).
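Again no example was given; a minimal sketch with assumed shapes:
>>> loss = nn.SoftMarginLoss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> # targets must be -1 or 1
>>> target = torch.empty(3, 5).random_(2) * 2 - 1
>>> output = loss(input, target)
>>> output.backward()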
12. CrossEntropyLoss
This criterion combines nn.LogSoftmax and nn.NLLLoss in one single class.
>>> loss = nn.CrossEntropyLoss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.empty(3, dtype=torch.long).random_(5)
>>> output = loss(input, target)
>>> output.backward()
13. MultiLabelSoftMarginLoss
Creates a criterion that optimizes a multi-label one-versus-all loss based on max-entropy, between input :math:`x` and target :math:`y` of size (N, C).
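A minimal sketch (shapes assumed); each row of the target marks, with 0/1 entries, which of the C labels apply to that sample:
>>> loss = nn.MultiLabelSoftMarginLoss()
>>> input = torch.randn(3, 4, requires_grad=True)
>>> # one 0/1 indicator per class and sample
>>> target = torch.empty(3, 4).random_(2)
>>> output = loss(input, target)
>>> output.backward()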
14. CosineEmbeddingLoss
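The original leaves this section empty. CosineEmbeddingLoss measures the loss given input tensors :math:`x_1`, :math:`x_2` and a label tensor :math:`y` containing 1 or -1, using the cosine distance; it is used for learning whether two inputs are similar or dissimilar. A minimal sketch; the margin of 0.5 and the shapes are assumed:
>>> loss = nn.CosineEmbeddingLoss(margin=0.5)
>>> input1 = torch.randn(3, 5, requires_grad=True)
>>> input2 = torch.randn(3, 5, requires_grad=True)
>>> # y = 1 marks similar pairs, y = -1 dissimilar pairs
>>> target = torch.empty(3).random_(2) * 2 - 1
>>> output = loss(input1, input2, target)
>>> output.backward()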
15. MarginRankingLoss
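Also empty in the original. MarginRankingLoss measures the loss given inputs :math:`x_1`, :math:`x_2` and a label tensor :math:`y` containing 1 or -1: if y = 1, x1 should be ranked higher than x2, and the other way around for y = -1. A minimal sketch with an assumed margin:
>>> loss = nn.MarginRankingLoss(margin=0.1)
>>> input1 = torch.randn(3, requires_grad=True)
>>> input2 = torch.randn(3, requires_grad=True)
>>> # y = 1 means input1 should score higher than input2
>>> target = torch.empty(3).random_(2) * 2 - 1
>>> output = loss(input1, input2, target)
>>> output.backward()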
16. MultiMarginLoss
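MultiMarginLoss optimizes a multi-class classification hinge loss (margin-based loss) between input :math:`x` (a 2D mini-batch `Tensor`) and target :math:`y` (a 1D tensor of class indices). A minimal sketch, shapes assumed:
>>> loss = nn.MultiMarginLoss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> # one class index in [0, C) per sample
>>> target = torch.empty(3, dtype=torch.long).random_(5)
>>> output = loss(input, target)
>>> output.backward()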
17. TripletMarginLoss
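TripletMarginLoss measures the triplet loss given anchor, positive and negative input tensors, pulling the anchor towards the positive example and away from the negative one. A minimal sketch; batch size, embedding size and margin are assumed:
>>> loss = nn.TripletMarginLoss(margin=1.0, p=2)
>>> anchor = torch.randn(10, 128, requires_grad=True)
>>> positive = torch.randn(10, 128, requires_grad=True)
>>> negative = torch.randn(10, 128, requires_grad=True)
>>> output = loss(anchor, positive, negative)
>>> output.backward()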
18. CTCLoss
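CTCLoss computes the Connectionist Temporal Classification loss between a sequence of log-probabilities and a target sequence, summing over the possible alignments; index 0 is reserved for the blank label. A minimal sketch with assumed sequence lengths, class count and batch size:
>>> T, C, N, S = 50, 20, 16, 30  # input length, classes (incl. blank), batch size, max target length
>>> ctc_loss = nn.CTCLoss()
>>> # log-probabilities of shape (T, N, C)
>>> input = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_()
>>> # random targets; 0 is reserved for the blank label
>>> target = torch.randint(low=1, high=C, size=(N, S), dtype=torch.long)
>>> input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long)
>>> target_lengths = torch.randint(low=10, high=S, size=(N,), dtype=torch.long)
>>> output = ctc_loss(input, target, input_lengths, target_lengths)
>>> output.backward()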