PyTorch Learning Notes 9: Loss Functions

Loss Functions (Part 1)

The concept of a loss function

A loss function measures the discrepancy between the model's output and the ground-truth label.

When discussing loss functions, three related terms frequently appear: loss function, cost function, and objective function. How do they differ, and how are they related?

The loss function measures the error of a single sample: Loss = f(\hat{y}, y)

The cost function is the average error over the whole sample set: Cost = \frac{1}{N}\sum_{i=1}^{N} f(\hat{y}_i, y_i)

The objective function is the broader concept: it usually consists of the cost plus a regularization term, Obj = Cost + Regularization
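As a minimal sketch of the three concepts (the tensors and the regularization coefficient below are made up for illustration, not from the original text):

import torch

y_hat = torch.tensor([0.9, 0.2, 0.7])    # hypothetical model outputs
y = torch.tensor([1.0, 0.0, 1.0])        # hypothetical ground-truth labels

loss = (y_hat - y) ** 2                  # Loss: one value per sample
cost = loss.mean()                       # Cost: average over the sample set

w = torch.tensor([0.5, -1.2])            # some hypothetical model parameters
obj = cost + 1e-2 * (w ** 2).sum()       # Objective = Cost + L2 regularization

print(loss, cost, obj)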

The loss base class in PyTorch:

class _Loss(Module):
    def __init__(self, size_average=None, reduce=None, reduction='mean'):
        super(_Loss, self).__init__()
        if size_average is not None or reduce is not None:
            self.reduction = _Reduction.legacy_get_string(size_average, reduce)
        else:
            self.reduction = reduction

_Loss inherits from Module, so a loss behaves just like a network layer. Its constructor takes three arguments; size_average and reduce are deprecated, and their functionality is covered by reduction.

 

1. nn.CrossEntropyLoss

Purpose: combines nn.LogSoftmax() and nn.NLLLoss() to compute the cross-entropy loss

nn.CrossEntropyLoss(weight=None,size_average=None,ignore_index=-100,reduce=None,reduction="mean")

Main parameters:

  • weight: per-class weight applied to the loss
  • ignore_index: a class index to ignore
  • reduction: reduction mode, one of none/sum/mean
    • none: compute the loss per element
    • sum: sum all elements, returning a scalar
    • mean: (weighted) average, returning a scalar

Cross entropy = entropy + relative entropy (KL divergence)

Entropy describes the uncertainty of an event: the more uncertain the event, the larger its entropy. For example, "will it rain tomorrow" has a much larger entropy than "will the sun rise tomorrow", because whether it rains is highly uncertain, while the sunrise is certain regardless. Entropy is the expectation of self-information:

H(P) = E_{x \sim P}[I(x)] = -\sum_{i=1}^{N} P(x_i)\log P(x_i)

Self-information measures the uncertainty of a single outcome or event, where p(x) is the probability of that event:

I(x) = -\log p(x)
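As a quick numerical illustration of this intuition (a sketch of my own with made-up probabilities), a highly uncertain event has much larger entropy than a near-certain one:

import numpy as np

def entropy(p):
    """H(P) = -sum_i P(x_i) * log(P(x_i)) for a discrete distribution p."""
    p = np.asarray(p, dtype=float)
    return float(-np.sum(p * np.log(p)))

print(entropy([0.5, 0.5]))        # "will it rain tomorrow?"  -> ~0.693 (high uncertainty)
print(entropy([0.9999, 0.0001]))  # "will the sun rise?"      -> ~0.001 (almost certain)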

Relative entropy, also called the KL divergence, measures the difference between two distributions, i.e. a kind of "distance" between them; however, it is not a true distance metric because it is not symmetric:

D_{KL}(P \| Q) = E_{x \sim P}\left[\log \frac{P(x)}{Q(x)}\right]

Here P is the true distribution and Q is the distribution output by the model; we use Q to fit and approximate P, which is why the measure is asymmetric.
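To see the asymmetry concretely, here is a small check (illustrative distributions only, not from the original text) that D_KL(P||Q) and D_KL(Q||P) generally differ:

import numpy as np

def kl(p, q):
    """D_KL(P||Q) = sum_i P(x_i) * log(P(x_i) / Q(x_i))."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

P = np.array([0.9, 0.1])   # "true" distribution (hypothetical)
Q = np.array([0.6, 0.4])   # model's distribution (hypothetical)
print(kl(P, Q), kl(Q, P))  # the two values differ, so KL is not a true distance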

Cross entropy:

H(P, Q) = -\sum_{i=1}^{N} P(x_i)\log Q(x_i)

D_{KL}(P \| Q) = E_{x \sim P}\left[\log \frac{P(x)}{Q(x)}\right]

             = E_{x \sim P}[\log P(x) - \log Q(x)]

             = \sum_{i=1}^{N} P(x_i)[\log P(x_i) - \log Q(x_i)]

             = \sum_{i=1}^{N} P(x_i)\log P(x_i) - \sum_{i=1}^{N} P(x_i)\log Q(x_i)

             = H(P, Q) - H(P)

Hence H(P, Q) = D_{KL}(P \| Q) + H(P). Minimizing the cross entropy is therefore equivalent to minimizing the relative entropy: P is the true distribution of the training samples and Q is the distribution output by the model, and since the training set is fixed, H(P) is a constant that can be ignored during optimization.
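This decomposition can be verified numerically with toy distributions (a small sketch added for illustration):

import numpy as np

P = np.array([0.9, 0.1])
Q = np.array([0.6, 0.4])

H_P = -np.sum(P * np.log(P))        # entropy H(P)
H_PQ = -np.sum(P * np.log(Q))       # cross entropy H(P, Q)
D_KL = np.sum(P * np.log(P / Q))    # relative entropy D_KL(P||Q)

print(np.isclose(H_PQ, D_KL + H_P))  # True: H(P,Q) = D_KL(P||Q) + H(P)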

H(P, Q) = -\sum_{i=1}^{N} P(x_i)\log Q(x_i)

For logits x and target class class, nn.CrossEntropyLoss therefore computes

loss(x, class) = -\log\left(\frac{\exp(x[class])}{\sum_j \exp(x[j])}\right) = -x[class] + \log\left(\sum_j \exp(x[j])\right)

and, with a class weight,

loss(x, class) = weight[class]\left(-x[class] + \log\left(\sum_j \exp(x[j])\right)\right)

The following code illustrates this:

import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np

# fake data
inputs = torch.tensor([[1, 2], [1, 3], [1, 3]], dtype=torch.float)
target = torch.tensor([0, 1, 1], dtype=torch.long)
flag = 1
if flag:
    # def loss function
    loss_f_none = nn.CrossEntropyLoss(weight=None, reduction='none')
    loss_f_sum = nn.CrossEntropyLoss(weight=None, reduction='sum')
    loss_f_mean = nn.CrossEntropyLoss(weight=None, reduction='mean')

    # forward
    loss_none = loss_f_none(inputs, target)
    loss_sum = loss_f_sum(inputs, target)
    loss_mean = loss_f_mean(inputs, target)

    # view
    print("Cross Entropy Loss:\n ", loss_none, loss_sum, loss_mean)

These are the losses under the three reduction modes: the first computes a loss per element, the second sums all of them into a scalar, and the third takes the average.

Let's verify by hand (only for the first sample):

# --------------------------------- compute by hand
# flag = 0
flag = 1
if flag:

    idx = 0

    input_1 = inputs.detach().numpy()[idx]      # [1, 2]
    target_1 = target.numpy()[idx]              # 0

    # first term: the logit of the target class
    x_class = input_1[target_1]

    # second term: log of the sum of exponentials over all classes
    sigma_exp_x = np.sum(list(map(np.exp, input_1)))
    log_sigma_exp_x = np.log(sigma_exp_x)

    # loss of the first sample
    loss_1 = -x_class + log_sigma_exp_x

    print("loss of the first sample: ", loss_1)

Next, the weight parameter. weight is a vector with as many elements as there are classes; a weight must be set for every class.

# ----------------------------------- weight -----------------------------------
# flag = 0
flag = 1
if flag:
    # def loss function
    weights = torch.tensor([1, 2], dtype=torch.float)
    # weights = torch.tensor([0.7, 0.3], dtype=torch.float)

    loss_f_none_w = nn.CrossEntropyLoss(weight=weights, reduction='none')
    loss_f_sum = nn.CrossEntropyLoss(weight=weights, reduction='sum')
    loss_f_mean = nn.CrossEntropyLoss(weight=weights, reduction='mean')

    # forward
    loss_none_w = loss_f_none_w(inputs, target)
    loss_sum = loss_f_sum(inputs, target)
    loss_mean = loss_f_mean(inputs, target)

    # view
    print("\nweights: ", weights)
    print(loss_none_w, loss_sum, loss_mean)

The earlier output was the unweighted loss; this is the weighted one. Class 0 has weight 1, so its loss is unchanged, while class 1 has weight 2, so its loss is doubled. The mean is a weighted mean: 1.8210 = 1.3133 + 0.2539 + 0.2539, and 0.3642 = 1.8210 / 5. With weights, the mean is no longer divided by the number of samples but by the total number of weight shares (here 1 + 2 + 2 = 5).

The following hand computation shows how the weight shares work:

# --------------------------------- compute by hand
# flag = 0
flag = 1
if flag:
    weights = torch.tensor([1, 2], dtype=torch.float)
    weights_all = np.sum(list(map(lambda x: weights.numpy()[x], target.numpy())))  # [0, 1, 1]  # [1 2 2]

    mean = 0
    loss_sep = loss_none.detach().numpy()
    for i in range(target.shape[0]):

        x_class = target.numpy()[i]
        tmp = loss_sep[i] * (weights.numpy()[x_class] / weights_all)
        mean += tmp

    print(mean)

Stepping through in debug mode, weights_all equals 5 (the weights of the targets [0, 1, 1] are [1, 2, 2]), and the hand-computed mean is again 0.3642.

2. nn.NLLLoss

nn.NLLLoss(weight=None,size_average=None,ignore_index=-100,reduce=None,reduction="mean")

Purpose: applies only the negation (and class weight) step of the negative log-likelihood loss

Main parameters:

  • weight: per-class weight applied to the loss
  • ignore_index: a class index to ignore
  • reduction: reduction mode, one of none/sum/mean
    • none: compute the loss per element
    • sum: sum all elements, returning a scalar
    • mean: (weighted) average, returning a scalar

Its formula is:

l(x, y) = L = \{l_1, l_2, ..., l_N\}

l_n = -w_{y_n} x_{n, y_n}

import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np

# fake data
inputs = torch.tensor([[1, 2], [1, 3], [1, 3]], dtype=torch.float)
target = torch.tensor([0, 1, 1], dtype=torch.long)
# ----------------------------------- 2 NLLLoss -----------------------------------
# flag = 0
flag = 1
if flag:

    weights = torch.tensor([1, 1], dtype=torch.float)

    loss_f_none_w = nn.NLLLoss(weight=weights, reduction='none')
    loss_f_sum = nn.NLLLoss(weight=weights, reduction='sum')
    loss_f_mean = nn.NLLLoss(weight=weights, reduction='mean')

    # forward
    loss_none_w = loss_f_none_w(inputs, target)
    loss_sum = loss_f_sum(inputs, target)
    loss_mean = loss_f_mean(inputs, target)

    # view
    print("\nweights: ", weights)
    print("NLL Loss", loss_none_w, loss_sum, loss_mean)

As the formula shows, NLLLoss does nothing more than pick out the input at the target index, negate it, and apply the class weight; it does not take any logarithm itself.
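Since nn.CrossEntropyLoss is exactly nn.LogSoftmax followed by nn.NLLLoss, the two paths should give identical losses on the fake data above; a quick sanity check (added here for illustration):

# --------------------------------- CrossEntropyLoss vs LogSoftmax + NLLLoss (added check)
# flag = 0
flag = 1
if flag:
    log_probs = F.log_softmax(inputs, dim=1)

    loss_nll = nn.NLLLoss(reduction='none')(log_probs, target)
    loss_ce = nn.CrossEntropyLoss(reduction='none')(inputs, target)

    # the two results should be identical
    print(torch.allclose(loss_nll, loss_ce))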

3. nn.BCELoss

nn.BCELoss(weight=None,size_average=None,reduce=None,reduction="mean")

Purpose: binary cross entropy. Note: the input values must lie in [0, 1]

Main parameters:

  • weight: per-class weight applied to the loss
  • reduction: reduction mode, one of none/sum/mean
    • none: compute the loss per element
    • sum: sum all elements, returning a scalar
    • mean: (weighted) average, returning a scalar

It is a special case of the cross-entropy loss for binary classification. Its formula is:

l_n = -w_n[y_n \cdot \log x_n + (1 - y_n)\cdot \log(1 - x_n)]

The loss is computed element-wise, one value per output neuron, rather than once over a sample's whole output vector.

# ----------------------------------- 3 BCE Loss -----------------------------------
# flag = 0
flag = 1
if flag:
    inputs = torch.tensor([[1, 2], [2, 2], [3, 4], [4, 5]], dtype=torch.float)
    target = torch.tensor([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=torch.float)

    target_bce = target

    # inputs must lie in [0, 1], so apply a sigmoid first
    inputs = torch.sigmoid(inputs)

    weights = torch.tensor([1, 1], dtype=torch.float)

    loss_f_none_w = nn.BCELoss(weight=weights, reduction='none')
    loss_f_sum = nn.BCELoss(weight=weights, reduction='sum')
    loss_f_mean = nn.BCELoss(weight=weights, reduction='mean')

    # forward
    loss_none_w = loss_f_none_w(inputs, target_bce)
    loss_sum = loss_f_sum(inputs, target_bce)
    loss_mean = loss_f_mean(inputs, target_bce)

    # view
    print("\nweights: ", weights)
    print("BCE Loss", loss_none_w, loss_sum, loss_mean)

If the raw inputs are passed directly, BCELoss raises an error saying that all input values must be between 0 and 1, because our inputs contain values greater than 1; that is why the sigmoid is applied above.

There are four samples with two output neurons each, so we get 8 loss values: as stated above, the loss is computed per neuron and then summed or averaged.

Below, the loss of the first neuron is computed by hand:

# --------------------------------- compute by hand
# flag = 0
flag = 1
if flag:

    idx = 0

    x_i = inputs.detach().numpy()[idx, idx]     # predicted probability of the first neuron
    y_i = target.numpy()[idx, idx]              # its target

    # loss
    # l_i = -[ y_i * np.log(x_i) + (1-y_i) * np.log(1-x_i) ]      # full formula
    l_i = -y_i * np.log(x_i) if y_i else -(1-y_i) * np.log(1-x_i)

    # print the loss
    print("BCE inputs: ", inputs)
    print("loss of the first neuron: ", l_i)

4. nn.BCEWithLogitsLoss

nn.BCEWithLogitsLoss(weight=None,size_average=None,reduce=None,reduction="mean",pos_weight=None)

Purpose: combines a sigmoid with binary cross entropy. Note: the network itself should not apply a sigmoid at its output

Main parameters:

  • pos_weight: weight applied to positive samples
  • weight: per-class weight applied to the loss
  • reduction: reduction mode, one of none/sum/mean
    • none: compute the loss per element
    • sum: sum all elements, returning a scalar
    • mean: (weighted) average, returning a scalar

Its formula is:

l_n = -w_n[y_n \log \sigma(x_n) + (1 - y_n)\log(1 - \sigma(x_n))], where \sigma is the sigmoid function.

# ----------------------------------- 4 BCE with Logis Loss -----------------------------------
# flag = 0
flag = 1
if flag:
    inputs = torch.tensor([[1, 2], [2, 2], [3, 4], [4, 5]], dtype=torch.float)
    target = torch.tensor([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=torch.float)

    target_bce = target

    # inputs = torch.sigmoid(inputs)

    weights = torch.tensor([1, 1], dtype=torch.float)

    loss_f_none_w = nn.BCEWithLogitsLoss(weight=weights, reduction='none')
    loss_f_sum = nn.BCEWithLogitsLoss(weight=weights, reduction='sum')
    loss_f_mean = nn.BCEWithLogitsLoss(weight=weights, reduction='mean')

    # forward
    loss_none_w = loss_f_none_w(inputs, target_bce)
    loss_sum = loss_f_sum(inputs, target_bce)
    loss_mean = loss_f_mean(inputs, target_bce)

    # view
    print("\nweights: ", weights)
    print(loss_none_w, loss_sum, loss_mean)

If we additionally apply a sigmoid to the inputs (uncommenting the line above), the loss values shrink: the sigmoid is then applied twice, which biases the result.
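To confirm the relationship, nn.BCEWithLogitsLoss on the raw logits should match nn.BCELoss applied to torch.sigmoid(inputs); a short check with the tensors defined above (added for illustration):

# --------------------------------- BCEWithLogitsLoss vs sigmoid + BCELoss (added check)
# flag = 0
flag = 1
if flag:
    loss_logits = nn.BCEWithLogitsLoss(reduction='none')(inputs, target_bce)
    loss_bce = nn.BCELoss(reduction='none')(torch.sigmoid(inputs), target_bce)

    # the two results should be identical
    print(torch.allclose(loss_logits, loss_bce))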

Next, the pos_weight parameter:

# --------------------------------- pos weight

# flag = 0
flag = 1
if flag:
    inputs = torch.tensor([[1, 2], [2, 2], [3, 4], [4, 5]], dtype=torch.float)
    target = torch.tensor([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=torch.float)

    target_bce = target

    # no manual sigmoid here: BCEWithLogitsLoss applies it internally
    # inputs = torch.sigmoid(inputs)

    weights = torch.tensor([1], dtype=torch.float)
    pos_w = torch.tensor([3], dtype=torch.float)        # 3

    loss_f_none_w = nn.BCEWithLogitsLoss(weight=weights, reduction='none', pos_weight=pos_w)
    loss_f_sum = nn.BCEWithLogitsLoss(weight=weights, reduction='sum', pos_weight=pos_w)
    loss_f_mean = nn.BCEWithLogitsLoss(weight=weights, reduction='mean', pos_weight=pos_w)

    # forward
    loss_none_w = loss_f_none_w(inputs, target_bce)
    loss_sum = loss_f_sum(inputs, target_bce)
    loss_mean = loss_f_mean(inputs, target_bce)

    # view
    print("\npos_weights: ", pos_w)
    print(loss_none_w, loss_sum, loss_mean)

The loss term of each positive sample (target = 1) is multiplied by pos_weight, here 3.
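A hand computation of the first element (logit 1, target 1, i.e. a positive sample) shows where the factor of 3 enters; this check is added here in the style of the earlier compute-by-hand blocks:

# --------------------------------- compute by hand (pos_weight, added check)
# flag = 0
flag = 1
if flag:
    x_11 = inputs[0, 0]          # logit of the first element
    y_11 = target_bce[0, 0]      # its target is 1, i.e. a positive sample

    # the positive term is scaled by pos_weight = 3; the negative term is not
    loss_11 = -(pos_w * y_11 * torch.log(torch.sigmoid(x_11))
                + (1 - y_11) * torch.log(1 - torch.sigmoid(x_11)))

    print("loss of the first element with pos_weight:", loss_11)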

 

5. nn.L1Loss(size_average=None,reduce=None,reduction="mean")

Purpose: computes the element-wise absolute difference between inputs and target

l_n = |x_n - y_n|

6. nn.MSELoss(size_average=None,reduce=None,reduction="mean")

Purpose: computes the element-wise squared difference between inputs and target

l_n = (x_n - y_n)^2

Main parameters:

  • reduction: reduction mode, one of none/sum/mean
    • none: compute the loss per element
    • sum: sum all elements, returning a scalar
    • mean: average, returning a scalar
import torch
import torch.nn as nn
import torch.nn.functional as F
import matplotlib.pyplot as plt
import numpy as np
from tools.common_tools import set_seed

set_seed(1)  # set the random seed

# ------------------------------------------------- 5 L1 loss ----------------------------------------------
# flag = 0
flag = 1
if flag:

    inputs = torch.ones((2, 2))
    target = torch.ones((2, 2)) * 3

    loss_f = nn.L1Loss(reduction='none')
    loss = loss_f(inputs, target)

    print("input:{}\ntarget:{}\nL1 loss:{}".format(inputs, target, loss))

    # ------------------------------------------------- 6 MSE loss ----------------------------------------------

    loss_f_mse = nn.MSELoss(reduction='none')
    loss_mse = loss_f_mse(inputs, target)

    print("MSE loss:{}".format(loss_mse))

7. nn.SmoothL1Loss(size_average=None,reduce=None,reduction="mean")

Purpose: a smoothed L1 loss, quadratic near zero and linear for large errors

loss(x, y) = \frac{1}{n}\sum_i z_i

z_i = \begin{cases} 0.5(x_i - y_i)^2, & \text{if } |x_i - y_i| < 1 \\ |x_i - y_i| - 0.5, & \text{otherwise} \end{cases}

Main parameters:

  • reduction: reduction mode, one of none/sum/mean
    • none: compute the loss per element
    • sum: sum all elements, returning a scalar
    • mean: average, returning a scalar
# ------------------------------------------------- 7 Smooth L1 loss ----------------------------------------------
# flag = 0
flag = 1
if flag:
    inputs = torch.linspace(-3, 3, steps=500)
    target = torch.zeros_like(inputs)

    loss_f = nn.SmoothL1Loss(reduction='none')

    loss_smooth = loss_f(inputs, target)

    loss_l1 = np.abs(inputs.numpy())

    plt.plot(inputs.numpy(), loss_smooth.numpy(), label='Smooth L1 Loss')
    plt.plot(inputs.numpy(), loss_l1, label='L1 loss')
    plt.xlabel('x_i - y_i')
    plt.ylabel('loss value')
    plt.legend()
    plt.grid()
    plt.show()

8. nn.PoissonNLLLoss(log_input=True,full=False,size_average=None,eps=1e-8,reduce=None,reduction="mean")

Purpose: negative log-likelihood loss for Poisson-distributed targets

log_input = True: loss(input, target) = exp(input) - target * input

log_input = False: loss(input, target) = input - target * log(input + eps)

Main parameters:

  • log_input: whether the input is already in log space; this selects which formula above is used
  • full: whether to add the Stirling approximation term to the loss, default False
  • eps: small constant added to avoid log(0) when log_input=False
# ------------------------------------------------- 8 Poisson NLL Loss ----------------------------------------------
# flag = 0
flag = 1
if flag:

    inputs = torch.randn((2, 2))
    target = torch.randn((2, 2))

    loss_f = nn.PoissonNLLLoss(log_input=True, full=False, reduction='none')
    loss = loss_f(inputs, target)
    print("input:{}\ntarget:{}\nPoisson NLL loss:{}".format(inputs, target, loss))

Below, the loss of the first element is computed by hand:

# --------------------------------- compute by hand
# flag = 0
flag = 1
if flag:

    idx = 0

    loss_1 = torch.exp(inputs[idx, idx]) - target[idx, idx]*inputs[idx, idx]

    print("loss of the first element:", loss_1)

9. nn.KLDivLoss(size_average=None,reduce=None,reduction="mean")

Purpose: computes the KL divergence (relative entropy)

Note: the input must already be log-probabilities, e.g. computed with nn.LogSoftmax()

D_{KL}(P \| Q) = E_{x \sim P}\left[\log\frac{P(x)}{Q(x)}\right] = E_{x \sim P}[\log P(x) - \log Q(x)] = \sum_{i=1}^{N} P(x_i)(\log P(x_i) - \log Q(x_i))

l_n = y_n\,(\log y_n - x_n); this is what PyTorch computes for each element, where x_n is the input and y_n is the target probability

Main parameters:

  • reduction: reduction mode, one of none/sum/mean/batchmean
    • batchmean: sum of all losses divided by the batch size
    • mean: average over all elements, returning a scalar
    • sum: sum all elements, returning a scalar
    • none: compute the loss per element
# ------------------------------ 9 KL Divergence Loss --------------------------
# flag = 0
flag = 1
if flag:

    inputs = torch.tensor([[0.5, 0.3, 0.2], [0.2, 0.3, 0.5]])
    inputs_log = torch.log(inputs)      # the log-probabilities KLDivLoss normally expects
    target = torch.tensor([[0.9, 0.05, 0.05], [0.1, 0.7, 0.2]], dtype=torch.float)

    loss_f_none = nn.KLDivLoss(reduction='none')
    loss_f_mean = nn.KLDivLoss(reduction='mean')
    loss_f_bs_mean = nn.KLDivLoss(reduction='batchmean')

    loss_none = loss_f_none(inputs, target)
    loss_mean = loss_f_mean(inputs, target)
    loss_bs_mean = loss_f_bs_mean(inputs, target)

    print("loss_none:\n{}\nloss_mean:\n{}\nloss_bs_mean:\n{}".format(loss_none, loss_mean, loss_bs_mean))

Here loss_mean is the sum of all six element losses divided by 6, whereas batchmean divides the same sum by 2 (the batch size). We verify the first element by hand:

# --------------------------------- compute by hand------------------
# flag = 0
flag = 1
if flag:

    idx = 0

    loss_1 = target[idx, idx] * (torch.log(target[idx, idx]) - inputs[idx, idx])

    print("loss of the first element:", loss_1)
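Note that the demo above passes the raw probabilities inputs rather than inputs_log, which is why the result matches y_n*(log y_n - x_n) literally; to obtain the actual KL divergence, the input should be log-probabilities, as in this added check:

# --------------------------------- KLDivLoss with log-probabilities (added check)
# flag = 0
flag = 1
if flag:
    loss_true_kl = nn.KLDivLoss(reduction='none')(inputs_log, target)

    # summing gives D_KL(target || inputs), the true KL divergence per the definition above
    print(loss_true_kl.sum())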

10. nn.MarginRankingLoss(margin=0.0,size_average=None,reduce=None,reduction="mean")

Purpose: computes a ranking loss between two vectors; used for ranking tasks

Special note: this method compares two groups of data and returns an n*n loss matrix

 

Main parameters:

  • margin: boundary value, the required gap between x1 and x2
  • reduction: reduction mode, one of none/sum/mean

loss(x, y) = \max(0,\ -y\,(x_1 - x_2) + margin)

  • y = 1: we want x1 to rank higher than x2; no loss is produced when x1 > x2
  • y = -1: we want x2 to rank higher than x1; no loss is produced when x2 > x1
# ------------------------------ 10 Margin Ranking Loss -----------------------------------
# flag = 0
flag = 1
if flag:

    x1 = torch.tensor([[1], [2], [3]], dtype=torch.float)
    x2 = torch.tensor([[2], [2], [2]], dtype=torch.float)

    target = torch.tensor([1, 1, -1], dtype=torch.float)

    loss_f_none = nn.MarginRankingLoss(margin=0, reduction='none')

    loss = loss_f_none(x1, x2, target)

    print(loss)

11. nn.MultiLabelMarginLoss(size_average=None,reduce=None,reduction="mean")

Purpose: multi-label margin loss

Example: in a four-class task where sample x belongs to classes 0 and 3, the label is [0, 3, -1, -1], not the one-hot style [1, 0, 0, 1]

Main parameters:

  • reduction: reduction mode, one of none/sum/mean

loss(x, y) = \sum_{ij}\frac{\max(0,\ 1 - (x[y[j]] - x[i]))}{x.size(0)}; the denominator is the number of output neurons, and the numerator is the margin between the score of a labelled class and the score of an unlabelled class

where i ranges over 0 to x.size(0)-1, j ranges over 0 to y.size(0)-1, y[j] >= 0, and i != y[j] for all i and j.

# ---------------------------------------------- 11 Multi Label Margin Loss -----------------------------------------
# flag = 0
flag = 1
if flag:

    x = torch.tensor([[0.1, 0.2, 0.4, 0.8]])
    y = torch.tensor([[0, 3, -1, -1]], dtype=torch.long)

    loss_f = nn.MultiLabelMarginLoss(reduction='none')

    loss = loss_f(x, y)

    print(loss)

# --------------------------------- compute by hand
# flag = 0
flag = 1
if flag:

    x = x[0]
    item_1 = (1-(x[0] - x[1])) + (1 - (x[0] - x[2]))    # [0]
    item_2 = (1-(x[3] - x[1])) + (1 - (x[3] - x[2]))    # [3]

    loss_h = (item_1 + item_2) / x.shape[0]

    print(loss_h)

12. nn.SoftMarginLoss(size_average=None,reduce=None,reduction="mean")

Purpose: two-class logistic loss

Main parameters:

  • reduction: reduction mode, one of none/sum/mean

loss(x, y) = \sum_i \frac{\log(1 + \exp(-y[i]\,x[i]))}{x.nelement()}

# ---------------------------------------------- 12 SoftMargin Loss -----------------------------------------
# flag = 0
flag = 1
if flag:

    inputs = torch.tensor([[0.3, 0.7], [0.5, 0.5]])
    target = torch.tensor([[-1, 1], [1, -1]], dtype=torch.float)

    loss_f = nn.SoftMarginLoss(reduction='none')

    loss = loss_f(inputs, target)

    print("SoftMargin: ", loss)
# --------------------------------- compute by hand
# flag = 0
flag = 1
if flag:

    idx = 0

    inputs_i = inputs[idx, idx]
    target_i = target[idx, idx]

    loss_h = np.log(1 + np.exp(-target_i * inputs_i))

    print(loss_h)

13. nn.MultiLabelSoftMarginLoss(weight=None,size_average=None,reduce=None,reduction="mean")

Purpose: the multi-label version of SoftMarginLoss

Main parameters:

  • weight: per-class weight applied to the loss
  • reduction: reduction mode, one of none/sum/mean

loss(x, y) = -\frac{1}{C}\sum_i y[i]\,\log\left(\frac{1}{1 + \exp(-x[i])}\right) + (1 - y[i])\,\log\left(\frac{\exp(-x[i])}{1 + \exp(-x[i])}\right)

C is the number of classes

# ---------------------------------------------- 13 MultiLabel SoftMargin Loss -----------------------------------------
# flag = 0
flag = 1
if flag:

    inputs = torch.tensor([[0.3, 0.7, 0.8]])
    target = torch.tensor([[0, 1, 1]], dtype=torch.float)

    loss_f = nn.MultiLabelSoftMarginLoss(reduction='none')

    loss = loss_f(inputs, target)

    print("MultiLabel SoftMargin: ", loss)

# --------------------------------- compute by hand
# flag = 0
flag = 1
if flag:

    i_0 = torch.log(torch.exp(-inputs[0, 0]) / (1 + torch.exp(-inputs[0, 0])))

    i_1 = torch.log(1 / (1 + torch.exp(-inputs[0, 1])))
    i_2 = torch.log(1 / (1 + torch.exp(-inputs[0, 2])))

    loss_h = (i_0 + i_1 + i_2) / -3

    print(loss_h)

14. nn.MultiMarginLoss(p=1,margin=1.0,weight=None,size_average=None,reduce=None,reduction="mean")

Purpose: multi-class margin (hinge) loss

Main parameters:

  • p: the power, either 1 or 2
  • weight: per-class weight applied to the loss
  • margin: boundary value
  • reduction: reduction mode, one of none/sum/mean

loss(x, y) = \frac{\sum_i \max(0,\ margin - x[y] + x[i])^p}{x.size(0)}

where x \in \{0, ..., x.size(0)-1\}, y \in \{0, ..., y.size(0)-1\}, 0 <= y[j] <= x.size(0)-1, and i != y[j] for all i and j.

# ---------------------------------------------- 14 Multi Margin Loss -----------------------------------------
# flag = 0
flag = 1
if flag:

    x = torch.tensor([[0.1, 0.2, 0.7], [0.2, 0.5, 0.3]])
    y = torch.tensor([1, 2], dtype=torch.long)

    loss_f = nn.MultiMarginLoss(reduction='none')

    loss = loss_f(x, y)

    print("Multi Margin Loss: ", loss)

# --------------------------------- compute by hand
# flag = 0
flag = 1
if flag:

    x = x[0]
    margin = 1

    i_0 = margin - (x[1] - x[0])
    # i_1 = margin - (x[1] - x[1])
    i_2 = margin - (x[1] - x[2])

    loss_h = (i_0 + i_2) / x.shape[0]

    print(loss_h)

15. nn.TripletMarginLoss(margin=1.0,p=2.0,eps=1e-06,swap=False,size_average=None,reduce=None,reduction="mean")

Purpose: triplet loss, commonly used in face verification

Main parameters:

  • p: the order of the norm, default 2
  • margin: boundary value
  • reduction: reduction mode, one of none/sum/mean

L(a, p, n) = \max\{d(a_i, p_i) - d(a_i, n_i) + margin,\ 0\}

d(x_i, y_i) = \lVert x_i - y_i \rVert_p

# ---------------------------------------------- 15 Triplet Margin Loss -----------------------------------------
# flag = 0
flag = 1
if flag:

    anchor = torch.tensor([[1.]])
    pos = torch.tensor([[2.]])
    neg = torch.tensor([[0.5]])

    loss_f = nn.TripletMarginLoss(margin=1.0, p=1)

    loss = loss_f(anchor, pos, neg)

    print("Triplet Margin Loss", loss)

# --------------------------------- compute by hand
# flag = 0
flag = 1
if flag:

    margin = 1
    a, p, n = anchor[0], pos[0], neg[0]

    d_ap = torch.abs(a-p)
    d_an = torch.abs(a-n)

    loss = d_ap - d_an + margin

    print(loss)

16. nn.HingeEmbeddingLoss(margin=1.0,size_average=None,reduce=None,reduction="mean")

Purpose: measures whether two inputs are similar or dissimilar; commonly used for non-linear embeddings and semi-supervised learning

Special note: the input x should be the (absolute) difference between the two inputs being compared

Main parameters:

  • margin: boundary value
  • reduction: reduction mode, one of none/sum/mean
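Its formula (as given in the PyTorch documentation) is:

l_n = \begin{cases} x_n, & \text{if } y_n = 1 \\ \max\{0,\ margin - x_n\}, & \text{if } y_n = -1 \end{cases}

For the third element below (x = 0.5, y = -1), this gives max(0, 1 - 0.5) = 0.5, which the hand computation reproduces.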

# ---------------------------------------------- 16 Hinge Embedding Loss -----------------------------------------
# flag = 0
flag = 1
if flag:

    inputs = torch.tensor([[1., 0.8, 0.5]])
    target = torch.tensor([[1, 1, -1]])

    loss_f = nn.HingeEmbeddingLoss(margin=1, reduction='none')

    loss = loss_f(inputs, target)

    print("Hinge Embedding Loss", loss)

# --------------------------------- compute by hand
# flag = 0
flag = 1
if flag:
    margin = 1.
    loss = max(0, margin - inputs.numpy()[0, 2])

    print(loss)

17. nn.CosineEmbeddingLoss(margin=0.0,size_average=None,reduce=None,reduction="mean")

Purpose: measures the similarity of two inputs using cosine similarity

Main parameters:

  • margin: a value in [-1, 1]; values in [0, 0.5] are recommended
  • reduction: reduction mode, one of none/sum/mean
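Its formula (as given in the PyTorch documentation) is:

loss(x, y) = \begin{cases} 1 - \cos(x_1, x_2), & \text{if } y = 1 \\ \max\{0,\ \cos(x_1, x_2) - margin\}, & \text{if } y = -1 \end{cases}

The hand computation at the end of the block below evaluates both cases.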

# ---------------------------------------------- 17 Cosine Embedding Loss -----------------------------------------
# flag = 0
flag = 1
if flag:

    x1 = torch.tensor([[0.3, 0.5, 0.7], [0.3, 0.5, 0.7]])
    x2 = torch.tensor([[0.1, 0.3, 0.5], [0.1, 0.3, 0.5]])

    target = torch.tensor([[1, -1]], dtype=torch.float)

    loss_f = nn.CosineEmbeddingLoss(margin=0., reduction='none')

    loss = loss_f(x1, x2, target)

    print("Cosine Embedding Loss", loss)

# --------------------------------- compute by hand
# flag = 0
flag = 1
if flag:
    margin = 0.

    def cosine(a, b):
        numerator = torch.dot(a, b)
        denominator = torch.norm(a, 2) * torch.norm(b, 2)
        return float(numerator/denominator)

    # sample 0 (target =  1): loss = 1 - cos(x1, x2)
    l_1 = 1 - (cosine(x1[0], x2[0]))

    # sample 1 (target = -1): loss = max(0, cos(x1, x2) - margin)
    l_2 = max(0, cosine(x1[1], x2[1]) - margin)

    print(l_1, l_2)

18. nn.CTCLoss(blank=0,reduction="mean",zero_infinity=False)

Purpose: computes the CTC (Connectionist Temporal Classification) loss, used for classifying unsegmented sequence data

Main parameters:

  • blank: index of the blank label
  • zero_infinity: whether to zero out infinite losses and their gradients
  • reduction: reduction mode, one of none/sum/mean

# ---------------------------------------------- 18 CTC Loss -----------------------------------------
# flag = 0
flag = 1
if flag:
    T = 50      # Input sequence length
    C = 20      # Number of classes (including blank)
    N = 16      # Batch size
    S = 30      # Target sequence length of longest target in batch
    S_min = 10  # Minimum target length, for demonstration purposes

    # Initialize random batch of input vectors, for *size = (T,N,C)
    inputs = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_()

    # Initialize random batch of targets (0 = blank, 1:C = classes)
    target = torch.randint(low=1, high=C, size=(N, S), dtype=torch.long)

    input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long)
    target_lengths = torch.randint(low=S_min, high=S, size=(N,), dtype=torch.long)

    ctc_loss = nn.CTCLoss()
    loss = ctc_loss(inputs, target, input_lengths, target_lengths)

    print("CTC loss: ", loss)

 

These are the 18 loss functions covered in this note.

 

 

 

 

 

 
