PyTorch (16): Loss Functions (II)

Losses 5 and 6 are the loss functions most often used in regression tasks.

5. nn.L1Loss

Function: computes the absolute difference between inputs and target
Code:

nn.L1Loss(reduction='mean')

Formula:

\[l_n = |x_n-y_n| \]

6. nn.MSELoss

Function: computes the squared difference between inputs and target
Code:

nn.MSELoss(reduction='mean')

Main parameter: reduction: computation mode, one of none/sum/mean
Formula:

\[l_n = (x_n - y_n)^2 \]

7. SmoothL1Loss

Function: a smoothed version of L1Loss
Code:

nn.SmoothL1Loss(size_average=None, reduce=None, reduction='mean')
Formula:

\[loss(x,y)=\frac{1}{n}\sum_{i} z_i,\quad z_i = \begin{cases} 0.5(x_i - y_i)^2 & \text{if } |x_i - y_i|<1\\ |x_i-y_i|-0.5 & \text{otherwise} \end{cases} \]

8. PoissonNLLLoss

Function: negative log-likelihood loss for a Poisson distribution
Main parameters:

  • log_input: whether the input is already in log form; this determines which formula is used
  • full: whether to compute the complete loss, including the Stirling approximation term; default False
  • eps: small correction term to keep log(input) from producing nan

Code:
nn.PoissonNLLLoss(log_input=True, full=False, eps=1e-08, reduction='mean')

When log_input = True:
loss(input, target) = exp(input) - target * input

When log_input = False:
loss(input, target) = input - target * log(input + eps)

9. nn.KLDivLoss

Function: computes the KL divergence (KLD, relative entropy)
Note: the input must be converted to log-probabilities beforehand, e.g. via nn.LogSoftmax(). In other words, the underlying values are probabilities in the range (0,1); since the formula contains a log, the log must already have been applied to the input we pass in.
Main parameters:

  • reduction: computation mode, one of none/sum/mean/batchmean
    batchmean: sum the loss, then divide by the batch size

nn.KLDivLoss(reduction='mean')
\[D_{KL}(P||Q)=E_{x\sim p}\Big[log \frac{P(x)}{Q(x)} \Big]=E_{x\sim p}[log P(x)-log Q(x)]\\ =\sum_{i=1}^{N}P(x_i)(log P(x_i)-log Q(x_i)) \]
\[l_n = y_n(log\,y_n - x_n) \]
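The demo script at the end of this post checks this element-wise formula by hand; as a quick, minimal sketch of how batchmean differs from mean (the distributions here are made up for illustration):

import torch
import torch.nn as nn

p = torch.tensor([[0.5, 0.4, 0.1]])                  # target distribution, shape (1, 3)
q_log = torch.log(torch.tensor([[0.7, 0.2, 0.1]]))   # input: log-probabilities

print(nn.KLDivLoss(reduction='mean')(q_log, p))       # divides by all 3 elements
print(nn.KLDivLoss(reduction='batchmean')(q_log, p))  # sums, then divides by batch size 1

With a batch of one sample, batchmean is 3x the mean value, because mean divides by the total element count rather than the batch size.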

10. nn.MarginRankingLoss

Function: computes the similarity between two vectors; used for ranking tasks
Special note: this method computes the loss between two groups of data and returns an n*n loss matrix
Main parameters:

  • margin: boundary value, the required gap between x1 and x2
  • reduction: computation mode, one of none/sum/mean

When y = 1, we want x1 to be larger than x2; no loss is produced when x1 > x2.
When y = -1, we want x2 to be larger than x1; no loss is produced when x2 > x1.
nn.MarginRankingLoss(margin=0.0, size_average=None, reduce=None, reduction='mean')
\[loss(x,y)=max(0,-y*(x1-x2)+margin) \]

11. nn.MultiLabelMarginLoss

Function: multi-label margin loss, for samples that belong to multiple classes.
Example: in a four-class task, sample x belongs to class 0 and class 3.
Label: [0, 3, -1, -1], rather than [1, 0, 0, 1]
Main parameters:

  • reduction: computation mode, one of none/sum/mean
nn.MultiLabelMarginLoss(size_average=None, reduce=None, reduction='mean')
\[loss(x,y) = \sum_{ij}\frac{max(0,1-(x[y[j]]-x[i]))}{x.size(0)}\\ \text{where } i = 0,\dots,x.size(0)-1,\ j = 0,\dots,y.size(0)-1,\ y[j]\geq 0,\ \text{and } i \neq y[j]\ \text{for all } i \text{ and } j. \]

The denominator is the size of x, i.e. the number of neurons in the output vector.

12. nn.SoftMarginLoss

Function: computes the two-class logistic loss
Main parameters:

  • reduction: computation mode, one of none/sum/mean
nn.SoftMarginLoss(size_average=None, reduce=None, reduction='mean')
\[loss(x,y)=\sum_{i}\frac{log(1+exp(-y[i]*x[i]))}{x.nelement()} \]

13. nn.MultiLabelSoftMarginLoss

Function: the multi-label version of SoftMarginLoss
Main parameters (note: the labels here are 0/1):

  • weight: per-class loss weights
  • reduction: computation mode, one of none/sum/mean
nn.MultiLabelSoftMarginLoss(weight=None, size_average=None, reduce=None, reduction='mean')
\[loss(x,y)=-\frac{1}{C}\sum_i y[i]*log\big((1+exp(-x[i]))^{-1}\big) +(1-y[i])*log\Big(\frac{exp(-x[i])}{1+exp(-x[i])} \Big) \]

14. nn.MultiMarginLoss

Function: computes the multi-class hinge loss
Main parameters:

  • p: may be 1 or 2
  • weight: per-class loss weights
  • margin: boundary value
  • reduction: computation mode, one of none/sum/mean
nn.MultiMarginLoss(p=1, margin=1.0, weight=None, size_average=None, reduce=None, reduction='mean')
\[loss(x,y)=\frac{\sum_{i}max(0,margin-x[y]+x[i])^p}{x.size(0)}\\ \text{where } i \in \{0,...,x.size(0)-1 \}\ \text{and}\ i \neq y. \]

15. nn.TripletMarginLoss

Function: computes the triplet loss, commonly used in face verification
Main parameters:

  • p: degree of the norm, default 2
  • margin: boundary value
  • reduction: computation mode, one of none/sum/mean
nn.TripletMarginLoss(margin=1.0, p=2.0, eps=1e-06, swap=False, size_average=None, reduce=None, reduction='mean')
\[L(a,p,n)=max\{d(a_i,p_i)-d(a_i,n_i)+margin,0 \} \\ d(x_i,y_i)=||x_i-y_i||_p \]

16. nn.HingeEmbeddingLoss

Function: measures the similarity between two inputs; commonly used for nonlinear embedding and semi-supervised learning
Special note: the input x should be the absolute value of the difference between the two inputs (see the sketch after the formula below)
Main parameters:

  • margin: boundary value
  • reduction: computation mode, one of none/sum/mean
nn.HingeEmbeddingLoss(margin=1.0, size_average=None, reduce=None, reduction='mean')
\[l_n=\begin{cases} x_n & \text{if } y_n=1\\ max\{0,\Delta -x_n\} & \text{if } y_n=-1\end{cases} \]
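
A minimal sketch of the special note above (the tensors a and b are made up for illustration): form the absolute difference first, then feed it to the loss.

import torch
import torch.nn as nn

a = torch.tensor([[1.0, 0.3]])
b = torch.tensor([[0.2, 0.8]])
x = torch.abs(a - b)                 # the |difference| that HingeEmbeddingLoss expects
y = torch.tensor([[1, -1]])          # 1: similar pair, -1: dissimilar pair
print(nn.HingeEmbeddingLoss(margin=1.0, reduction='none')(x, y))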

17. nn.CosineEmbeddingLoss

Function: measures the similarity between two inputs via cosine similarity
Main parameters:

  • margin: may take values in [-1,1]; [0,0.5] is recommended
  • reduction: computation mode, one of none/sum/mean
nn.CosineEmbeddingLoss(margin=0.0, size_average=None, reduce=None, reduction='mean')
\[loss(x,y)=\begin{cases} 1-cos(x_1,x_2) & \text{if } y=1\\ max(0,cos(x_1,x_2)-margin) & \text{if } y=-1\end{cases} \]
\[cos(\theta)=\frac{A \cdot B}{||A||\,||B||}=\frac{\sum_{i=1}^{n}A_i\times B_i}{\sqrt{\sum_{i=1}^{n}(A_i)^2}\times \sqrt{\sum_{i=1}^{n}(B_i)^2}} \]

18. nn.CTCLoss

Function: computes the CTC (Connectionist Temporal Classification) loss, for classification of sequential data
Main parameters:

  • blank: index of the blank label
  • zero_infinity: whether to set infinite losses and their gradients to zero
  • reduction: computation mode, one of none/sum/mean
torch.nn.CTCLoss(blank=0, reduction='mean', zero_infinity=False)

The demo script below exercises the losses above; a CTCLoss sketch is appended at its end.
import torch
import torch.nn as nn
import os
import matplotlib.pyplot as plt
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
torch.manual_seed(2)
# ===========================nn.L1Loss===========
# flag = True
flag = False
if flag:
    x1_data = torch.ones((2, 2))
    x1_label = torch.ones((2, 2)) * 3

    loss1 = nn.L1Loss(reduction='none')
    loss_data = loss1(x1_data, x1_label)
    print("x1_data:{}\nx1_label:{}\nloss:{}".format(x1_data,x1_label,loss_data))

# ===========================nn.MSELoss===========
# flag = True
flag = False
if flag:
    x1_data = torch.ones((2, 2))
    x1_label = torch.ones((2, 2)) * -1

    loss1 = nn.MSELoss(reduction='none')
    loss_data = loss1(x1_data, x1_label)
    print("x1_data:{}\nx1_label:{}\nloss:{}".format(x1_data,x1_label,loss_data))


# ===========================nn.SmoothL1Loss===========
# flag = True
flag = False
if flag:
    x1_data = torch.linspace(-3, 3, steps=1000)
    x1_label = torch.zeros_like(x1_data)

    lossmoth1 = nn.SmoothL1Loss(reduction='none')
    loss_datamoth1 = lossmoth1(x1_data, x1_label)

    loss1 = nn.L1Loss(reduction='none')
    loss_data = loss1(x1_data, x1_label)

    plt.plot(x1_data.numpy(), loss_data.numpy(), label='L1Loss')
    plt.plot(x1_data.numpy(), loss_datamoth1.numpy(), label='SmoothL1Loss')
    plt.xlabel('xi-yi')
    plt.ylabel('loss value')
    plt.legend()
    plt.grid()
    plt.show()

# ===========================nn.PoissonNLLLoss===========
# flag = True
flag = False
if flag:
    x1_data = torch.rand((2, 2))    # keep inputs positive: log_input=False takes log(input)
    x1_label = torch.randn((2, 2))*2+0.6

    loss1 = nn.PoissonNLLLoss(log_input=False, reduction='none',eps=1e-02)
    loss_data = loss1(x1_data, x1_label)

    print(x1_data)
    print(x1_label)
    print(loss_data)

    # computebyhan = torch.exp(x1_data)-x1_label*x1_data
    # print(computebyhan)

    computebyhan1 = x1_data - x1_label*torch.log(x1_data+1e-02)
    print(computebyhan1)

# ===========================nn.KLDivLoss===========
# flag = True
flag = False
if flag:
    x1_data = torch.tensor([[0.5, 0.4, 0.1], [0.2, 0.4, 0.4], [0.1, 0.5, 0.4], [0.1, 0.8, 0.1]])
    x1_datalog = torch.log(x1_data)
    x1_label = torch.tensor([[0.7, 0.2, 0.1], [0.1, 0.4, 0.5], [0.4, 0.2, 0.4], [0.3, 0.4, 0.3]],dtype=torch.float)

    lossd = nn.KLDivLoss(reduction='none')
    loss_data = lossd(x1_datalog, x1_label)    # the input must be log-probabilities
    print(loss_data)

    ln = x1_label*(torch.log(x1_label)-x1_datalog)    # manual check: l_n = y_n*(log y_n - x_n)
    print(ln)

# ===========================nn.MarginRankingLoss===========
# flag = True
flag = False
if flag:
    x1 = torch.tensor([[1], [2], [3]], dtype=torch.float)
    x2 = torch.tensor([[2], [2], [2]], dtype=torch.float)

    labeld = torch.tensor([1, 1, -1], dtype=torch.float)

    loss_f = nn.MarginRankingLoss(margin=0, reduction='none')

    loss = loss_f(x1, x2, labeld)
    print(loss)

# ===========================nn.MultiLabelMarginLoss===========
# flag = True
flag = False
if flag:
    x = torch.tensor([[0.4, 0.2, 0.4, 0.5]])
    y = torch.tensor([[0, 3, -1, -1]], dtype=torch.long)

    lossf = nn.MultiLabelMarginLoss(reduction='none')
    loss = lossf(x, y)

    print(loss)

    x = x[0]
    # manual check; max(0, .) is omitted because every term is positive here
    item1 = (1 - (x[0]-x[1])) + (1 - (x[0]-x[2]))    # target class 0 vs non-target classes 1, 2
    item2 = (1 - (x[3]-x[1])) + (1 - (x[3]-x[2]))    # target class 3 vs non-target classes 1, 2
    loss2 = (item1+item2)/4                          # divide by x.size(0)
    print(loss2)


# ===========================nn.SoftMarginLoss===========
# flag = True
flag = False
if flag:
    x = torch.tensor([[0.4, 0.6], [0.8, 0.2]])
    y = torch.tensor([[-1, 1], [1, -1]], dtype=torch.long)

    lossf = nn.SoftMarginLoss(reduction='none')
    loss = lossf(x, y)

    print(loss)

    idx = 0
    itemx = x[idx, idx]
    itemy = y[idx, idx]
    loss = torch.log(1+torch.exp(-itemy*itemx))
    print(loss)

# ===========================nn.MultiLabelSoftMarginLoss===========
# flag = True
flag = False
if flag:
    x = torch.tensor([[0.4, 0.6, 0.8, 0.2]])
    y = torch.tensor([[0, 1, 1, 0]], dtype=torch.long)

    lossf = nn.MultiLabelSoftMarginLoss(reduction='none')
    loss = lossf(x, y)

    print(loss)

    total = 0
    for i in range(4):
        if y[0, i] == 1:
            total += torch.log((1+torch.exp(-x[0, i]))**(-1))
        else:
            total += torch.log(torch.exp(-x[0, i])/(1+torch.exp(-x[0, i])))
    total = -total/4    # loss = -(1/C)*sum with C = 4 classes; the leading minus was missing
    print(total)


# ===========================nn.MultiMarginLoss===========
flag = 0
# flag = 1
if flag:

    x = torch.tensor([[0.1, 0.2, 0.7], [0.2, 0.5, 0.3]])
    y = torch.tensor([1, 2], dtype=torch.long)

    loss_f = nn.MultiMarginLoss(reduction='none')

    loss = loss_f(x, y)

    print("Multi Margin Loss: ", loss)

# --------------------------------- compute by hand
flag = 0
# flag = 1
if flag:

    x = x[0]
    margin = 1

    i_0 = margin - (x[1] - x[0])     # non-target class 0
    # i_1 = margin - (x[1] - x[1])   # i == y is skipped
    i_2 = margin - (x[1] - x[2])     # non-target class 2

    loss_h = (i_0 + i_2) / x.shape[0]    # max(0, .) omitted: both terms are positive here

    print(loss_h)


# ===========================nn.TripletMarginLoss===========
# flag = 0
flag = 1
if flag:
    anchor = torch.tensor([[1.]])
    pos = torch.tensor([[2.]])
    neg = torch.tensor([[0.5]])

    lossd = nn.TripletMarginLoss(margin=1, p=1)

    loss = lossd(anchor, pos, neg)
    print(loss)

    ap = abs(pos-anchor)    # d(a, p) with the p=1 norm
    an = abs(anchor-neg)    # d(a, n)
    l = ap - an + 1         # max(., 0) omitted: the value is positive here
    print(l)

# ===========================nn.HingeEmbeddingLoss===========
flag = 0
# flag = 1
if flag:

    inputs = torch.tensor([[1., 0.8, 0.5]])
    target = torch.tensor([[1, 1, -1]])

    loss_f = nn.HingeEmbeddingLoss(margin=1, reduction='none')

    loss = loss_f(inputs, target)

    print("Hinge Embedding Loss", loss)

# --------------------------------- compute by hand
flag = 0
# flag = 1
if flag:
    margin = 1.
    loss = max(0, margin - inputs.numpy()[0, 2])

    print(loss)

# ===========================nn.CosineEmbeddingLoss===========
flag = 0
# flag = 1
if flag:

    x1 = torch.tensor([[0.3, 0.5, 0.7], [0.3, 0.5, 0.7]])
    x2 = torch.tensor([[0.1, 0.3, 0.5], [0.1, 0.3, 0.5]])

    target = torch.tensor([1, -1], dtype=torch.float)    # one label per input pair, shape (N,)

    loss_f = nn.CosineEmbeddingLoss(margin=0., reduction='none')

    loss = loss_f(x1, x2, target)

    print("Cosine Embedding Loss", loss)

# --------------------------------- compute by hand
flag = 0
# flag = 1
if flag:
    margin = 0.

    def cosine(a, b):
        numerator = torch.dot(a, b)
        denominator = torch.norm(a, 2) * torch.norm(b, 2)
        return float(numerator/denominator)

    l_1 = 1 - cosine(x1[0], x2[0])                 # y = 1 branch
    l_2 = max(0, cosine(x1[1], x2[1]) - margin)    # y = -1 branch
    print(l_1, l_2)
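
# ===========================nn.CTCLoss===========
# A minimal runnable sketch for section 18; the shapes follow the usual (T, N, C)
# convention and the random data here is made up for illustration.
flag = 0
# flag = 1
if flag:
    T, C, N = 50, 20, 16    # input sequence length, classes (incl. blank), batch size
    S, S_min = 30, 10       # max / min target lengths

    log_probs = torch.randn(T, N, C).log_softmax(2)    # CTCLoss expects log-probabilities
    targets = torch.randint(low=1, high=C, size=(N, S), dtype=torch.long)
    input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long)
    target_lengths = torch.randint(low=S_min, high=S, size=(N,), dtype=torch.long)

    ctc_loss = nn.CTCLoss(blank=0, reduction='mean', zero_infinity=False)
    loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
    print("CTC Loss: ", loss)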