[PyTorch] 4.2 Loss Functions (Part 2)


Task overview:

This section covers the remaining 14 loss functions in PyTorch.

Details:

Continuing from the previous section, we work through another 14 loss functions in PyTorch:

  1. nn.L1Loss

  2. nn.MSELoss

  3. nn.SmoothL1Loss

  4. nn.PoissonNLLLoss

  5. nn.KLDivLoss

  6. nn.MarginRankingLoss

  7. nn.MultiLabelMarginLoss

  8. nn.SoftMarginLoss

  9. nn.MultiLabelSoftMarginLoss

  10. nn.MultiMarginLoss

  11. nn.TripletMarginLoss

  12. nn.HingeEmbeddingLoss

  13. nn.CosineEmbeddingLoss

  14. nn.CTCLoss

III. Loss Functions in PyTorch

5. nn.L1Loss

6. nn.MSELoss

nn.L1Loss computes the element-wise absolute error l_n = |x_n - y_n|; nn.MSELoss computes the element-wise squared error l_n = (x_n - y_n)^2.
Test code for L1Loss and MSELoss:

import torch
import torch.nn as nn
import torch.nn.functional as F
import matplotlib.pyplot as plt
import numpy as np
from tools.common_tools import set_seed

set_seed(1)  # set the random seed

# ------------------------------------------------- 5 L1 loss ----------------------------------------------
# flag = 0
flag = 1
if flag:

    inputs = torch.ones((2, 2))
    target = torch.ones((2, 2)) * 3

    loss_f = nn.L1Loss(reduction='none')
    loss = loss_f(inputs, target)

    print("input:{}\ntarget:{}\nL1 loss:{}".format(inputs, target, loss))

# ------------------------------------------------- 6 MSE loss ----------------------------------------------
# flag = 0
flag = 1
if flag:

    loss_f_mse = nn.MSELoss(reduction='none')
    loss_mse = loss_f_mse(inputs, target)

    print("MSE loss:{}".format(loss_mse))

Output:

input:tensor([[1., 1.],
        [1., 1.]])
target:tensor([[3., 3.],
        [3., 3.]])
L1 loss:tensor([[2., 2.],
        [2., 2.]])
MSE loss:tensor([[4., 4.],
        [4., 4.]])

7. nn.SmoothL1Loss

nn.SmoothL1Loss is quadratic near zero and linear elsewhere: z_i = 0.5 * (x_i - y_i)^2 if |x_i - y_i| < 1, and |x_i - y_i| - 0.5 otherwise, which makes it less sensitive to outliers than MSE.
Test code:

# ------------------------------------------------- 7 Smooth L1 loss ----------------------------------------------
# flag = 0
flag = 1
if flag:
    inputs = torch.linspace(-3, 3, steps=500)
    target = torch.zeros_like(inputs)

    loss_f = nn.SmoothL1Loss(reduction='none')

    loss_smooth = loss_f(inputs, target)

    loss_l1 = np.abs(inputs.numpy())

    plt.plot(inputs.numpy(), loss_smooth.numpy(), label='Smooth L1 Loss')
    plt.plot(inputs.numpy(), loss_l1, label='L1 loss')
    plt.xlabel('x_i - y_i')
    plt.ylabel('loss value')
    plt.legend()
    plt.grid()
    plt.show()

Output:
(Figure: plot of Smooth L1 loss versus L1 loss as a function of x_i - y_i on [-3, 3].)

8. nn.PoissonNLLLoss

nn.PoissonNLLLoss assumes the target follows a Poisson distribution. With log_input=True the per-element loss is exp(input) - target * input; the Stirling approximation term is omitted when full=False.
Test code:

# ------------------------------------------------- 8 Poisson NLL Loss ----------------------------------------------
# flag = 0
flag = 1
if flag:

    inputs = torch.randn((2, 2))
    target = torch.randn((2, 2))

    loss_f = nn.PoissonNLLLoss(log_input=True, full=False, reduction='none')
    loss = loss_f(inputs, target)
    print("input:{}\ntarget:{}\nPoisson NLL loss:{}".format(inputs, target, loss))

Output:

input:tensor([[0.6614, 0.2669],
        [0.0617, 0.6213]])
target:tensor([[-0.4519, -0.1661],
        [-1.5228,  0.3817]])
Poisson NLL loss:tensor([[2.2363, 1.3503],
        [1.1575, 1.6242]])
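
A quick sanity check of the first element against the formula above, plugging in the printed values:

import math
# first element: input = 0.6614, target = -0.4519
print(math.exp(0.6614) - (-0.4519) * 0.6614)  # ~2.2364, matches the first loss entry up to rounding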

9. nn.KLDivLoss

nn.KLDivLoss computes l_n = y_n * (log(y_n) - x_n), where the input x is expected to already be log-probabilities (see the comment in the code below).
Test code:

# ----------------------------------------- 9 KL Divergence Loss ----------------------------------------------
# flag = 0
flag = 1
if flag:

    inputs = torch.tensor([[0.5, 0.3, 0.2], [0.2, 0.3, 0.5]])
    # PyTorch does not implement the textbook KL formula directly: the input is assumed to be log-probabilities already,
    # so where the definition has log(y_i / x_i), PyTorch computes y_i * (log(y_i) - x_i) (no extra log is applied to the input).
    inputs = F.log_softmax(inputs, 1)
    target = torch.tensor([[0.9, 0.05, 0.05], [0.1, 0.7, 0.2]], dtype=torch.float)

    loss_f_none = nn.KLDivLoss(reduction='none')
    loss_f_mean = nn.KLDivLoss(reduction='mean')
    loss_f_bs_mean = nn.KLDivLoss(reduction='batchmean')

    loss_none = loss_f_none(inputs, target)
    loss_mean = loss_f_mean(inputs, target)
    loss_bs_mean = loss_f_bs_mean(inputs, target)

    print("loss_none:\n{}\nloss_mean:\n{}\nloss_bs_mean:\n{}".format(loss_none, loss_mean, loss_bs_mean))

Output:

loss_none:
tensor([[-0.5448, -0.1648, -0.1598],
        [-0.2503, -0.4597, -0.4219]])
loss_mean:
-0.3335360288619995
loss_bs_mean:
-1.0006080865859985

'mean' divides the sum of all elements by the number of elements (6), while 'batchmean' divides it by the batch size (2), as the check below confirms.
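
A minimal check, re-using loss_none from the snippet above:

s = loss_none.sum()
print(s / loss_none.numel())   # -0.3335, equals loss_mean (sum / 6)
print(s / loss_none.shape[0])  # -1.0006, equals loss_bs_mean (sum / 2)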

10. nn.MarginRankingLoss

nn.MarginRankingLoss computes loss(x1, x2, y) = max(0, -y * (x1 - x2) + margin), where y = 1 means x1 should rank higher than x2 and y = -1 the opposite.
Test code:

# ---------------------------------------------- 10 Margin Ranking Loss --------------------------------------------
# flag = 0
flag = 1
if flag:

    x1 = torch.tensor([[1], [2], [3]], dtype=torch.float)
    x2 = torch.tensor([[2], [2], [2]], dtype=torch.float)

    target = torch.tensor([1, 1, -1], dtype=torch.float)

    loss_f_none = nn.MarginRankingLoss(margin=0, reduction='none')

    loss = loss_f_none(x1, x2, target)

    print(loss)

Output:

tensor([[1., 1., 0.],
        [0., 0., 0.],
        [0., 0., 1.]])
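
The 3x3 output comes from broadcasting: x1 - x2 has shape (3, 1) and target has shape (3,), so entry (i, j) is max(0, -target[j] * (x1[i] - x2[i])). Checking entry (0, 0) by hand:

# x1[0] = 1, x2[0] = 2, target[0] = 1, margin = 0
print(max(0, -1 * (1 - 2) + 0))  # 1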

11. nn.MultiLabelMarginLoss

nn.MultiLabelMarginLoss handles multi-label classification where the target lists the indices of the positive classes, padded with -1: loss(x, y) sums max(0, 1 - (x[y[j]] - x[i])) over label indices j and non-label indices i, divided by the number of classes.
Test code:

# ---------------------------------------------- 11 Multi Label Margin Loss -----------------------------------------
# flag = 0
flag = 1
if flag:

    x = torch.tensor([[0.1, 0.2, 0.4, 0.8]])
    y = torch.tensor([[0, 3, -1, -1]], dtype=torch.long)

    loss_f = nn.MultiLabelMarginLoss(reduction='none')

    loss = loss_f(x, y)

    print(loss)

Output:

tensor([0.8500])
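
Hand-checking the 0.8500 value against the formula above: the positive labels are {0, 3}, so i ranges over {1, 2}:

# contribution of each (label j, non-label i) pair: max(0, 1 - (x[j] - x[i]))
items = [1 - (0.1 - 0.2),  # (0, 1) -> 1.1
         1 - (0.1 - 0.4),  # (0, 2) -> 1.3
         1 - (0.8 - 0.2),  # (3, 1) -> 0.4
         1 - (0.8 - 0.4)]  # (3, 2) -> 0.6
print(sum(items) / 4)  # 0.85, divided by the number of classes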

12. nn.SoftMarginLoss

nn.SoftMarginLoss is a two-class logistic loss for targets in {-1, 1}: with reduction='none' each element is log(1 + exp(-y[i] * x[i])).
Test code:

# ---------------------------------------------- 12 SoftMargin Loss -----------------------------------------
# flag = 0
flag = 1
if flag:

    inputs = torch.tensor([[0.3, 0.7], [0.5, 0.5]])
    target = torch.tensor([[-1, 1], [1, -1]], dtype=torch.float)

    loss_f = nn.SoftMarginLoss(reduction='none')

    loss = loss_f(inputs, target)

    print("SoftMargin: ", loss)

Output:

SoftMargin:  tensor([[0.8544, 0.4032],
        [0.4741, 0.9741]])
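
Checking the first element (x = 0.3, y = -1) against the formula:

import math
print(math.log(1 + math.exp(-(-1) * 0.3)))  # ~0.8544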

13. nn.MultiLabelSoftMarginLoss

nn.MultiLabelSoftMarginLoss is the multi-label version of SoftMarginLoss with targets in {0, 1}: loss(x, y) = -1/C * sum over classes i of [ y[i] * log(sigmoid(x[i])) + (1 - y[i]) * log(1 - sigmoid(x[i])) ].
Test code:

# ---------------------------------------------- 13 MultiLabel SoftMargin Loss -----------------------------------------
# flag = 0
flag = 1
if flag:

    inputs = torch.tensor([[0.3, 0.7, 0.8]])
    target = torch.tensor([[0, 1, 1]], dtype=torch.float)

    loss_f = nn.MultiLabelSoftMarginLoss(reduction='none')

    loss = loss_f(inputs, target)

    print("MultiLabel SoftMargin: ", loss)

Output:

MultiLabel SoftMargin: tensor([0.5429])
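
The 0.5429 value is the per-class binary cross-entropy averaged over the three classes; re-computing it from the formula:

x = torch.tensor([0.3, 0.7, 0.8])
y = torch.tensor([0., 1., 1.])
s = torch.sigmoid(x)
print(-(y * torch.log(s) + (1 - y) * torch.log(1 - s)).mean())  # tensor(0.5429)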

14. nn.MultiMarginLoss

nn.MultiMarginLoss is a multi-class hinge loss: for each sample, loss(x, y) = sum over i != y of max(0, margin - x[y] + x[i]), divided by the number of classes, with margin = 1 and p = 1 by default.
Test code:

# ---------------------------------------------- 14 Multi Margin Loss -----------------------------------------
# flag = 0
flag = 1
if flag:

    x = torch.tensor([[0.1, 0.2, 0.7], [0.2, 0.5, 0.3]])
    y = torch.tensor([1, 2], dtype=torch.long)

    loss_f = nn.MultiMarginLoss(reduction='none')

    loss = loss_f(x, y)

    print("Multi Margin Loss: ", loss)

Output:

Multi Margin Loss: tensor([0.8000, 0.7000])
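
Hand-checking the first sample (x = [0.1, 0.2, 0.7], y = 1, margin = 1):

# i ranges over the non-target classes {0, 2}
terms = [max(0, 1 - 0.2 + 0.1),   # i = 0 -> 0.9
         max(0, 1 - 0.2 + 0.7)]   # i = 2 -> 1.5
print(sum(terms) / 3)  # 0.8, averaged over the 3 classes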

15. nn.TripletMarginLoss

nn.TripletMarginLoss is used in metric learning: loss(a, p, n) = max(d(a, p) - d(a, n) + margin, 0), where d is the p-norm distance, pulling the anchor towards the positive sample and away from the negative one.
Test code:

# ---------------------------------------------- 15 Triplet Margin Loss -----------------------------------------
# flag = 0
flag = 1
if flag:

    anchor = torch.tensor([[1.]])
    pos = torch.tensor([[2.]])
    neg = torch.tensor([[0.5]])

    loss_f = nn.TripletMarginLoss(margin=1.0, p=1)

    loss = loss_f(anchor, pos, neg)

    print("Triplet Margin Loss", loss)

Output:

Triplet Margin Loss tensor(1.5000)
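
With p = 1, d(anchor, pos) = |1 - 2| = 1 and d(anchor, neg) = |1 - 0.5| = 0.5, so:

print(max(abs(1 - 2) - abs(1 - 0.5) + 1.0, 0))  # 1.5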

16. nn.HingeEmbeddingLoss

nn.HingeEmbeddingLoss expects the input to be a distance (e.g. the absolute difference of two embeddings): l_n = x_n if y_n = 1, and max(0, margin - x_n) if y_n = -1.
Test code:

# ---------------------------------------------- 16 Hinge Embedding Loss -----------------------------------------
# flag = 0
flag = 1
if flag:

    inputs = torch.tensor([[1., 0.8, 0.5]])
    target = torch.tensor([[1, 1, -1]])

    loss_f = nn.HingeEmbeddingLoss(margin=1, reduction='none')

    loss = loss_f(inputs, target)

    print("Hinge Embedding Loss", loss)

Output:

Hinge Embedding Loss tensor([[1.0000, 0.8000, 0.5000]])
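
The y = 1 elements pass through unchanged (1.0 and 0.8); the y = -1 element becomes max(0, margin - x):

print(max(0, 1 - 0.5))  # 0.5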

17. nn.CosineEmbeddingLoss

nn.CosineEmbeddingLoss measures similarity via the cosine of the angle between the two inputs: loss = 1 - cos(x1, x2) if y = 1, and max(0, cos(x1, x2) - margin) if y = -1.
Test code:

# ---------------------------------------------- 17 Cosine Embedding Loss -----------------------------------------
# flag = 0
flag = 1
if flag:

    x1 = torch.tensor([[0.3, 0.5, 0.7], [0.3, 0.5, 0.7]])
    x2 = torch.tensor([[0.1, 0.3, 0.5], [0.1, 0.3, 0.5]])

    target = torch.tensor([[1, -1]], dtype=torch.float)

    loss_f = nn.CosineEmbeddingLoss(margin=0., reduction='none')

    loss = loss_f(x1, x2, target)

    print("Cosine Embedding Loss", loss)

Output:

Cosine Embedding Loss tensor([[0.0167, 0.9833]])
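
Both rows share the same cosine similarity; re-using x1 and x2 from the snippet above:

cos = F.cosine_similarity(x1, x2, dim=1)  # tensor([0.9833, 0.9833])
print(1 - cos[0])                         # 0.0167 for y = 1
print(torch.clamp(cos[1] - 0.0, min=0))   # 0.9833 for y = -1 (margin = 0)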

18. nn.CTCLoss

nn.CTCLoss computes the Connectionist Temporal Classification loss between a log-probability input sequence and a target sequence, summing over all possible alignments; class 0 is reserved for the blank symbol.
Test code:

# ---------------------------------------------- 18 CTC Loss -----------------------------------------
# flag = 0
flag = 1
if flag:
    T = 50      # Input sequence length
    C = 20      # Number of classes (including blank)
    N = 16      # Batch size
    S = 30      # Target sequence length of longest target in batch
    S_min = 10  # Minimum target length, for demonstration purposes

    # Initialize a random batch of input vectors of shape (T, N, C)
    inputs = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_()

    # Initialize random batch of targets (0 = blank, 1:C = classes)
    target = torch.randint(low=1, high=C, size=(N, S), dtype=torch.long)

    input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long)
    target_lengths = torch.randint(low=S_min, high=S, size=(N,), dtype=torch.long)

    ctc_loss = nn.CTCLoss()
    loss = ctc_loss(inputs, target, input_lengths, target_lengths)

    print("CTC loss: ", loss)

Output:

CTC loss: tensor(7.5385, grad_fn=<MeanBackward0>)
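
Note that with the default reduction='mean', each sequence's loss is divided by its target length before averaging over the batch; input_lengths and target_lengths tell the loss how much of each padded row is valid, so variable-length batches need no explicit masking.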

IV. Summary

This section walked through the remaining 14 loss functions in torch.nn, from the element-wise regression losses (L1Loss, MSELoss, SmoothL1Loss) through the margin-based ranking and embedding losses to CTCLoss for sequence labelling.
