PyTorch: Loss Functions (Part 1)

1. The Concept of a Loss Function

Loss function: measures the difference between the model output and the ground-truth label.
[Figure: fitting process of a univariate linear regression model]
The figure above shows the fitting process of a univariate linear regression: the green points are the training samples and the blue line is the trained model. The model does not fit every data point perfectly, i.e., not every point lies on the line, so each data point produces a loss.

Let us now distinguish the loss function, the cost function, and the objective function:

Loss Function: $Loss=f(\hat{y}, y)$, the loss computed for a single sample;

Cost Function: $cost=\frac{1}{N} \sum_{i}^{N} f(\hat{y}_{i}, y_{i})$, the average loss over the samples of the whole training set;

Objective Function
The objective function is a broader concept; in machine learning it consists of the Cost plus a Regularization term: $Obj=Cost+Regularization$
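
To make the three concepts concrete, here is a minimal sketch (the tensors, the 0.01 weight-decay factor, and the use of MSE are all made up for illustration) that computes a per-sample Loss, the Cost over the batch, and an Objective that adds an L2 regularization term:

import torch
import torch.nn as nn

y_hat = torch.tensor([2.5, 0.8, 3.1])          # model outputs (made-up values)
y     = torch.tensor([3.0, 1.0, 3.0])          # ground-truth labels
w     = torch.tensor([0.5, -0.2])              # model parameters (made up)

loss_fn = nn.MSELoss(reduction='none')
loss = loss_fn(y_hat, y)                       # Loss: one value per sample
cost = loss.mean()                             # Cost: average over the samples
reg  = 0.01 * (w ** 2).sum()                   # Regularization: L2 penalty
obj  = cost + reg                              # Objective = Cost + Regularization

print(loss, cost, obj)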

Now let us look at the Loss classes in PyTorch:

class _Loss(Module):
    def __init__(self, size_average=None, reduce=None, reduction='mean'):
        super(_Loss, self).__init__()
        if size_average is not None or reduce is not None:
            self.reduction = _Reduction.legacy_get_string(size_average, reduce)
        else:
            self.reduction = reduction

The loss classes in PyTorch still inherit from Module, so a loss is essentially a network layer. The __init__() function has three parameters; size_average and reduce are deprecated, and their functionality has been folded into reduction.
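
Since a loss is just a Module, a custom loss can be written exactly like a network layer. The following is a minimal sketch (not PyTorch source, just an illustration of the idea) of a mean-absolute-error loss implemented as a Module with a reduction parameter:

import torch
import torch.nn as nn

class MyL1Loss(nn.Module):
    def __init__(self, reduction='mean'):
        super(MyL1Loss, self).__init__()
        self.reduction = reduction

    def forward(self, output, target):
        loss = (output - target).abs()          # element-wise loss
        if self.reduction == 'mean':
            return loss.mean()
        elif self.reduction == 'sum':
            return loss.sum()
        return loss                             # 'none': per-element losses

loss_fn = MyL1Loss()
print(loss_fn(torch.tensor([1.0, 2.0]), torch.tensor([1.5, 2.5])))  # tensor(0.5000)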

Next, let us look at how the cross-entropy loss function is used and how it works.

2. Cross-Entropy Loss Function

Here we use the RMB binary-classification code to observe how the cross-entropy loss function is used:

import os
import random
import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
import torchvision.transforms as transforms
import torch.optim as optim
from PIL import Image
from matplotlib import pyplot as plt
from model.lenet import LeNet
from toolss.my_dataset import RMBDataset
from toolss.common_tools import transform_invert, set_seed

set_seed(1)  # set random seed
rmb_label = {"1": 0, "100": 1}

# parameter settings
MAX_EPOCH = 10
BATCH_SIZE = 16
LR = 0.01
log_interval = 10
val_interval = 1

# ============================ step 1/5: data ============================

split_dir = os.path.join("F:/Pytorch框架班/Pytorch-Camp-master/代码合集/rmb_split")
train_dir = os.path.join(split_dir, "train")
valid_dir = os.path.join(split_dir, "valid")

norm_mean = [0.485, 0.456, 0.406]
norm_std = [0.229, 0.224, 0.225]

train_transform = transforms.Compose([
    transforms.Resize((32, 32)),
    transforms.RandomCrop(32, padding=4),
    transforms.RandomGrayscale(p=0.8),
    transforms.ToTensor(),
    transforms.Normalize(norm_mean, norm_std),
])

valid_transform = transforms.Compose([
    transforms.Resize((32, 32)),
    transforms.ToTensor(),
    transforms.Normalize(norm_mean, norm_std),
])

# build MyDataset instances
train_data = RMBDataset(data_dir=train_dir, transform=train_transform)
valid_data = RMBDataset(data_dir=valid_dir, transform=valid_transform)

# build DataLoader
train_loader = DataLoader(dataset=train_data, batch_size=BATCH_SIZE, shuffle=True)
valid_loader = DataLoader(dataset=valid_data, batch_size=BATCH_SIZE)

# ============================ step 2/5: model ============================

net = LeNet(classes=2)
net.initialize_weights()

# ============================ step 3/5: loss function ============================
loss_function = nn.CrossEntropyLoss()                                                   # choose the loss function

# ============================ step 4/5: optimizer ============================
optimizer = optim.SGD(net.parameters(), lr=LR, momentum=0.9)                        # choose the optimizer
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)     # set the learning-rate decay policy

# ============================ step 5/5: training ============================
train_curve = list()
valid_curve = list()

for epoch in range(MAX_EPOCH):

    loss_mean = 0.
    correct = 0.
    total = 0.

    net.train()
    for i, data in enumerate(train_loader):

        # forward
        inputs, labels = data
        outputs = net(inputs)

        # backward
        optimizer.zero_grad()
        loss = loss_function(outputs, labels)
        loss.backward()

        # update weights
        optimizer.step()

        # collect classification statistics
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).squeeze().sum().numpy()

        # print training information
        loss_mean += loss.item()
        train_curve.append(loss.item())
        if (i+1) % log_interval == 0:
            loss_mean = loss_mean / log_interval
            print("Training:Epoch[{:0>3}/{:0>3}] Iteration[{:0>3}/{:0>3}] Loss: {:.4f} Acc:{:.2%}".format(
                epoch, MAX_EPOCH, i+1, len(train_loader), loss_mean, correct / total))
            loss_mean = 0.

    scheduler.step()  # update the learning rate

    # validate the model
    if (epoch+1) % val_interval == 0:

        correct_val = 0.
        total_val = 0.
        loss_val = 0.
        net.eval()
        with torch.no_grad():
            for j, data in enumerate(valid_loader):
                inputs, labels = data
                outputs = net(inputs)
                loss = loss_function(outputs, labels)

                _, predicted = torch.max(outputs.data, 1)
                total_val += labels.size(0)
                correct_val += (predicted == labels).squeeze().sum().numpy()

                loss_val += loss.item()

            valid_curve.append(loss_val)
            print("Valid:\t Epoch[{:0>3}/{:0>3}] Iteration[{:0>3}/{:0>3}] Loss: {:.4f} Acc:{:.2%}".format(
                epoch, MAX_EPOCH, j+1, len(valid_loader), loss_val, correct_val / total_val))


train_x = range(len(train_curve))
train_y = train_curve

train_iters = len(train_loader)
valid_x = np.arange(1, len(valid_curve)+1) * train_iters*val_interval # valid_curve records epoch-level loss, so convert the record points to iterations
valid_y = valid_curve

plt.plot(train_x, train_y, label='Train')
plt.plot(valid_x, valid_y, label='Valid')

plt.legend(loc='upper right')
plt.ylabel('loss value')
plt.xlabel('Iteration')
plt.show()

# ============================ inference ============================

BASE_DIR = os.path.dirname(os.path.abspath(__file__))
test_dir = os.path.join(BASE_DIR, "test_data")

test_data = RMBDataset(data_dir=test_dir, transform=valid_transform)
valid_loader = DataLoader(dataset=test_data, batch_size=1)

for i, data in enumerate(valid_loader):
    # forward
    inputs, labels = data
    outputs = net(inputs)
    _, predicted = torch.max(outputs.data, 1)

    rmb = 1 if predicted.numpy()[0] == 0 else 100

    img_tensor = inputs[0, ...]  # C H W
    img = transform_invert(img_tensor, train_transform)
    plt.imshow(img)
    plt.title("LeNet got {} Yuan".format(rmb))
    plt.show()
    plt.pause(0.5)
    plt.close()

Let us analyze what loss_function = nn.CrossEntropyLoss() actually does by stepping into it with the debugger.

class CrossEntropyLoss(_WeightedLoss):
    __constants__ = ['weight', 'ignore_index', 'reduction']

    def __init__(self, weight=None, size_average=None, ignore_index=-100,
                 reduce=None, reduction='mean'):
        super(CrossEntropyLoss, self).__init__(weight, size_average, reduce, reduction)
        self.ignore_index = ignore_index

    def forward(self, input, target):
        return F.cross_entropy(input, target, weight=self.weight,
                               ignore_index=self.ignore_index, reduction=self.reduction)

From this code we can see that CrossEntropyLoss inherits from _WeightedLoss. Stepping into:

super(CrossEntropyLoss, self).__init__(weight, size_average, reduce, reduction)

we enter

class _WeightedLoss(_Loss):
    def __init__(self, weight=None, size_average=None, reduce=None, reduction='mean'):
        super(_WeightedLoss, self).__init__(size_average, reduce, reduction)
        self.register_buffer('weight', weight)

We can see that _WeightedLoss in turn inherits from the _Loss class. Stepping into:

super(_WeightedLoss, self).__init__(size_average, reduce, reduction)

we enter

class _Loss(Module):
    def __init__(self, size_average=None, reduce=None, reduction='mean'):
        super(_Loss, self).__init__()
        if size_average is not None or reduce is not None:
            self.reduction = _Reduction.legacy_get_string(size_average, reduce)
        else:
            self.reduction = reduction

We have now reached the _Loss class, which inherits from Module. Thus, executing:

loss_function = nn.CrossEntropyLoss()

constructs a loss_function; from this construction process we know that nn.CrossEntropyLoss is a Module. Now debug the program to:

loss = loss_function(outputs, labels)

Since loss_function is a Module, passing it outputs and labels actually executes a forward() call (every model module must implement a forward() function). Step into the call to see how it is implemented:

    def __call__(self, *input, **kwargs):
        for hook in self._forward_pre_hooks.values():
            result = hook(self, input)
            if result is not None:
                if not isinstance(result, tuple):
                    result = (result,)
                input = result
        if torch._C._get_tracing_state():
            result = self._slow_forward(*input, **kwargs)
        else:
            result = self.forward(*input, **kwargs)
        for hook in self._forward_hooks.values():
            hook_result = hook(self, input, result)
            if hook_result is not None:
                result = hook_result
        if len(self._backward_hooks) > 0:
            var = result
            while not isinstance(var, torch.Tensor):
                if isinstance(var, dict):
                    var = next((v for v in var.values() if isinstance(v, torch.Tensor)))
                else:
                    var = var[0]
            grad_fn = var.grad_fn
            if grad_fn is not None:
                for hook in self._backward_hooks.values():
                    wrapper = functools.partial(hook, self)
                    functools.update_wrapper(wrapper, hook)
                    grad_fn.register_hook(wrapper)
        return result

We focus on this line of the program:

result = self.forward(*input, **kwargs)

Stepping into it, we reach:

    def forward(self, input, target):
        return F.cross_entropy(input, target, weight=self.weight,
                               ignore_index=self.ignore_index, reduction=self.reduction)

We can see that it calls cross_entropy(). Stepping further, we enter the cross_entropy() function in torch.nn.functional, whose relevant code is:

    if size_average is not None or reduce is not None:
        reduction = _Reduction.legacy_get_string(size_average, reduce)
    return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)

This code first resolves the reduction mode and then computes nll_loss on log_softmax(input), which is how the cross-entropy is actually evaluated.
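
As a sanity check on this call chain, the snippet below (with made-up logits) compares F.cross_entropy against the explicit F.nll_loss(F.log_softmax(...)) composition; the two results should match:

import torch
import torch.nn.functional as F

logits = torch.tensor([[1., 2.], [1., 3.], [1., 3.]])
target = torch.tensor([0, 1, 1])

loss_ce  = F.cross_entropy(logits, target)                       # built-in cross-entropy
loss_nll = F.nll_loss(F.log_softmax(logits, dim=1), target)      # explicit log_softmax + nll_loss

print(loss_ce, loss_nll)   # both should be tensor(0.5224)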

2.1 nn.CrossEntropyLoss

Function: combines nn.LogSoftmax() and nn.NLLLoss() to compute the cross-entropy;
Main parameters:

  • weight: sets a loss weight for each class;
  • ignore_index: ignores a given class;
  • reduction: reduction mode, one of none/sum/mean; none computes the loss per element, sum sums all elements and returns a scalar, mean returns the weighted average as a scalar;
nn.CrossEntropyLoss(weight=None,
					size_average=None,
					ignore_index=-100,
					reduce=None,
					reduction='mean')

The cross-entropy loss is commonly used in classification tasks. In classification, the model output is interpreted as a probability distribution over the classes, and cross-entropy measures the difference between two probability distributions, so a lower cross-entropy means the two distributions are closer.

Why does a lower cross-entropy mean the two distributions are closer? This follows from the relationship between cross-entropy and relative entropy, which in turn involves information entropy. Let us analyze the relationship among information entropy, relative entropy, and cross-entropy.

Cross-entropy = information entropy + relative entropy

First, the concept of entropy. Entropy here means information entropy, which describes the uncertainty of an event: the more uncertain the event, the larger the entropy. Entropy is the expectation of self-information.

Self-information measures the uncertainty of a single event: $I(x)=-\log[p(x)]$, where $p(x)$ is the probability of the event. Entropy describes the uncertainty of the whole probability distribution and is the expectation of self-information: $\mathrm{H}(P)=E_{x \sim p}[I(x)]=-\sum_{i}^{N} P(x_{i}) \log P(x_{i})$
The figure below shows the information entropy of a Bernoulli distribution:
[Figure: information entropy of a Bernoulli distribution as a function of the event probability]
From the figure, the entropy is largest when the event probability is 0.5, which is when the uncertainty is greatest; the maximum entropy is about 0.69 (i.e., ln 2).
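
This claim can be checked numerically; below is a small NumPy sketch (the probability grid is arbitrary) that computes the Bernoulli entropy and finds its maximum:

import numpy as np

p = np.linspace(0.01, 0.99, 99)
entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))   # entropy of a Bernoulli(p), in nats

print(p[np.argmax(entropy)], entropy.max())            # ~0.5  ~0.6931 (= ln 2)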

Relative entropy is also called the KL divergence. It measures the difference between two distributions, i.e., a kind of distance between them, but it is not a true distance function: a distance function must be symmetric (the distance from P to Q equals the distance from Q to P), and relative entropy is not. Its formula is $D_{KL}(P, Q)=E_{x \sim p}\left[\log \frac{P(x)}{Q(x)}\right]$, where P is the true distribution and Q is the distribution output by the model; Q is used to approximate P, which is why relative entropy is not symmetric.

Now look at the cross-entropy formula: $\mathrm{H}(P, Q)=-\sum_{i=1}^{N} P(x_{i}) \log Q(x_{i})$. Cross-entropy also measures the similarity of the two probability distributions P and Q.

Let us expand the relative-entropy formula and observe its relationship with information entropy and cross-entropy: $D_{KL}(P, Q)=E_{x \sim p}\left[\log \frac{P(x)}{Q(x)}\right]=E_{x \sim p}[\log P(x)-\log Q(x)]=\sum_{i=1}^{N} P(x_{i})[\log P(x_{i})-\log Q(x_{i})]=\sum_{i=1}^{N} P(x_{i}) \log P(x_{i})-\sum_{i=1}^{N} P(x_{i}) \log Q(x_{i})$. The second term is the cross-entropy and the first term is the negative of the information entropy, so relative entropy can be written as $D_{KL}(P, Q)=H(P, Q)-H(P)$, and rearranging gives the cross-entropy $\mathrm{H}(P, Q)=D_{KL}(P, Q)+\mathrm{H}(P)$. Here P is the true distribution, i.e., the distribution of the training samples, and Q is the distribution output by the model. Since the training set is fixed, $H(P)$ is a constant, so minimizing the cross-entropy is equivalent to minimizing the relative entropy.
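
The identity can also be verified numerically; here is a small sketch with two made-up discrete distributions P and Q:

import numpy as np

P = np.array([0.7, 0.2, 0.1])          # "true" distribution (made up)
Q = np.array([0.5, 0.3, 0.2])          # "model" distribution (made up)

H_P  = -np.sum(P * np.log(P))          # information entropy H(P)
H_PQ = -np.sum(P * np.log(Q))          # cross-entropy H(P, Q)
D_KL = np.sum(P * np.log(P / Q))       # relative entropy D_KL(P, Q)

print(np.isclose(H_PQ, D_KL + H_P))    # True: H(P, Q) = D_KL(P, Q) + H(P)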

Having gone over information entropy, relative entropy, and cross-entropy, we can now look at nn.CrossEntropyLoss in PyTorch. The cross-entropy is $\mathrm{H}(P, Q)=-\sum_{i=1}^{N} P(x_{i}) \log Q(x_{i})$, and for a single sample PyTorch computes $\operatorname{loss}(x, class)=-\log\left(\frac{\exp(x[class])}{\sum_{j} \exp(x[j])}\right)=-x[class]+\log\left(\sum_{j} \exp(x[j])\right)$, where x is the vector of raw model outputs (logits) for the sample and class is the index of its true class. The fraction inside the log is the softmax operation, which normalizes the outputs into probabilities that sum to 1.

Compared with the cross-entropy definition above, the factor $P(x_i)$ is simply 1 here: the label of the sample is fixed (a one-hot distribution), so $P(x_i)$ is a constant. And since only one sample is computed, there is no summation over samples in the formula.

Now the main parameters of nn.CrossEntropyLoss(). The first is weight, which sets a loss weight for each class; the weighted cross-entropy is $\operatorname{loss}(x, class)=weight[class]\left(-x[class]+\log\left(\sum_{j} \exp(x[j])\right)\right)$. For example, to make the model pay more attention to class 0, you can set its weight to 1.2; that is what weight does.

The second parameter is ignore_index, which specifies a class whose loss is not computed. For example, in a 1000-class classification task, to ignore the loss of class 999 you would set ignore_index=999.
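
Here is a small sketch of ignore_index (the logits and targets are made up): samples whose target equals the ignored index contribute nothing to the loss:

import torch
import torch.nn as nn

logits = torch.tensor([[1., 2.], [1., 3.], [1., 3.]])
target = torch.tensor([0, 1, 1])

loss_fn = nn.CrossEntropyLoss(ignore_index=1, reduction='none')
print(loss_fn(logits, target))   # tensor([1.3133, 0.0000, 0.0000]): label-1 samples are ignored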

The third parameter is reduction, which specifies the reduction mode. There are three modes, none/sum/mean: none computes the loss element by element, sum sums the losses of all elements and returns a scalar, and mean returns the weighted average of the element losses as a scalar. In mean mode, if weight is not set, it is a plain average.

Let us now study the parameters of CrossEntropyLoss through code:

import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np

# fake data
inputs = torch.tensor([[1, 2], [1, 3], [1, 3]], dtype=torch.float)
target = torch.tensor([0, 1, 1], dtype=torch.long)

# ----------------------------------- CrossEntropy loss: reduction -----------------------------------
# flag = 0
flag = 1
if flag:
    # def loss function
    loss_f_none = nn.CrossEntropyLoss(weight=None, reduction='none')
    loss_f_sum = nn.CrossEntropyLoss(weight=None, reduction='sum')
    loss_f_mean = nn.CrossEntropyLoss(weight=None, reduction='mean')

    # forward
    loss_none = loss_f_none(inputs, target)
    loss_sum = loss_f_sum(inputs, target)
    loss_mean = loss_f_mean(inputs, target)

    # view
    print("Cross Entropy Loss:\n ", loss_none, loss_sum, loss_mean)

The output of the code is:

Cross Entropy Loss:
  tensor([1.3133, 0.1269, 0.1269]) tensor(1.5671) tensor(0.5224)

The tensor([1.3133, 0.1269, 0.1269]) in the output comes from loss_f_none = nn.CrossEntropyLoss(weight=None, reduction='none');
the tensor(1.5671) comes from loss_f_sum = nn.CrossEntropyLoss(weight=None, reduction='sum');
the tensor(0.5224) comes from loss_f_mean = nn.CrossEntropyLoss(weight=None, reduction='mean').

Next, let us verify the formula by hand, computing only the loss of the first sample:

import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np

# fake data
inputs = torch.tensor([[1, 2], [1, 3], [1, 3]], dtype=torch.float)
target = torch.tensor([0, 1, 1], dtype=torch.long)

# ----------------------------------- CrossEntropy loss: reduction -----------------------------------
# flag = 0
flag = 1
if flag:
    # def loss function
    loss_f_none = nn.CrossEntropyLoss(weight=None, reduction='none')
    loss_f_sum = nn.CrossEntropyLoss(weight=None, reduction='sum')
    loss_f_mean = nn.CrossEntropyLoss(weight=None, reduction='mean')

    # forward
    loss_none = loss_f_none(inputs, target)
    loss_sum = loss_f_sum(inputs, target)
    loss_mean = loss_f_mean(inputs, target)

    # view
    print("Cross Entropy Loss:\n ", loss_none, loss_sum, loss_mean)

# --------------------------------- compute by hand
# flag = 0
flag = 1
if flag:

    idx = 0

    input_1 = inputs.detach().numpy()[idx]      # [1, 2]
    target_1 = target.numpy()[idx]              # [0]

    # first term: x[class]
    x_class = input_1[target_1]

    # second term: log(sum(exp(x[j])))
    sigma_exp_x = np.sum(list(map(np.exp, input_1)))
    log_sigma_exp_x = np.log(sigma_exp_x)

    # loss of the sample
    loss_1 = -x_class + log_sigma_exp_x

    print("第一个样本loss为: ", loss_1)

Its output is:

Cross Entropy Loss:
  tensor([1.3133, 0.1269, 0.1269]) tensor(1.5671) tensor(0.5224)
loss of the first sample:  1.3132617

The code confirms that the formula is correct.

Next, observe the effect of the weight parameter through code; you must provide one weight per class:

import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np

# fake data
inputs = torch.tensor([[1, 2], [1, 3], [1, 3]], dtype=torch.float)
target = torch.tensor([0, 1, 1], dtype=torch.long)

# ----------------------------------- CrossEntropy loss: reduction -----------------------------------
# flag = 0
flag = 1
if flag:
    # def loss function
    loss_f_none = nn.CrossEntropyLoss(weight=None, reduction='none')
    loss_f_sum = nn.CrossEntropyLoss(weight=None, reduction='sum')
    loss_f_mean = nn.CrossEntropyLoss(weight=None, reduction='mean')

    # forward
    loss_none = loss_f_none(inputs, target)
    loss_sum = loss_f_sum(inputs, target)
    loss_mean = loss_f_mean(inputs, target)

    # view
    print("Cross Entropy Loss:\n ", loss_none, loss_sum, loss_mean)

# ----------------------------------- weight -----------------------------------
# flag = 0
flag = 1
if flag:
    # def loss function
    weights = torch.tensor([1, 2], dtype=torch.float)  # weight 1 for class 0 and weight 2 for class 1; a weight of 1 keeps the original loss unchanged
    # weights = torch.tensor([0.7, 0.3], dtype=torch.float)

    loss_f_none_w = nn.CrossEntropyLoss(weight=weights, reduction='none')
    loss_f_sum = nn.CrossEntropyLoss(weight=weights, reduction='sum')
    loss_f_mean = nn.CrossEntropyLoss(weight=weights, reduction='mean')

    # forward
    loss_none_w = loss_f_none_w(inputs, target)
    loss_sum = loss_f_sum(inputs, target)
    loss_mean = loss_f_mean(inputs, target)

    # view
    print("\nweights: ", weights)
    print(loss_none_w, loss_sum, loss_mean)

Compare the loss outputs with and without weights:

Cross Entropy Loss:
  tensor([1.3133, 0.1269, 0.1269]) tensor(1.5671) tensor(0.5224)

weights:  tensor([1., 2.])
tensor([1.3133, 0.2539, 0.2539]) tensor(1.8210) tensor(0.3642)

We can see that the loss of each sample with label 1 has doubled.
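
Note also how mean is computed when weight is set: with reduction='mean', the weighted sum is divided by the sum of the weights of the targets rather than by the number of samples, so here $1.8210 / (1 + 2 + 2) = 0.3642$, not $1.8210 / 3$.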

2.2 nn.NLLLoss

Function: implements the negative sign in the negative log-likelihood function; put simply, it negates the input;
Main parameters:

  • weight: sets a loss weight for each class;
  • ignore_index: ignores a given class;
  • reduction: reduction mode, one of none/sum/mean;
nn.NLLLoss(weight=None,
			size_average=None,
			ignore_index=-100,
			reduce=None,
			reduction='mean')

The formula for nn.NLLLoss is $l_{n}=-w_{y_{n}} x_{n, y_{n}}$ with $\ell(x, y)=L=\{l_{1}, \ldots, l_{N}\}$, where $w_{y_n}$ is the weight set by the weight argument (1 by default when weight=None) and $x$ is the output of the input neurons; its parameters are the same as those of nn.CrossEntropyLoss described above.

Let us see what nn.NLLLoss does through code:

weights = torch.tensor([1, 1], dtype=torch.float)

loss_f_none_w = nn.NLLLoss(weight=weights, reduction='none')
loss_f_sum = nn.NLLLoss(weight=weights, reduction='sum')
loss_f_mean = nn.NLLLoss(weight=weights, reduction='mean')

# forward
loss_none_w = loss_f_none_w(inputs, target)
loss_sum = loss_f_sum(inputs, target)
loss_mean = loss_f_mean(inputs, target)

# view
print("\nweights: ", weights)
print("NLL Loss", loss_none_w, loss_sum, loss_mean)

Running the code gives:

weights:  tensor([1., 1.])
NLL Loss tensor([-1., -3., -3.]) tensor(-7.) tensor(-2.3333)

From the output, each loss value is simply the negative of the input value at the target class.
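
In practice nn.NLLLoss is applied to log-probabilities (the output of nn.LogSoftmax) rather than to raw scores. The small sketch below (reusing the same made-up inputs and target) shows that NLLLoss on log_softmax reproduces CrossEntropyLoss:

import torch
import torch.nn as nn
import torch.nn.functional as F

inputs = torch.tensor([[1., 2.], [1., 3.], [1., 3.]])
target = torch.tensor([0, 1, 1])

log_prob = F.log_softmax(inputs, dim=1)                          # convert raw scores to log-probabilities
print(nn.NLLLoss(reduction='none')(log_prob, target))            # tensor([1.3133, 0.1269, 0.1269])
print(nn.CrossEntropyLoss(reduction='none')(inputs, target))     # same values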

2.3 nn.BCELoss

Function: binary cross-entropy;
Note: the input values must lie in [0, 1];
Main parameters:

  • weight: sets a loss weight for each class;
  • ignore_index: ignores a given class;
  • reduction: reduction mode, one of none/sum/mean;
nn.BCELoss(weight=None,
			size_average=None,
			reduce=None,
			reduction='mean')

Its formula is $l_{n}=-w_{n}\left[y_{n} \cdot \log x_{n}+(1-y_{n}) \cdot \log(1-x_{n})\right]$, where $y_n$ is either 0 or 1.

Observe nn.BCELoss through code:

inputs = torch.tensor([[1, 2], [2, 2], [3, 4], [4, 5]], dtype=torch.float)
target = torch.tensor([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=torch.float)

target_bce = target

# inputs must be probabilities in [0, 1], so apply sigmoid first
inputs = torch.sigmoid(inputs)

weights = torch.tensor([1, 1], dtype=torch.float)

loss_f_none_w = nn.BCELoss(weight=weights, reduction='none')
loss_f_sum = nn.BCELoss(weight=weights, reduction='sum')
loss_f_mean = nn.BCELoss(weight=weights, reduction='mean')

# forward
loss_none_w = loss_f_none_w(inputs, target_bce)
loss_sum = loss_f_sum(inputs, target_bce)
loss_mean = loss_f_mean(inputs, target_bce)

# view
print("\nweights: ", weights)
print("BCE Loss", loss_none_w, loss_sum, loss_mean)

The output of the code is:

weights:  tensor([1., 1.])
BCE Loss tensor([[0.3133, 2.1269],
        [0.1269, 2.1269],
        [3.0486, 0.0181],
        [4.0181, 0.0067]]) tensor(11.7856) tensor(1.4732)
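
As with the cross-entropy earlier, the first element can be checked by hand; a quick sketch verifying position [0][0] (input 1, target 1):

import numpy as np

x, y = 1., 1.                                    # input and target of element [0][0]
sigma = 1 / (1 + np.exp(-x))                     # sigmoid(1) ≈ 0.7311
loss_00 = -(y * np.log(sigma) + (1 - y) * np.log(1 - sigma))
print(loss_00)                                   # ≈ 0.3133, matching the output above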

2.4 nn.BCEWithLogitsLoss

Function: combines Sigmoid with binary cross-entropy;
Note: do not add a sigmoid layer at the end of the network;
Main parameters:

  • pos_weight: weight of the positive samples;
  • weight: sets a loss weight for each class;
  • ignore_index: ignores a given class;
  • reduction: reduction mode, one of none/sum/mean;
nn.BCEWithLogitsLoss(weight=None,
					size_average=None,
					reduce=None,
					reduction='mean',
					pos_weight=None)

The formula for nn.BCEWithLogitsLoss is $l_{n}=-w_{n}\left[y_{n} \cdot \log \sigma(x_{n})+(1-y_{n}) \cdot \log(1-\sigma(x_{n}))\right]$.

The pos_weight parameter of nn.BCEWithLogitsLoss is used to balance positive and negative samples: the loss of each positive sample is multiplied by pos_weight. For example, if there are 100 positive samples and 300 negative samples, the positive-to-negative ratio is 1:3, so pos_weight can be set to 3; each positive sample's loss is then multiplied by 3, which is equivalent to having 300 positive and 300 negative samples, so the classes are balanced.

Now observe nn.BCEWithLogitsLoss through code:

inputs = torch.tensor([[1, 2], [2, 2], [3, 4], [4, 5]], dtype=torch.float)
target = torch.tensor([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=torch.float)

target_bce = target

# inputs = torch.sigmoid(inputs)

weights = torch.tensor([1, 1], dtype=torch.float)

loss_f_none_w = nn.BCEWithLogitsLoss(weight=weights, reduction='none')
loss_f_sum = nn.BCEWithLogitsLoss(weight=weights, reduction='sum')
loss_f_mean = nn.BCEWithLogitsLoss(weight=weights, reduction='mean')

# forward
loss_none_w = loss_f_none_w(inputs, target_bce)
loss_sum = loss_f_sum(inputs, target_bce)
loss_mean = loss_f_mean(inputs, target_bce)

# view
print("\nweights: ", weights)
print(loss_none_w, loss_sum, loss_mean)

Its output is:

weights:  tensor([1., 1.])
tensor([[0.3133, 2.1269],
        [0.1269, 2.1269],
        [3.0486, 0.0181],
        [4.0181, 0.0067]]) tensor(11.7856) tensor(1.4732)
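
The output is identical to the nn.BCELoss result above, where the sigmoid was applied manually, which confirms that BCEWithLogitsLoss simply folds the sigmoid into the loss. To see pos_weight in action, here is a small sketch (same made-up inputs and targets, with an illustrative pos_weight of 3): the loss of every element whose target is 1 is multiplied by 3, while the others are unchanged:

import torch
import torch.nn as nn

inputs = torch.tensor([[1., 2.], [2., 2.], [3., 4.], [4., 5.]])
target = torch.tensor([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])

pos_w = torch.tensor([3., 3.])                       # one value per output column
loss_fn = nn.BCEWithLogitsLoss(pos_weight=pos_w, reduction='none')
print(loss_fn(inputs, target))
# positive entries are scaled, e.g. 0.3133 -> 0.9398; negative entries stay the same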

That concludes this summary of loss functions.
