PyTorch Loss Function Tutorial (1): torch.nn.CrossEntropyLoss

PyTorch Loss Function Tutorial (1): torch.nn.CrossEntropyLoss
PyTorch Loss Function Tutorial (2): torch.nn.NLLLoss
PyTorch Loss Function Tutorial (3): torch.nn.BCELoss

1. Cross-entropy loss
1.1 torch.nn.CrossEntropyLoss
Pitfall: torch.nn.CrossEntropyLoss takes the model's raw outputs (logits) as input, not softmax probabilities! It applies log-softmax internally.
(1) Formula
The loss for each of the N labeled samples:
$$\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_{y_n} \log \frac{\exp(x_{n,y_n})}{\sum_{c=1}^{C} \exp(x_{n,c})} \cdot \mathbb{1}\{y_n \neq \text{ignore\_index}\}$$
The N per-sample losses are then summed or averaged (averaging is the default):
$$\ell(x, y) = \begin{cases} \sum_{n=1}^{N} \dfrac{1}{\sum_{n=1}^{N} w_{y_n} \cdot \mathbb{1}\{y_n \neq \text{ignore\_index}\}}\, l_n, & \text{if reduction} = \text{'mean';} \\[1ex] \sum_{n=1}^{N} l_n, & \text{if reduction} = \text{'sum'.} \end{cases}$$
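The `weight` and `reduction` arguments interact exactly as the formula says: with `reduction='mean'`, PyTorch divides by the total weight of the target classes, not by N. A minimal sketch verifying this (the class weights here are arbitrary illustration values):

import torch
import torch.nn as nn

logits = torch.randn(4, 3)            # N=4 samples, C=3 classes
target = torch.tensor([0, 2, 1, 2])
w = torch.tensor([1.0, 2.0, 0.5])     # per-class weights w_c (arbitrary)

loss_none = nn.CrossEntropyLoss(weight=w, reduction='none')(logits, target)
loss_sum  = nn.CrossEntropyLoss(weight=w, reduction='sum')(logits, target)
loss_mean = nn.CrossEntropyLoss(weight=w, reduction='mean')(logits, target)

print(torch.allclose(loss_sum, loss_none.sum()))                     # True
# 'mean' divides by the sum of the target weights, not by N
print(torch.allclose(loss_mean, loss_none.sum() / w[target].sum()))  # True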

(2) Expected inputs in PyTorch
1) The model's raw predictions (logits): shape (n_samples, n_class), dtype torch.float
2) The ground-truth labels: shape (n_samples,), dtype torch.long

(3) Example 1

    >>> loss = nn.CrossEntropyLoss()
    >>> input = torch.randn(3, 5, requires_grad=True)  # model's raw predictions (logits), floats
    >>> target = torch.empty(3, dtype=torch.long).random_(5)  # ground-truth labels, integers
    >>> output = loss(input, target)
    >>> output.backward()

In [30]: input  # 3 samples, 5 classes each
Out[30]: 
tensor([[-1.3361, -0.1120,  0.9913, -0.1192,  0.3797],
        [ 1.7727, -1.7945,  0.4141,  0.1564,  0.9800],
        [ 0.1120,  0.1938,  0.4124,  1.8057,  1.5692]], requires_grad=True)

In [32]: target  # labels of the 3 samples
Out[32]: tensor([2, 4, 4])
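To see the "raw logits" point concretely: CrossEntropyLoss is log-softmax followed by NLLLoss. A small check, reusing input, target, and output from the session above:

import torch.nn.functional as F

# CrossEntropyLoss == log_softmax + NLLLoss applied to the raw logits
manual = F.nll_loss(F.log_softmax(input, dim=1), target)
print(torch.allclose(output, manual))  # True

# equivalently, straight from the formula: mean of -log p over the true classes
log_p = F.log_softmax(input, dim=1)
print((-log_p[torch.arange(3), target]).mean())  # same value as output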

(4) Example 2
load data

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

iris = load_iris()
X = iris['data']
y = iris['target']
names = iris['target_names']
feature_names = iris['feature_names']

# Scale the data to zero mean and unit variance,
# which is important for convergence of the neural network
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# Split the data set into training and testing
X_train, X_test, y_train, y_test = train_test_split(
    X_scaled, y, test_size=0.2, random_state=2)
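A quick sanity check on the split: iris has 150 samples, 4 features, and 3 classes, so an 80/20 split should give 120/30.

print(X_train.shape, X_test.shape)  # (120, 4) (30, 4)
print(names)                        # ['setosa' 'versicolor' 'virginica']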

build model

import torch
import torch.nn as nn
import torch.nn.functional as F  # needed for F.relu in forward()

class Model(nn.Module):
    def __init__(self, input_dim):
        super(Model, self).__init__()
        self.layer1 = nn.Linear(input_dim, 50)
        self.layer2 = nn.Linear(50, 50)
        self.layer3 = nn.Linear(50, 3)

    def forward(self, x):
        x = F.relu(self.layer1(x))
        x = F.relu(self.layer2(x))
        # x = F.softmax(self.layer3(x), dim=1)  # do NOT do this with CrossEntropyLoss
        x = self.layer3(x)  # return raw logits; the loss applies log-softmax internally
        return x

model     = Model(X_train.shape[1])
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
loss_fn   = nn.CrossEntropyLoss()
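Before training, it is worth confirming that the model emits raw logits of shape (batch, 3), which is exactly what nn.CrossEntropyLoss expects. A throwaway forward pass with random data:

dummy = torch.randn(2, X_train.shape[1])
with torch.no_grad():
    print(model(dummy).shape)  # torch.Size([2, 3]) -- raw logits, no softmax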

train model

import tqdm
import numpy as np

EPOCHS  = 100
# torch.autograd.Variable is deprecated; plain tensors carry gradients now
X_train = torch.from_numpy(X_train).float()
y_train = torch.from_numpy(y_train).long()
X_test  = torch.from_numpy(X_test).float()
y_test  = torch.from_numpy(y_test).long()

loss_list     = np.zeros((EPOCHS,))
accuracy_list = np.zeros((EPOCHS,))

for epoch in tqdm.trange(EPOCHS):
    y_pred = model(X_train)
    loss = loss_fn(y_pred, y_train)
    loss_list[epoch] = loss.item()
    
    # Zero gradients
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    
    with torch.no_grad():
        y_pred = model(X_test)
        correct = (torch.argmax(y_pred, dim=1) == y_test).float()
        accuracy_list[epoch] = correct.mean().item()
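After the loop finishes, the last entries of the two arrays give the final numbers directly:

print("final training loss:       %.4f" % loss_list[-1])
print("final validation accuracy: %.4f" % accuracy_list[-1])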

plot validation accuracy and training loss

import matplotlib.pyplot as plt
fig, (ax1, ax2) = plt.subplots(2, figsize=(12, 6), sharex=True)

ax1.plot(accuracy_list)
ax1.set_ylabel("validation accuracy")
ax2.plot(loss_list)
ax2.set_ylabel("training loss")
ax2.set_xlabel("epochs");
plt.show()
plt.show()

plot ROC curve

from sklearn.metrics import roc_curve, auc
from sklearn.preprocessing import OneHotEncoder

plt.figure(figsize=(10, 10))
plt.plot([0, 1], [0, 1], 'k--')

# One hot encoding
enc = OneHotEncoder()
Y_onehot = enc.fit_transform(y_test[:, np.newaxis]).toarray()

with torch.no_grad():
    y_pred = model(X_test).numpy()
    fpr, tpr, threshold = roc_curve(Y_onehot.ravel(), y_pred.ravel())
    
plt.plot(fpr, tpr, label='AUC = {:.3f}'.format(auc(fpr, tpr)))
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('ROC curve')
plt.legend();
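Note that ravel() flattens the one-hot labels and scores across all 3 classes, so the curve above is a micro-averaged ROC. Per-class AUCs can be computed from the same one-hot labels and raw model outputs; a minimal sketch:

from sklearn.metrics import roc_auc_score

# one AUC per class (average=None disables averaging)
print(roc_auc_score(Y_onehot, y_pred, average=None))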