PyTorch Advanced Training Techniques

Custom Loss Functions

PyTorch provides many commonly used loss functions in the torch.nn module, such as MSELoss, L1Loss, and BCELoss.

But as deep learning has evolved, more and more losses have appeared that are not provided officially, such as DiceLoss, HuberLoss, and SobolevLoss.

These loss functions target rather specialized models, and PyTorch cannot add all of them to the library, so we have to implement them ourselves as custom loss functions. Moreover, in research we often propose entirely new loss functions to improve model performance; in that case there is neither a built-in PyTorch loss nor a blog post to consult, which makes being able to implement a loss function ourselves all the more important.

After studying this section, you will:

  • know how to define your own loss functions
import torch
import torch.nn as nn
import torch.nn.functional as F

Defining a loss as a function

A loss function is, after all, just a function, so we can define our own directly as a plain function, as shown below:

def my_loss(output, target):
    loss = torch.mean((output-target)**2)
    return loss
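
As a quick check (a minimal sketch with made-up tensors), such a function is used exactly like a built-in loss; because it only uses PyTorch tensor operations, autograd works out of the box:

output = torch.randn(4, 10, requires_grad=True)   # stand-in model output
target = torch.randn(4, 10)                       # stand-in target
loss = my_loss(output, target)
loss.backward()                                   # gradients flow back through the torch ops
print(loss.item())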

Defining a loss as a class

Although defining a loss as a function is simple, defining it as a class is more common in practice.

When defining a loss function as a class, a look at the inheritance of the built-in losses shows that some inherit from _loss and some from _WeightedLoss, where _WeightedLoss inherits from _loss and _loss inherits from nn.Module. We can therefore treat a loss function as a layer of the network; accordingly, our own loss class just needs to inherit from nn.Module. In the example below we use DiceLoss to illustrate this.

Dice Loss is a loss function commonly used in segmentation, defined as:

DSC = \frac{2|X \cap Y|}{|X| + |Y|}

Implementation:

class DiceLoss(nn.Module):
    def __init__(self, weight=None, size_average=True):
        super(DiceLoss, self).__init__()
        
    def forward(self, inputs, targets, smooth=1):
        inputs = torch.sigmoid(inputs)       # torch.sigmoid instead of the deprecated F.sigmoid
        inputs = inputs.view(-1)             # flatten prediction and target
        targets = targets.view(-1)
        intersection = (inputs*targets).sum()
        dice = (2.*intersection+smooth)/(inputs.sum() + targets.sum() + smooth)
        return 1-dice
# how to use it
criterion = DiceLoss()
inputs = torch.randn(3, 1)
targets = torch.randn(3, 1)
loss = criterion(inputs, targets)
print(loss)
tensor(0.4260)

Apart from Dice Loss, other commonly used loss functions include:

  • BCE-Dice Loss
  • Jaccard/Intersection over Union (IoU) Loss
  • Focal Loss
class DiceBCELoss(nn.Module):
    def __init__(self, weight=None, size_average=True):
        super(DiceBCELoss, self).__init__()
    
    def forward(self, inputs, targets, smooth=1):
        inputs = torch.sigmoid(inputs)
        inputs = inputs.view(-1)
        targets = targets.view(-1)
        intersection = (inputs*targets).sum()
        dice_loss = 1-(2.*intersection+smooth)/(inputs.sum() + targets.sum() + smooth)
        BCE = F.binary_cross_entropy(inputs, targets, reduction='mean')
        Dice_BCE = BCE + dice_loss
        return Dice_BCE
class IoULoss(nn.Module):
    def __init__(self, weight=None, size_average=True):
        super(IoULoss, self).__init__()
        
    def forward(self, inputs, targets, smooth=1):
        inputs = torch.sigmoid(inputs)
        inputs = inputs.view(-1)
        targets = targets.view(-1)
        intersection = (inputs * targets).sum()
        total = (inputs + targets).sum()
        
        union = total - intersection
        
        IoU = (intersection + smooth)/(union + smooth)
        return 1 - IoU
ALPHA = 0.8
GAMMA = 2


class FocalLoss(nn.Module):
    def __init__(self, weight=None, size_average=True):
        super(FocalLoss, self).__init__()
    
    def forward(self, inputs, targets, alpha=ALPHA, gamma=GAMMA, smooth=1):
        inputs = torch.sigmoid(inputs)
        inputs = inputs.view(-1)
        targets = targets.view(-1)
        BCE = F.binary_cross_entropy(inputs, targets, reduction='mean')
        BCE_EXP = torch.exp(-BCE)
        focal_loss = alpha * (1-BCE_EXP)**gamma *BCE
        return focal_loss

Note

When a custom loss function involves mathematical operations, it is best to use PyTorch's tensor operations throughout. That way we do not have to implement the backward pass ourselves and the loss can run directly on CUDA; using numpy or scipy operations instead is more cumbersome, and you are welcome to explore that on your own. For why PyTorch defines loss functions as classes, see the PyTorch discussion forum (link 6).
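
To see why this matters, here is a deliberately simplified (and not recommended) variant that routes the computation through numpy: the tensor has to be detached first, which cuts it off from autograd, so backward() can no longer reach the model parameters:

import numpy as np

def numpy_mse(output, target):
    # .detach() is required before .numpy(), and it severs the autograd graph
    diff = output.detach().cpu().numpy() - target.cpu().numpy()
    return torch.tensor((diff ** 2).mean())   # the result has no grad_fn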



Dynamically Adjusting the Learning Rate

Choosing a learning rate has long been a troublesome problem in deep learning:

  • If the learning rate is too small, convergence slows down dramatically and training time increases
  • If the learning rate is too large, the parameters may oscillate back and forth around the optimum

But even after we have picked a suitable learning rate, after many epochs of training the accuracy may start to oscillate or the loss may stop decreasing, which indicates that the current learning rate no longer meets the needs of tuning the model.

At that point an appropriate learning-rate decay strategy can improve the situation and raise accuracy.

In PyTorch such a policy is called a scheduler, and schedulers are the subject of this section.

Using the official schedulers

Getting to know the official API

The learning rate is one of the most important hyperparameters when training a neural network.

As one of today's most popular deep learning frameworks, PyTorch already packages a number of dynamic learning-rate adjustment methods in torch.optim.lr_scheduler, such as the schedulers listed below:

lr_scheduler.LambdaLR
torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda, last_epoch=-1, verbose=False)

Sets a separate learning-rate adjustment policy for each parameter group. The rule is:

lr = base_lr * lmbda(self.last_epoch)

This is especially useful for fine-tuning, where different layers can be given not only different learning rates but also different adjustment policies (a per-group sketch is given after the example output below).


import torch
import torch.optim as optim
from torchvision import models
from matplotlib import pyplot as plt

model = models.resnet152(pretrained=True)

lambda1 = lambda epoch: epoch // 30
lambda2 = lambda epoch: 0.95 ** epoch

optimizer = optim.Adam(params=model.parameters(), lr=0.05)
scheduler = optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda2)

x = [i for i in range(30)]
y = []

for epoch in range(30):
    lr = scheduler.get_lr()
    y.append(lr)
    print(epoch, scheduler.get_lr()[0])
    scheduler.step()


plt.plot(x, y)
plt.show()
0 0.05
1 0.0475
2 0.045125
3 0.04286875
4 0.0407253125
5 0.038689046874999994
6 0.03675459453124999
7 0.03491686480468749
8 0.03317102156445311
9 0.03151247048623045
10 0.029936846961918936
11 0.028440004613822983
12 0.027018004383131834
13 0.025667104163975243
14 0.02438374895577648
15 0.023164561507987652
16 0.02200633343258827
17 0.020906016760958854
18 0.019860715922910912
19 0.018867680126765363
20 0.017924296120427095
21 0.01702808131440574
22 0.01617667724868545
23 0.01536784338625118
24 0.01459945121693862
25 0.013869478656091689
26 0.013176004723287102
27 0.012517204487122747
28 0.011891344262766609
29 0.011296777049628278
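
The example above applies a single lambda to every parameter group. To use the per-group flexibility mentioned earlier, pass one lambda per group; a minimal sketch reusing lambda1 and lambda2 from the code above (and assuming the classifier head is model.fc):

optimizer = optim.Adam([
    {'params': model.fc.parameters()},                                # new head
    {'params': [p for n, p in model.named_parameters()
                if not n.startswith('fc')]},                          # backbone
], lr=0.05)
# lr_lambda takes a list: one lambda per parameter group, in order
scheduler = optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=[lambda1, lambda2])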


lr_scheduler.MultiplicativeLR
torch.optim.lr_scheduler.MultiplicativeLR(optimizer, lr_lambda, last_epoch=-1, verbose=False)

Multiplies the learning rate of each parameter group by the factor given in the specified function.


import torch
import torch.optim as optim
from torchvision import models
from matplotlib import pyplot as plt

model = models.resnet152(pretrained=True)

lambda1 = lambda epoch: 0.95 ** epoch

optimizer = optim.Adam(params=model.parameters(), lr=0.05)
scheduler = optim.lr_scheduler.MultiplicativeLR(optimizer, lr_lambda=lambda1)

x = [i for i in range(30)]
y = []

for epoch in range(30):
    lr = scheduler.get_lr()
    y.append(lr)
    print(epoch, scheduler.get_lr()[0])
    scheduler.step()


plt.plot(x, y)
plt.show()
0 0.05
1 0.045125
2 0.038689046874999994
3 0.03151247048623046
4 0.02438374895577648
5 0.017924296120427095
6 0.012517204487122745
7 0.008304169199380356
8 0.0052336977361627495
9 0.0031336081634489154
10 0.0017823966125280108
11 0.0009631359897952225
12 0.0004944182354829474
13 0.00024111540266348336
14 0.0001117066515104294
15 4.916507639552899e-05
16 2.055691822357503e-05
17 8.16550226177868e-06
18 3.081281694992385e-06
19 1.1045961106299006e-06
20 3.761830478276453e-07
21 1.2170783502269244e-07
22 3.7407738950168654e-08
23 1.0922649198779242e-08
24 3.0298289986089136e-09
25 7.984208239284657e-10
26 1.998799343977926e-10
27 4.7536822222867136e-11
28 1.07402576459909e-11
29 2.305275625564168e-12


lr_scheduler.StepLR
torch.optim.lr_scheduler.StepLR(optimizer, step_size, gamma=0.1, last_epoch=-1, verbose=False)

Decays the learning rate of each parameter group every step_size steps. Note that this decay can happen simultaneously with other changes to the learning rate made from outside this scheduler. When last_epoch=-1, the initial lr is used.

In other words, the learning rate is adjusted at equal intervals: every step_size steps it is multiplied by gamma. The interval unit is the step, and note that "step" here usually refers to an epoch, not an iteration.

import torch
import torch.optim as optim
from torchvision import models
from matplotlib import pyplot as plt

model = models.resnet152(pretrained=True)

# lr_scheduler.StepLR()
# Assuming optimizer uses lr = 0.05 for all groups
# lr = 0.05     if epoch < 30
# lr = 0.005    if 30 <= epoch < 60
# lr = 0.0005   if 60 <= epoch < 90


optimizer = optim.Adam(params=model.parameters(), lr=0.05)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

x = [i for i in range(100)]
y = []

for epoch in range(100):
    scheduler.step()
    lr = scheduler.get_lr()
    print(epoch, scheduler.get_lr()[0])
    y.append(lr)


plt.plot(x, y)
plt.show()
0 0.05
1 0.05
2 0.05
3 0.05
4 0.05
5 0.05
6 0.05
7 0.05
8 0.05
9 0.05
10 0.05
11 0.05
12 0.05
13 0.05
14 0.05
15 0.05
16 0.05
17 0.05
18 0.05
19 0.05
20 0.05
21 0.05
22 0.05
23 0.05
24 0.05
25 0.05
26 0.05
27 0.05
28 0.05
29 0.0005000000000000001
30 0.005000000000000001
31 0.005000000000000001
32 0.005000000000000001
33 0.005000000000000001
34 0.005000000000000001
35 0.005000000000000001
36 0.005000000000000001
37 0.005000000000000001
38 0.005000000000000001
39 0.005000000000000001
40 0.005000000000000001
41 0.005000000000000001
42 0.005000000000000001
43 0.005000000000000001
44 0.005000000000000001
45 0.005000000000000001
46 0.005000000000000001
47 0.005000000000000001
48 0.005000000000000001
49 0.005000000000000001
50 0.005000000000000001
51 0.005000000000000001
52 0.005000000000000001
53 0.005000000000000001
54 0.005000000000000001
55 0.005000000000000001
56 0.005000000000000001
57 0.005000000000000001
58 0.005000000000000001
59 5.0000000000000016e-05
60 0.0005000000000000001
61 0.0005000000000000001
62 0.0005000000000000001
63 0.0005000000000000001
64 0.0005000000000000001
65 0.0005000000000000001
66 0.0005000000000000001
67 0.0005000000000000001
68 0.0005000000000000001
69 0.0005000000000000001
70 0.0005000000000000001
71 0.0005000000000000001
72 0.0005000000000000001
73 0.0005000000000000001
74 0.0005000000000000001
75 0.0005000000000000001
76 0.0005000000000000001
77 0.0005000000000000001
78 0.0005000000000000001
79 0.0005000000000000001
80 0.0005000000000000001
81 0.0005000000000000001
82 0.0005000000000000001
83 0.0005000000000000001
84 0.0005000000000000001
85 0.0005000000000000001
86 0.0005000000000000001
87 0.0005000000000000001
88 0.0005000000000000001
89 5.000000000000002e-06
90 5.0000000000000016e-05
91 5.0000000000000016e-05
92 5.0000000000000016e-05
93 5.0000000000000016e-05
94 5.0000000000000016e-05
95 5.0000000000000016e-05
96 5.0000000000000016e-05
97 5.0000000000000016e-05
98 5.0000000000000016e-05
99 5.0000000000000016e-05


lr_scheduler.MultiStepLR
torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones, gamma=0.1, last_epoch=-1, verbose=False)

The difference from StepLR is that the epochs at which the adjustment happens are chosen by the user and do not have to form an arithmetic sequence such as [30, 60, 90].

This decay, too, can be combined with changes made to the learning rate from outside the scheduler. When last_epoch=-1, the initial lr is used.

import torch
import torch.optim as optim
from torchvision import models
from matplotlib import pyplot as plt

model = models.resnet152(pretrained=True)

# lr_scheduler.MultiStepLR()
# Assuming optimizer uses lr = 0.05 for all groups
# lr = 0.05     if epoch < 30
# lr = 0.005    if 30 <= epoch < 80
# lr = 0.0005   if epoch >= 80


optimizer = optim.Adam(params=model.parameters(), lr=0.05)
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30, 80], gamma=0.1)

x = [i for i in range(100)]
y = []

for epoch in range(100):
    scheduler.step()
    lr = scheduler.get_lr()
    print(epoch, scheduler.get_lr()[0])
    y.append(lr)


plt.plot(x, y)
plt.show()
0 0.05
1 0.05
2 0.05
3 0.05
4 0.05
5 0.05
6 0.05
7 0.05
8 0.05
9 0.05
10 0.05
11 0.05
12 0.05
13 0.05
14 0.05
15 0.05
16 0.05
17 0.05
18 0.05
19 0.05
20 0.05
21 0.05
22 0.05
23 0.05
24 0.05
25 0.05
26 0.05
27 0.05
28 0.05
29 0.0005000000000000001
30 0.005000000000000001
31 0.005000000000000001
32 0.005000000000000001
33 0.005000000000000001
34 0.005000000000000001
35 0.005000000000000001
36 0.005000000000000001
37 0.005000000000000001
38 0.005000000000000001
39 0.005000000000000001
40 0.005000000000000001
41 0.005000000000000001
42 0.005000000000000001
43 0.005000000000000001
44 0.005000000000000001
45 0.005000000000000001
46 0.005000000000000001
47 0.005000000000000001
48 0.005000000000000001
49 0.005000000000000001
50 0.005000000000000001
51 0.005000000000000001
52 0.005000000000000001
53 0.005000000000000001
54 0.005000000000000001
55 0.005000000000000001
56 0.005000000000000001
57 0.005000000000000001
58 0.005000000000000001
59 0.005000000000000001
60 0.005000000000000001
61 0.005000000000000001
62 0.005000000000000001
63 0.005000000000000001
64 0.005000000000000001
65 0.005000000000000001
66 0.005000000000000001
67 0.005000000000000001
68 0.005000000000000001
69 0.005000000000000001
70 0.005000000000000001
71 0.005000000000000001
72 0.005000000000000001
73 0.005000000000000001
74 0.005000000000000001
75 0.005000000000000001
76 0.005000000000000001
77 0.005000000000000001
78 0.005000000000000001
79 5.0000000000000016e-05
80 0.0005000000000000001
81 0.0005000000000000001
82 0.0005000000000000001
83 0.0005000000000000001
84 0.0005000000000000001
85 0.0005000000000000001
86 0.0005000000000000001
87 0.0005000000000000001
88 0.0005000000000000001
89 0.0005000000000000001
90 0.0005000000000000001
91 0.0005000000000000001
92 0.0005000000000000001
93 0.0005000000000000001
94 0.0005000000000000001
95 0.0005000000000000001
96 0.0005000000000000001
97 0.0005000000000000001
98 0.0005000000000000001
99 0.0005000000000000001


lr_scheduler.ExponentialLR
torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma, last_epoch=-1, verbose=False)

Decays the learning rate of each parameter group by gamma every epoch (an exponential schedule).

import torch
import torch.optim as optim
from torchvision import models
from matplotlib import pyplot as plt

model = models.resnet152(pretrained=True)


optimizer = optim.Adam(params=model.parameters(), lr=0.2)
scheduler = optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.2)

x = [i for i in range(10)]
y = []

for epoch in range(10):
    scheduler.step()
    lr = scheduler.get_lr()
    print(epoch, scheduler.get_lr()[0])
    y.append(lr)


plt.plot(x, y)
plt.show()
0 0.008000000000000002
1 0.0016000000000000005
2 0.00032000000000000013
3 6.400000000000002e-05
4 1.2800000000000006e-05
5 2.5600000000000013e-06
6 5.120000000000002e-07
7 1.0240000000000006e-07
8 2.0480000000000012e-08
9 4.096000000000002e-09


lr_scheduler.CosineAnnealingLR
torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max, eta_min=0, last_epoch=-1, verbose=False)
  • optimizer (Optimizer)
  • T_max (int) – maximum number of iterations
  • eta_min (float) – minimum learning rate
  • last_epoch (int) – The index of last epoch. Default: -1.
  • verbose (bool) – If True, prints a message to stdout for each update. Default: False.

Uses a cosine function as the period and resets the learning rate to the maximum at the start of each period. The initial learning rate is the maximum learning rate, the period is 2*T_max, and within one period the learning rate first decreases and then increases.


When last_epoch=-1, the initial lr is used. Note that because the schedule is defined recursively, the learning rate can also be modified outside this scheduler by other operators. If the learning rate is set solely by this scheduler, it is lowered along a cosine curve; the method comes from SGDR, and the learning rate follows:

\eta_t = \eta_{min} + \frac{1}{2}(\eta_{max} - \eta_{min})\left(1 + \cos\left(\frac{T_{cur}}{T_{max}}\pi\right)\right)

import torch
import torch.optim as optim
from torchvision import models
from matplotlib import pyplot as plt

model = models.resnet152(pretrained=True)


optimizer = optim.Adam(params=model.parameters(), lr=1)
scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=5)

x = [i for i in range(40)]
y = []

for epoch in range(40):
    scheduler.step()
    lr = scheduler.get_lr()
    print(epoch, scheduler.get_lr()[0])
    y.append(lr)


plt.plot(x, y)
plt.show()
0 0.8181356214843422
1 0.47360679774997894
2 0.1823725421878943
3 0.026393202250021047
4 0.0
5 0.19098300562505255
6 1.250000000000001
7 1.239918693812443
8 1.2500000000000009
9 1.1055728090000845
10 0.8181356214843432
11 0.4736067977499792
12 0.18237254218789453
13 0.02639320225002108
14 0.0
15 0.19098300562505255
16 1.2500000000000013
17 1.2399186938124438
18 1.2500000000000013
19 1.105572809000086
20 0.8181356214843435
21 0.4736067977499812
22 0.1823725421878945
23 0.026393202250021133
24 0.0
25 0.19098300562505255
26 1.250000000000002
27 1.2399186938124442
28 1.2500000000000022
29 1.1055728090000865
30 0.8181356214843439
31 0.47360679774998016
32 0.18237254218789495
33 0.026393202250021196
34 0.0
35 0.19098300562505255
36 1.2500000000000027
37 1.239918693812445
38 1.250000000000003
39 1.105572809000087


lr_scheduler.ReduceLROnPlateau
torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=10, threshold=0.0001, threshold_mode='rel', cooldown=0, min_lr=0, eps=1e-08, verbose=False)

Decays the lr dynamically: the learning rate is reduced when a monitored metric stops improving.

Once learning stagnates, models often benefit from reducing the learning rate by a factor of 2-10. The scheduler reads a metric quantity and, if no improvement is seen for a "patience" number of epochs, reduces the learning rate.

import torchvision.models as models
import torch.nn as nn
model = models.resnet152(pretrained=True)
fc_features = model.fc.in_features
model.fc = nn.Linear(fc_features, 2)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(params = model.parameters(), lr=10)

scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, 'min')

inputs = torch.randn(4, 3, 224,224)
labels = torch.LongTensor([1, 1, 0, 1])
plt.figure()
x = list(range(60))
y = []

for epoch in range(60):
    optimizer.zero_grad()
    outputs = model(inputs)
    #print(outputs)
    loss = criterion(outputs, labels)
    print(loss)
    
    loss.backward()
    scheduler.step(loss) 
    optimizer.step()
    
    lr = optimizer.param_groups[0]['lr']
    print(epoch, lr)
    
    y.append(lr)

plt.plot(x,y)
tensor(0.8945, grad_fn=<NllLossBackward>)
0 10
tensor(647.1326, grad_fn=<NllLossBackward>)
1 10
tensor(15050.2139, grad_fn=<NllLossBackward>)
2 10
tensor(1046223.8125, grad_fn=<NllLossBackward>)
3 10
tensor(198599.5156, grad_fn=<NllLossBackward>)
4 10
tensor(2312.0481, grad_fn=<NllLossBackward>)
5 10
tensor(17634.8164, grad_fn=<NllLossBackward>)
6 10
tensor(8.2998, grad_fn=<NllLossBackward>)
7 10
tensor(7.0498, grad_fn=<NllLossBackward>)
8 10
tensor(5.7998, grad_fn=<NllLossBackward>)
9 10
tensor(4.5498, grad_fn=<NllLossBackward>)
10 10
tensor(3.2998, grad_fn=<NllLossBackward>)
11 1.0
tensor(3.1748, grad_fn=<NllLossBackward>)
12 1.0
tensor(3.0498, grad_fn=<NllLossBackward>)
13 1.0
tensor(2.9248, grad_fn=<NllLossBackward>)
14 1.0
tensor(2.7998, grad_fn=<NllLossBackward>)
15 1.0
tensor(2.6748, grad_fn=<NllLossBackward>)
16 1.0
tensor(2.5498, grad_fn=<NllLossBackward>)
17 1.0
tensor(2.4249, grad_fn=<NllLossBackward>)
18 1.0
tensor(2.2999, grad_fn=<NllLossBackward>)
19 1.0
tensor(2.1750, grad_fn=<NllLossBackward>)
20 1.0
tensor(2.0502, grad_fn=<NllLossBackward>)
21 1.0
tensor(1.9254, grad_fn=<NllLossBackward>)
22 0.1
tensor(1.9129, grad_fn=<NllLossBackward>)
23 0.1
tensor(1.9005, grad_fn=<NllLossBackward>)
24 0.1
tensor(1.8880, grad_fn=<NllLossBackward>)
25 0.1
tensor(1.8755, grad_fn=<NllLossBackward>)
26 0.1
tensor(1.8631, grad_fn=<NllLossBackward>)
27 0.1
tensor(1.8506, grad_fn=<NllLossBackward>)
28 0.1
tensor(1.8382, grad_fn=<NllLossBackward>)
29 0.1
tensor(1.8257, grad_fn=<NllLossBackward>)
30 0.1
tensor(1.8133, grad_fn=<NllLossBackward>)
31 0.1
tensor(1.8008, grad_fn=<NllLossBackward>)
32 0.1
tensor(1.7884, grad_fn=<NllLossBackward>)
33 0.010000000000000002
tensor(1.7871, grad_fn=<NllLossBackward>)
34 0.010000000000000002
tensor(1.7859, grad_fn=<NllLossBackward>)
35 0.010000000000000002
tensor(1.7847, grad_fn=<NllLossBackward>)
36 0.010000000000000002
tensor(1.7834, grad_fn=<NllLossBackward>)
37 0.010000000000000002
tensor(1.7822, grad_fn=<NllLossBackward>)
38 0.010000000000000002
tensor(1.7809, grad_fn=<NllLossBackward>)
39 0.010000000000000002
tensor(1.7797, grad_fn=<NllLossBackward>)
40 0.010000000000000002
tensor(1.7784, grad_fn=<NllLossBackward>)
41 0.010000000000000002
tensor(1.7772, grad_fn=<NllLossBackward>)
42 0.010000000000000002
tensor(1.7759, grad_fn=<NllLossBackward>)
43 0.010000000000000002
tensor(1.7747, grad_fn=<NllLossBackward>)
44 0.0010000000000000002
tensor(1.7746, grad_fn=<NllLossBackward>)
45 0.0010000000000000002
tensor(1.7745, grad_fn=<NllLossBackward>)
46 0.0010000000000000002
tensor(1.7743, grad_fn=<NllLossBackward>)
47 0.0010000000000000002
tensor(1.7742, grad_fn=<NllLossBackward>)
48 0.0010000000000000002
tensor(1.7741, grad_fn=<NllLossBackward>)
49 0.0010000000000000002
tensor(1.7740, grad_fn=<NllLossBackward>)
50 0.0010000000000000002
tensor(1.7738, grad_fn=<NllLossBackward>)
51 0.0010000000000000002
tensor(1.7737, grad_fn=<NllLossBackward>)
52 0.0010000000000000002
tensor(1.7736, grad_fn=<NllLossBackward>)
53 0.0010000000000000002
tensor(1.7735, grad_fn=<NllLossBackward>)
54 0.0010000000000000002
tensor(1.7733, grad_fn=<NllLossBackward>)
55 0.00010000000000000003
tensor(1.7733, grad_fn=<NllLossBackward>)
56 0.00010000000000000003
tensor(1.7733, grad_fn=<NllLossBackward>)
57 0.00010000000000000003
tensor(1.7733, grad_fn=<NllLossBackward>)
58 0.00010000000000000003
tensor(1.7733, grad_fn=<NllLossBackward>)
59 0.00010000000000000003


lr_scheduler.CyclicLR
torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr, max_lr, step_size_up=2000, step_size_down=None, mode='triangular', gamma=1.0, scale_fn=None, scale_mode='cycle', cycle_momentum=True, base_momentum=0.8, max_momentum=0.9, last_epoch=-1, verbose=False)

Sets the learning rate of each parameter group according to the cyclical learning rate (CLR) policy. The policy cycles the learning rate between two boundaries at a constant frequency, as described in detail in the paper Cyclical Learning Rates for Training Neural Networks. The distance between the two boundaries can be scaled per iteration or per cycle.

The cyclical learning rate policy changes the learning rate after every batch of data; the class's step() should be called after each batch has been used for training.

The class has three built-in policies:

  • triangular: a basic triangular cycle without amplitude scaling
  • triangular2: a basic triangular cycle that halves the initial amplitude each cycle
  • exp_range: a cycle whose initial amplitude is scaled by gamma**(cycle iterations) at each cycle iteration


import torch
import torch.optim as optim
from torchvision import models
from matplotlib import pyplot as plt

model = models.resnet152(pretrained=True)


optimizer = optim.SGD(params=model.parameters(), lr=10, momentum=0.9)
scheduler = optim.lr_scheduler.CyclicLR(optimizer, base_lr=0.1, max_lr=1e-8, step_size_up=10, step_size_down=None, mode='triangular', gamma=1.0, scale_fn=None, scale_mode='cycle', cycle_momentum=True, base_momentum=0.8, max_momentum=0.9)

x = [i for i in range(100)]
y = []

for epoch in range(100):
    scheduler.step()
    lr = scheduler.get_lr()
    print(epoch, scheduler.get_lr()[0])
    y.append(lr)


plt.plot(x, y)
plt.show()
0 0.090000001
1 0.08000000199999999
2 0.07000000300000002
3 0.06000000400000001
4 0.050000005
5 0.04000000599999999
6 0.03000000699999998
7 0.020000008000000014
8 0.010000009000000004
9 9.999999994736442e-09
10 0.010000009000000004
11 0.020000008000000014
12 0.03000000699999998
13 0.04000000599999999
14 0.050000005
15 0.06000000400000001
16 0.07000000300000002
17 0.08000000199999999
18 0.090000001
19 0.1
20 0.09000000100000004
21 0.08000000199999999
22 0.07000000300000002
23 0.06000000399999997
24 0.050000005
25 0.04000000600000003
26 0.03000000699999998
27 0.020000008000000014
28 0.010000008999999963
29 9.999999994736442e-09
30 0.010000008999999963
31 0.020000008000000014
32 0.03000000699999998
33 0.04000000600000003
34 0.050000005
35 0.06000000399999997
36 0.07000000300000002
37 0.08000000199999999
38 0.09000000100000004
39 0.1
40 0.09000000100000004
41 0.08000000199999999
42 0.07000000300000002
43 0.06000000399999997
44 0.050000005
45 0.04000000600000003
46 0.03000000699999998
47 0.020000008000000014
48 0.010000008999999963
49 9.999999994736442e-09
50 0.010000008999999963
51 0.020000008000000014
52 0.03000000699999998
53 0.04000000600000003
54 0.050000005
55 0.06000000399999997
56 0.07000000300000002
57 0.08000000199999999
58 0.09000000100000004
59 0.1
60 0.09000000100000004
61 0.08000000200000007
62 0.07000000299999994
63 0.06000000399999997
64 0.050000005
65 0.04000000600000003
66 0.030000007000000065
67 0.02000000799999993
68 0.010000008999999963
69 9.999999994736442e-09
70 0.010000008999999963
71 0.02000000799999993
72 0.030000007000000065
73 0.04000000600000003
74 0.050000005
75 0.06000000399999997
76 0.07000000299999994
77 0.08000000200000007
78 0.09000000100000004
79 0.1
80 0.09000000100000004
81 0.08000000200000007
82 0.07000000299999994
83 0.06000000399999997
84 0.050000005
85 0.04000000600000003
86 0.030000007000000065
87 0.02000000799999993
88 0.010000008999999963
89 9.999999994736442e-09
90 0.010000008999999963
91 0.02000000799999993
92 0.030000007000000065
93 0.04000000600000003
94 0.050000005
95 0.06000000399999997
96 0.07000000299999994
97 0.08000000200000007
98 0.09000000100000004
99 0.1


lr_scheduler.OneCycleLR
torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr, total_steps=None, epochs=None, steps_per_epoch=None, pct_start=0.3, anneal_strategy='cos', cycle_momentum=True, base_momentum=0.85, max_momentum=0.95, div_factor=25.0, final_div_factor=10000.0, three_phase=False, last_epoch=-1, verbose=False)

Sets the learning rate of each parameter group according to the 1cycle learning rate policy.

The 1cycle policy anneals the learning rate from an initial learning rate up to some maximum learning rate and then down to a minimum learning rate far below the initial one.

The policy was first described in the paper Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates.

The 1cycle policy changes the learning rate after every batch; step() should be called after each batch has been used for training.

import torch
import torch.optim as optim
from torchvision import models
from matplotlib import pyplot as plt

model = models.resnet152(pretrained=True)


optimizer = optim.SGD(params=model.parameters(), lr=0.1, momentum=0.9)
scheduler = optim.lr_scheduler.OneCycleLR(optimizer, max_lr=0.9, steps_per_epoch=100, epochs=10)

x = [i for i in range(1000)]
y = []

batch = 10

for epoch in range(100):
    for b in range(batch):
        scheduler.step()
        lr = scheduler.get_lr()
        print(epoch, scheduler.get_lr()[0])
        y.append(lr)


plt.plot(x, y)
plt.show()
0 0.036023845537951016
0 0.03609537951935193
0 0.03621459404713634
0 0.03638147596049668
0 0.03659600683633524
0 0.03685816299129985
0 0.03716791548439702
0 0.037525230120187936
0 0.037930067452562666
0 0.03838238278909567
1 0.03888212619597864
1 0.03942924250353408
1 0.04002367131230522
1 0.040665346999723684
1 0.041354198727354796
1 0.04209015044871689
1 0.04287312091767681
1 0.043703023697420074
1 0.04457976716999157
1 0.04550325454641113
2 0.046473383877358
2 0.047490048064425694
2 0.04855313487194557
2 0.04966252693937678
2 0.050818101794262494
2 0.05201973186575071
2 0.053267284498677236
2 0.05456062196821032
2 0.05589960149505491
2 0.05728407526121526
3 0.0587138904263127
3 0.060188889144459634
3 0.061708908581683986
3 0.06327378093390634
3 0.0648833334454646
3 0.06653738842818469
3 0.06823576328099812
3 0.0699782705100993
3 0.07176471774964432
3 0.0735949077829876
4 0.07546863856445374
4 0.07738570324164251
4 0.07934589017826421
4 0.08134898297750459
4 0.08339476050591232
4 0.08548299691781336
4 0.08761346168024164
4 0.08978591959839011
4 0.09200013084157488
4 0.09425585096971134
5 0.09655283096029987
5 0.09889081723591697
5 0.10126955169220841
5 0.10368877172638369
5 0.10614821026620613
5 0.10864759579947636
5 0.1111866524040066
5 0.11376509977808091
5 0.11638265327139974
5 0.119039023916504
6 0.12173391846067594
6 0.12446703939831305
6 0.12723808500377187
6 0.13004674936467642
6 0.1328927224156906
6 0.13577568997274791
6 0.1386953337677358
6 0.1416513314836314
6 0.14464335679008433
6 0.1476710793794418
7 0.15073416500321335
7 0.15383227550897094
7 0.15696506887767925
7 0.16013219926145295
7 0.16333331702173792
7 0.16656806876790875
7 0.1698360973962827
7 0.1731370421295414
7 0.1764705385565607
7 0.17983621867263866
8 0.18323371092012297
8 0.18666264022942847
8 0.19012262806044455
8 0.19361329244432302
8 0.19713424802564739
8 0.20068510610497325
8 0.20426547468173983
8 0.2078749584975459
8 0.21151315907978308
8 0.21517967478562738
9 0.21887410084637737
9 0.22259602941214074
9 0.22634504959685764
9 0.23012074752366207
9 0.23392270637057133
9 0.23775050641650253
9 0.24160372508760697
9 0.24548193700392174
9 0.24938471402632878
9 0.2533116253038209
10 0.257262237321065
10 0.26123611394626134
10 0.2652328164792901
10 0.2692519037001423
10 0.2732929319176288
10 0.277355455018362
10 0.2814390245160049
10 0.2855431896007825
10 0.2896674971892489
10 0.29381149197430645
11 0.29797471647546925
11 0.3021567110893679
11 0.3063570141404871
11 0.3105751619321332
11 0.31481068879762464
11 0.31906312715169927
11 0.3233320075421341
11 0.32761685870157087
11 0.3319172075995426
11 0.3362325794946932
12 0.3405624979871882
12 0.3449064850713065
12 0.34926406118821063
12 0.35363474527888816
12 0.3580180548372591
12 0.3624135059634416
12 0.3668206134171734
12 0.3712388906713804
12 0.37566784996588676
12 0.38010700236126216
13 0.3845558577927989
13 0.3890139251246124
13 0.3934807122038614
13 0.397955725915079
13 0.40243847223461104
13 0.40692845628515445
13 0.4114251823903895
13 0.4159281541297005
13 0.42043687439297844
13 0.4249508454355005
14 0.4294695689328786
14 0.43399254603607285
14 0.43851927742646213
14 0.4430492633709669
14 0.447582003777218
14 0.45211699824876445
14 0.4566537461403156
14 0.46119174661301054
14 0.4657304986897078
14 0.4702695013102922
15 0.47480825338698945
15 0.4793462538596843
15 0.4838830017512355
15 0.48841799622278204
15 0.49295073662903305
15 0.4974807225735378
15 0.5020074539639272
15 0.5065304310671215
15 0.5110491545644995
15 0.5155631256070216
16 0.5200718458702995
16 0.5245748176096103
16 0.5290715437148454
16 0.5335615277653889
16 0.538044274084921
16 0.5425192877961386
16 0.5469860748753874
16 0.5514441422072011
16 0.5558929976387377
16 0.5603321500341133
17 0.5647611093286196
17 0.5691793865828265
17 0.5735864940365583
17 0.5779819451627408
17 0.5823652547211118
17 0.5867359388117895
17 0.5910935149286937
17 0.595437502012812
17 0.599767420505307
17 0.6040827924004575
18 0.6083831412984291
18 0.6126679924578661
18 0.6169368728483009
18 0.6211893112023754
18 0.6254248380678669
18 0.629642985859513
18 0.633843288910632
18 0.6380252835245307
18 0.6421885080256935
18 0.6463325028107512
19 0.6504568103992174
19 0.6545609754839952
19 0.658644544981638
19 0.6627070680823712
19 0.6667480962998578
19 0.6707671835207099
19 0.6747638860537386
19 0.678737762678935
19 0.6826883746961792
19 0.6866152859736713
20 0.6905180629960783
20 0.6943962749123929
20 0.6982494935834974
20 0.7020772936294286
20 0.705879252476338
20 0.7096549504031424
20 0.7134039705878593
20 0.7171258991536226
20 0.7208203252143727
20 0.7244868409202169
21 0.728125041502454
21 0.7317345253182601
21 0.7353148938950269
21 0.7388657519743526
21 0.7423867075556768
21 0.7458773719395554
21 0.7493373597705715
21 0.7527662890798771
21 0.7561637813273613
21 0.7595294614434394
22 0.7628629578704585
22 0.7661639026037174
22 0.7694319312320912
22 0.7726666829782621
22 0.7758678007385471
22 0.7790349311223208
22 0.7821677244910291
22 0.7852658349967867
22 0.7883289206205583
22 0.7913566432099157
23 0.7943486685163687
23 0.7973046662322643
23 0.8002243100272521
23 0.8031072775843094
23 0.8059532506353236
23 0.8087619149962282
23 0.8115329606016868
23 0.8142660815393241
23 0.816960976083496
23 0.8196173467286003
24 0.8222349002219191
24 0.8248133475959935
24 0.8273524042005237
24 0.8298517897337939
24 0.8323112282736164
24 0.8347304483077916
24 0.8371091827640831
24 0.8394471690397001
24 0.8417441490302887
24 0.8439998691584252
25 0.84621408040161
25 0.8483865383197584
25 0.8505170030821867
25 0.8526052394940876
25 0.8546510170224955
25 0.8566541098217357
25 0.8586142967583575
25 0.8605313614355463
25 0.8624050922170124
25 0.8642352822503556
26 0.8660217294899007
26 0.8677642367190018
26 0.8694626115718153
26 0.8711166665545355
26 0.8727262190660936
26 0.8742910914183161
26 0.8758111108555404
26 0.8772861095736872
26 0.8787159247387848
26 0.880100398504945
27 0.8814393780317897
27 0.8827327155013227
27 0.8839802681342492
27 0.8851818982057376
27 0.8863374730606234
27 0.8874468651280545
27 0.8885099519355744
27 0.889526616122642
27 0.8904967454535889
27 0.8914202328300085
28 0.89229697630258
28 0.8931268790823231
28 0.8939098495512833
28 0.8946458012726453
28 0.8953346530002764
28 0.895976328687695
28 0.896570757496466
28 0.8971178738040214
28 0.8976176172109045
28 0.8980699325474374
29 0.8984747698798121
29 0.898832084515603
29 0.8991418370087002
29 0.8994039931636648
29 0.8996185240395035
29 0.8997854059528637
29 0.8999046204806482
29 0.899976154462049
29 0.9
29 0.8999954680645302
30 0.8999818723494029
30 0.8999592131284634
30 0.899927490858114
30 0.8998867061773052
30 0.8998368599075225
30 0.8997779530527705
30 0.899709986799552
30 0.8996329625168444
30 0.8995468817560719
30 0.8994517462510746
31 0.8993475579180733
31 0.8992343188556311
31 0.899112031344611
31 0.8989806978481302
31 0.8988403210115099
31 0.8986909036622225
31 0.8985324488098348
31 0.898364959645947
31 0.8981884395441284
31 0.8980028920598497
32 0.8978083209304114
32 0.8976047300748683
32 0.8973921235939505
32 0.8971705057699815
32 0.8969398810667906
32 0.8967002541296244
32 0.8964516297850528
32 0.8961940130408709
32 0.8959274090859995
32 0.8956518232903797
33 0.895367261204865
33 0.8950737285611096
33 0.8947712312714525
33 0.894459775428799
33 0.8941393673064976
33 0.893810013358214
33 0.8934717202178006
33 0.8931244946991632
33 0.8927683437961239
33 0.8924032746822796
34 0.8920292947108587
34 0.8916464114145712
34 0.891254632505459
34 0.8908539658747389
34 0.8904444195926448
34 0.8900260019082642
34 0.8895987212493729
34 0.8891625862222649
34 0.8887176056115786
34 0.8882637883801211
35 0.8878011436686858
35 0.8873296807958699
35 0.8868494092578866
35 0.8863603387283724
35 0.8858624790581939
35 0.8853558402752486
35 0.884840432584263
35 0.884316266366587
35 0.8837833521799847
35 0.8832417007584226
36 0.8826913230118519
36 0.8821322300259902
36 0.8815644330620973
36 0.8809879435567487
36 0.8804027731216053
36 0.8798089335431792
36 0.879206436782597
36 0.8785952949753578
36 0.8779755204310898
36 0.8773471256333017
37 0.8767101232391314
37 0.8760645260790918
37 0.8754103471568108
37 0.8747475996487709
37 0.8740762969040429
37 0.8733964524440174
37 0.8727080799621325
37 0.8720111933235977
37 0.8713058065651146
37 0.8705919338945949
38 0.8698695896908732
38 0.8691387885034181
38 0.8683995450520385
38 0.8676518742265882
38 0.8668957910866647
38 0.8661313108613067
38 0.8653584489486873
38 0.8645772209158032
38 0.8637876424981623
38 0.8629897295994653
39 0.8621834982912868
39 0.8613689648127508
39 0.8605461455702035
39 0.8597150571368832
39 0.8588757162525865
39 0.8580281398233309
39 0.8571723449210144
39 0.8563083487830715
39 0.8554361688121264
39 0.854555822575642
40 0.853667327805566
40 0.8527707023979746
40 0.851865964412711
40 0.850953132073022
40 0.8500322237651914
40 0.8491032580381689
40 0.8481662536031972
40 0.8472212293334345
40 0.8462682042635747
40 0.8453071975894642
41 0.8443382286677145
41 0.8433613170153134
41 0.8423764823092308
41 0.8413837443860233
41 0.8403831232414338
41 0.8393746390299895
41 0.8383583120645957
41 0.837334162816126
41 0.8363022119130112
41 0.8352624801408229
42 0.834214988441855
42 0.8331597579147022
42 0.8320968098138344
42 0.8310261655491693
42 0.8299478466856409
42 0.8288618749427651
42 0.8277682721942017
42 0.8266670604673151
42 0.8255582619427295
42 0.8244418989538824
43 0.8233179939865751
43 0.8221865696785191
43 0.8210476488188814
43 0.8199012543478237
43 0.818747409356042
43 0.8175861370843006
43 0.816417460922964
43 0.8152414044115264
43 0.8140579912381365
43 0.8128672452391216
44 0.8116691903985066
44 0.8104638508475314
44 0.8092512508641642
44 0.8080314148726135
44 0.8068043674428352
44 0.8055701332900385
44 0.8043287372741871
44 0.8030802043994995
44 0.8018245598139451
44 0.8005618288087367
45 0.799292036817823
45 0.7980152094173741
45 0.7967313723252681
45 0.7954405514005723
45 0.7941427726430225
45 0.7928380621924989
45 0.7915264463285002
45 0.7902079514696145
45 0.7888826041729862
45 0.7875504311337815
46 0.7862114591846513
46 0.7848657152951899
46 0.7835132265713922
46 0.7821540202551079
46 0.780788123723492
46 0.7794155644884548
46 0.7780363701961058
46 0.776650568626199
46 0.7752581876915713
46 0.7738592554375822
47 0.7724538000415471
47 0.7710418498121715
47 0.769623433188979
47 0.7681985787417402
47 0.7667673151698963
47 0.7653296713019807
47 0.7638856760950395
47 0.7624353586340471
47 0.7609787481313209
47 0.7595158739259331
48 0.7580467654831192
48 0.7565714523936852
48 0.7550899643734105
48 0.7536023312624509
48 0.7521085830247359
48 0.7506087497473668
48 0.7491028616400094
48 0.7475909490342858
48 0.7460730423831641
48 0.7445491722603441
49 0.7430193693596423
49 0.7414836644943733
49 0.739942088596729
49 0.7383946727171556
49 0.7368414480237289
49 0.735282445801525
49 0.7337176974519919
49 0.7321472344923158
49 0.7305710885547868
49 0.7289892913861614
50 0.7274018748470237
50 0.7258088709111429
50 0.7242103116648297
50 0.72260622930629
50 0.7209966561449765
50 0.7193816246009374
50 0.7177611672041637
50 0.7161353165939345
50 0.7145041055181588
50 0.7128675668327159
51 0.7112257335007944
51 0.7095786385922275
51 0.7079263152828271
51 0.7062687968537157
51 0.7046061166906561
51 0.7029383082833787
51 0.701265405224907
51 0.6995874412108813
51 0.6979044500388796
51 0.6962164656077369
52 0.6945235219168626
52 0.6928256530655555
52 0.6911228932523175
52 0.6894152767741637
52 0.6877028380259327
52 0.6859856114995928
52 0.6842636317835485
52 0.6825369335619427
52 0.6808055516139585
52 0.6790695208131188
53 0.6773288761265838
53 0.6755836526144465
53 0.6738338854290269
53 0.6720796098141636
53 0.6703208611045041
53 0.668557674724793
53 0.6667900861891585
53 0.6650181311003974
53 0.663241845149257
53 0.6614612641137176
54 0.6596764238582709
54 0.657887360333198
54 0.6560941095738446
54 0.6542967076998966
54 0.6524951909146509
54 0.6506895955042877
54 0.6488799578371387
54 0.6470663143629544
54 0.6452487016121707
54 0.6434271561951729
55 0.6416017148015579
55 0.6397724141993953
55 0.637939291234487
55 0.6361023828296254
55 0.6342617259838487
55 0.6324173577716965
55 0.630569315342463
55 0.6287176359194481
55 0.6268623567992087
55 0.6250035153508063
56 0.6231411490150549
56 0.6212752953037673
56 0.6194059917989987
56 0.6175332761522903
56 0.6156571860839103
56 0.6137777593820951
56 0.6118950339022873
56 0.6100090475663734
56 0.608119838361921
56 0.6062274443414114
57 0.6043319036214754
57 0.6024332543821248
57 0.6005315348659825
57 0.5986267833775137
57 0.5967190382822533
57 0.5948083380060334
57 0.5928947210342096
57 0.5909782259108854
57 0.5890588912381365
57 0.5871367556752324
58 0.5852118579378582
58 0.5832842367973353
58 0.5813539310798391
58 0.5794209796656183
58 0.5774854214882116
58 0.5755472955336624
58 0.5736066408397346
58 0.5716634964951262
58 0.5697179016386817
58 0.5677698954586039
59 0.5658195171916642
59 0.5638668061224129
59 0.5619118015823878
59 0.5599545429493212
59 0.5579950696463482
59 0.5560334211412111
59 0.5540696369454656
59 0.5521037566136844
59 0.5501358197426603
59 0.5481658659706097
60 0.5461939349763726
60 0.5442200664786145
60 0.5422443002350269
60 0.5402666760415243
60 0.5382872337314454
60 0.5363060131747485
60 0.5343230542772099
60 0.5323383969796193
60 0.5303520812569759
60 0.5283641471176826
61 0.5263746346027406
61 0.5243835837849431
61 0.522391034768067
61 0.5203970276860669
61 0.5184016027022651
61 0.5164048000085435
61 0.514406659824534
61 0.5124072223968081
61 0.5104065279980665
61 0.5084046169263278
62 0.5064015295041165
62 0.5043973060776519
62 0.5023919870160339
62 0.5003856127104316
62 0.498378223573268
62 0.4963698600374071
62 0.4943605625553396
62 0.4923503715983676
62 0.4903393276557891
62 0.4883274712340836
63 0.48631484285609533
63 0.48430148306021725
63 0.4822874323995745
63 0.4802727314412078
63 0.4782574207652559
63 0.4762415409641388
63 0.4742251326417396
63 0.4722082364125871
63 0.47019089290103716
63 0.46817314274045513
64 0.4661550265723973
64 0.4641365850457914
64 0.46211785881611916
64 0.4600988885445966
64 0.45807971489735494
64 0.4560603785446223
64 0.45404092015990377
64 0.45202138041916257
64 0.4500018
64 0.44798221958083756
65 0.4459626798400962
65 0.4439432214553776
65 0.4419238851026452
65 0.43990471145540355
65 0.43788574118388096
65 0.4358670149542086
65 0.43384857342760275
65 0.43183045725954483
65 0.4298127070989631
65 0.4277953635874131
66 0.42577846735826047
66 0.42376205903586134
66 0.42174617923474417
66 0.41973086855879227
66 0.4177161676004255
66 0.4157021169397829
66 0.4136887571439047
66 0.4116761287659165
66 0.409664272344211
66 0.40765322840163254
67 0.4056430374446604
67 0.40363373996259305
67 0.4016253764267322
67 0.3996179872895685
67 0.3976116129839661
67 0.3956062939223482
67 0.39360207049588347
67 0.39159898307367236
67 0.38959707200193366
67 0.387596377603192
68 0.38559694017546603
68 0.38359879999145663
68 0.381601997297735
68 0.3796065723139331
68 0.3776125652319331
68 0.3756200162150571
68 0.37362896539725937
68 0.3716394528823174
68 0.3696515187430242
68 0.3676652030203807
69 0.3656805457227901
69 0.36369758682525155
69 0.3617163662685548
69 0.3597369239584757
69 0.35775929976497317
69 0.35578353352138536
69 0.35380966502362743
69 0.3518377340293905
69 0.34986778025733967
69 0.34789984338631574
70 0.34593396305453444
70 0.34397017885878894
70 0.3420085303536518
70 0.34004905705067867
70 0.33809179841761233
70 0.3361367938775871
70 0.3341840828083359
70 0.33223370454139606
70 0.33028569836131827
70 0.32834010350487375
71 0.3263969591602655
71 0.3244563044663378
71 0.32251817851178854
71 0.3205826203343816
71 0.318649668920161
71 0.3167193632026648
71 0.3147917420621417
71 0.3128668443247678
71 0.3109447087618636
71 0.3090253740891147
72 0.30710887896579053
72 0.3051952619939666
72 0.3032845617177468
72 0.3013768166224865
72 0.2994720651340176
72 0.2975703456178753
72 0.29567169637852464
72 0.2937761556585887
72 0.2918837616380791
72 0.2899945524336265
73 0.2881085660977129
73 0.286225840617905
73 0.2843464139160898
73 0.2824703238477098
73 0.28059760820100127
73 0.2787283046962327
73 0.27686245098494516
73 0.27500008464919395
73 0.27314124320079136
73 0.2712859640805519
74 0.2694342846575371
74 0.2675862422283035
74 0.26574187401615146
74 0.26390121717037474
74 0.262064308765513
74 0.2602311858006049
74 0.2584018851984421
74 0.256576443804827
74 0.2547548983878293
74 0.2529372856370457
75 0.2511236421628615
75 0.2493140044957122
75 0.24750840908534905
75 0.2457068923001035
75 0.24390949042615545
75 0.24211623966680218
75 0.24032717614172922
75 0.2385423358862824
75 0.23676175485074308
75 0.23498546889960284
76 0.23321351381084143
76 0.23144592527520702
76 0.22968273889549612
76 0.22792399018583642
76 0.2261697145709732
76 0.22441994738555363
76 0.22267472387341627
76 0.22093407918688127
76 0.21919804838604157
76 0.2174666664380574
77 0.21573996821645158
77 0.21401798850040712
77 0.2123007619740674
77 0.21058832322583632
77 0.20888070674768236
77 0.20717794693444452
77 0.2054800780831376
77 0.2037871343922632
77 0.20209914996112052
77 0.20041615878911878
78 0.1987381947750929
78 0.19706529171662146
78 0.19539748330934395
78 0.19373480314628433
78 0.19207728471717297
78 0.19042496140777243
78 0.1887778664992055
78 0.18713603316728405
78 0.18549949448184128
78 0.1838682834060655
79 0.18224243279583638
79 0.18062197539906266
79 0.17900694385502355
79 0.17739737069371
79 0.1757932883351704
79 0.17419472908885725
79 0.17260172515297642
79 0.17101430861383857
79 0.16943251144521326
79 0.16785636550768426
80 0.166285902548008
80 0.1647211541984751
80 0.16316215197627115
80 0.16160892728284432
80 0.16006151140327105
80 0.1585199355056266
80 0.15698423064035763
80 0.1554544277396561
80 0.15393055761683602
80 0.1524126509657143
81 0.15090073835999077
81 0.14939485025263313
81 0.14789501697526403
81 0.14640126873754924
81 0.14491363562658963
81 0.143432147606315
81 0.14195683451688074
81 0.1404877260740669
81 0.13902485186867908
81 0.13756824136595283
82 0.13611792390496058
82 0.13467392869801936
82 0.13323628483010372
82 0.13180502125825977
82 0.1303801668110211
82 0.12896175018782863
82 0.1275497999584529
82 0.12614434456241808
82 0.12474541230842875
82 0.12335303137380117
83 0.12196722980389427
83 0.12058803551154529
83 0.11921547627650797
83 0.11784957974489224
83 0.11649037342860791
83 0.11513788470481029
83 0.11379214081534877
83 0.11245316886621853
83 0.11112099582701393
83 0.10979564853038543
84 0.1084771536714998
84 0.10716553780750131
84 0.10586082735697758
84 0.10456304859942765
84 0.10327222767473189
84 0.10198839058262588
84 0.10071156318217718
84 0.09944177119126327
84 0.09817904018605504
84 0.09692339560050058
85 0.09567486272581295
85 0.0944334667099616
85 0.09319923255716477
85 0.09197218512738659
85 0.09075234913583588
85 0.08953974915246876
85 0.08833440960149337
85 0.08713635476087839
85 0.08594560876186358
85 0.0847621955884738
86 0.08358613907703608
86 0.08241746291569958
86 0.08125619064395803
86 0.0801023456521764
86 0.07895595118111878
86 0.07781703032148082
86 0.07668560601342511
86 0.07556170104611765
86 0.07444533805727052
86 0.07333653953268494
87 0.0722353278057983
87 0.07114172505723505
87 0.07005575331435923
87 0.06897743445083077
87 0.06790679018616574
87 0.06684384208529796
87 0.06578861155814493
87 0.06474111985917706
87 0.06370138808698875
87 0.06266943718387402
88 0.06164528793540441
88 0.06062896097001043
88 0.05962047675856619
88 0.05861985561397678
88 0.05762711769076911
88 0.05664228298468669
88 0.055665371332285565
88 0.05469640241053582
88 0.053735395736425245
88 0.05278237066656551
89 0.05183734639680274
89 0.050900341961831005
89 0.049971376234808636
89 0.049050467926978036
89 0.04813763558728912
89 0.04723289760202549
89 0.04633627219443399
89 0.045447777424358106
89 0.04456743118787366
89 0.043695251216928535
90 0.04283125507898573
90 0.04197546017666913
90 0.0411278837474135
90 0.040288542863116845
90 0.039457454429796486
90 0.038634635187249286
90 0.0378201017087132
90 0.03701387040053472
90 0.036215957501837835
90 0.03542637908419683
91 0.034645151051312745
91 0.03387228913869334
91 0.033107808913335326
91 0.03235172577341185
91 0.031604054947961494
91 0.03086481149658197
91 0.03013401030912678
91 0.029411666105405163
91 0.02869779343488541
91 0.02799240667640248
92 0.027295520037867647
92 0.026607147555982664
92 0.02592730309595721
92 0.025256000351229147
92 0.024593252843189205
92 0.023939073920908247
92 0.023293476760868428
92 0.022656474366698362
92 0.02202807956891024
92 0.021408305024642187
93 0.020797163217403044
93 0.02019466645682076
93 0.01960082687839475
93 0.01901565644325133
93 0.018439166937902776
93 0.01787136997400984
93 0.017312276988148124
93 0.01676189924157752
93 0.016220247820015295
93 0.015687333633413115
94 0.01516316741573703
94 0.014647759724751283
94 0.014141120941806046
94 0.013643261271627584
94 0.013154190742113462
94 0.012673919204130019
94 0.01220245633131427
94 0.011739811619878967
94 0.0112859943884213
94 0.010841013777735144
95 0.010404878750627171
95 0.009977598091735927
95 0.00955918040735534
95 0.009149634125261197
95 0.008748967494541088
95 0.008357188585428789
95 0.007974305289141437
95 0.007600325317720317
95 0.007235256203876128
95 0.006879105300836759
96 0.006531879782199411
96 0.006193586641786006
96 0.005864232693502408
96 0.005543824571201125
96 0.005232368728547625
96 0.004929871438890486
96 0.0046363387951349954
96 0.004351776709620304
96 0.0040761909140005674
96 0.0038095869591291917
97 0.0035519702149473717
97 0.0033033458703756284
97 0.0030637189332094923
97 0.002833094230018636
97 0.0026114764060495006
97 0.0023988699251317226
97 0.0021952790695886053
97 0.002000707940150287
97 0.0018151604558716108
97 0.0016386403540530337
98 0.0014711511901651904
98 0.0013126963377774997
98 0.0011632789884901693
98 0.0010229021518698468
98 0.0008915686553889694
98 0.0007692811443689595
98 0.0006560420819268169
98 0.0005518537489255094
98 0.0004567182439282091
98 0.00037063748315562726
99 0.0002936132004479442
99 0.00022564694722943833
99 0.00016674009247751078
99 0.00011689382269495772
99 7.610914188608963e-05
99 4.438687153664732e-05
99 2.172765059711549e-05
99 8.131935469832964e-06
99 3.6000000000000003e-06
99 8.131935469832964e-06


lr_scheduler.CosineAnnealingWarmRestarts
torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0, T_mult=1, eta_min=0, last_epoch=-1, verbose=False)

Sets the learning rate of each parameter group using a cosine annealing schedule with warm restarts, where T_cur counts the epochs since the last restart and T_i is the number of epochs between two restarts:

\eta_t = \eta_{min} + \frac{1}{2}(\eta_{max} - \eta_{min})\left(1 + \cos\left(\frac{T_{cur}}{T_{i}}\pi\right)\right)

See the paper SGDR: Stochastic Gradient Descent with Warm Restarts.

import torch
import torch.optim as optim
from torchvision import models
from matplotlib import pyplot as plt

model = models.resnet152(pretrained=True)


optimizer = optim.SGD(params=model.parameters(), lr=0.1, momentum=0.9)
scheduler = optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0=1, T_mult=2)

x = [i for i in range(100)]
y = []

batch = 10

for epoch in range(10):
    for b in range(batch):
        scheduler.step(epoch + b/batch)
        lr = scheduler.get_lr()
        print(epoch, scheduler.get_lr()[0])
        y.append(lr)


plt.plot(x, y)
plt.show()
0 0.1
0 0.09755282581475769
0 0.09045084971874738
0 0.07938926261462366
0 0.06545084971874737
0 0.05
0 0.03454915028125263
0 0.02061073738537635
0 0.009549150281252633
0 0.0024471741852423235
1 0.1
1 0.0993844170297569
1 0.09755282581475769
1 0.09455032620941839
1 0.09045084971874738
1 0.08535533905932738
1 0.07938926261462365
1 0.07269952498697733
1 0.06545084971874737
1 0.05782172325201156
2 0.05
2 0.04217827674798845
2 0.034549150281252626
2 0.027300475013022685
2 0.02061073738537635
2 0.014644660940672627
2 0.009549150281252633
2 0.0054496737905816
2 0.002447174185242329
2 0.0006155829702431171
3 0.1
3 0.0998458666866564
3 0.0993844170297569
3 0.09861849601988384
3 0.09755282581475769
3 0.09619397662556434
3 0.09455032620941839
3 0.09263200821770462
3 0.09045084971874738
3 0.08802029828000156
4 0.08535533905932738
4 0.0824724024165092
4 0.07938926261462365
4 0.07612492823579747
4 0.07269952498697733
4 0.0691341716182545
4 0.06545084971874739
4 0.06167226819279527
4 0.05782172325201156
4 0.053922954786392245
5 0.05
5 0.04607704521360777
5 0.04217827674798845
5 0.03832773180720475
5 0.034549150281252626
5 0.030865828381745515
5 0.027300475013022685
5 0.02387507176420256
5 0.02061073738537635
5 0.017527597583490807
6 0.014644660940672627
6 0.011979701719998471
6 0.009549150281252633
6 0.007367991782295392
6 0.0054496737905816
6 0.0038060233744356634
6 0.002447174185242329
6 0.0013815039801161723
6 0.0006155829702431171
6 0.0001541333133436018
7 0.1
7 0.09996145181203615
7 0.0998458666866564
7 0.09965342284774632
7 0.0993844170297569
7 0.09903926402016153
7 0.09861849601988384
7 0.09812276182268237
7 0.09755282581475769
7 0.09690956679612421
8 0.09619397662556434
8 0.09540715869125407
8 0.0945503262094184
8 0.09362480035363985
8 0.09263200821770462
8 0.09157348061512727
8 0.09045084971874738
8 0.08926584654403727
8 0.08802029828000153
8 0.08671612547178427
9 0.08535533905932738
9 0.0839400372766471
9 0.0824724024165092
9 0.08095469746549169
9 0.07938926261462365
9 0.07777851165098011
9 0.07612492823579747
9 0.07443106207484776
9 0.07269952498697733
9 0.0709329868768714


Using the official API

For how to use these dynamic learning-rate strategies, the PyTorch documentation helpfully provides example code; we will explain the usage based on that official template.

# choose an optimizer
optimizer = torch.optim.Adam(...) 
# choose one or more of the dynamic learning-rate methods described above
scheduler1 = torch.optim.lr_scheduler.... 
scheduler2 = torch.optim.lr_scheduler....
...
schedulern = torch.optim.lr_scheduler....
# training
for epoch in range(100):
    train(...)
    validate(...)
    optimizer.step()
    # the learning rate is adjusted only after the optimizer has updated the parameters
    scheduler1.step() 
    ...
    schedulern.step()

Note

When using the official torch.optim.lr_scheduler, scheduler.step() must be placed after optimizer.step().
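
One scheduler is a slight exception to the template above: ReduceLROnPlateau decides whether to decay based on a monitored metric, so that metric has to be passed to step(). In the same pseudocode style as the template, this looks like:

# ReduceLROnPlateau must be given the monitored metric (e.g. validation loss)
val_metric = validate(...)
scheduler.step(val_metric)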

Custom scheduler

Although PyTorch offers plenty of scheduler APIs, an experiment may still need a learning-rate policy of our own. The approach is to write a custom function, adjust_learning_rate, that changes the lr value of each param_group; a simple implementation is given below.

Suppose our experiment needs the learning rate to drop to 1/10 of its value every 30 epochs, and suppose no existing official API meets this need; then we write our own function to change the learning rate.

def adjust_learning_rate(optimizer, epoch):
    lr = args.lr * (0.1 ** (epoch // 30))   # divide the base lr by 10 every 30 epochs
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr

With adjust_learning_rate defined, we can call it during training to change the learning rate dynamically:

def adjust_learning_rate(optimizer,...):
    ...
    
  
optimizer = torch.optim.SGD(model.parameters(), lr=args.lr, momentum=0.9)
for epoch in range(10):
    train(...)
    validate(...)
    adjust_learning_rate(optimizer,epoch)
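
As a quick sanity check of the function defined above (a minimal sketch with a single throwaway parameter and a stand-in for the usual argparse args), it produces the expected staircase schedule:

import torch

class Args:
    lr = 0.1          # stand-in for args.lr

args = Args()
optimizer = torch.optim.SGD([torch.zeros(1, requires_grad=True)], lr=args.lr)
for epoch in (0, 29, 30, 59, 60):
    adjust_learning_rate(optimizer, epoch)
    print(epoch, optimizer.param_groups[0]['lr'])   # 0.1, 0.1, 0.01, 0.01, 0.001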



Model Fine-tuning

As deep learning has developed, models have grown ever larger, and many open-source models are trained on large datasets such as ImageNet-1k, ImageNet-11k, or even ImageNet-21k. In practice, however, our own dataset may contain only a few thousand images, and training a large network with tens of millions of parameters from scratch is then unrealistic: the larger the model, the more data it needs, and overfitting becomes unavoidable.

Suppose we want to recognize different kinds of chairs in images and then recommend purchase links to users. One possible approach is to pick 100 common kinds of chairs, take 1000 images of each from different angles, and train a classification model on the collected images. Although this chair dataset may be larger than Fashion-MNIST, its sample count is still less than a tenth of ImageNet's. A complex model suited to ImageNet would likely overfit on this chair dataset, and with limited data the final model's accuracy may also fall short of practical requirements.

One obvious remedy is to collect more data. However, collecting and labeling data costs a great deal of time and money; collecting ImageNet, for example, cost researchers millions of dollars in funding. Although data collection has become much cheaper, the cost is still not negligible.

Another remedy is transfer learning: transferring knowledge learned from a source dataset to a target dataset. For example, even though most ImageNet images have nothing to do with chairs, a model trained on ImageNet extracts fairly general image features that help identify edges, textures, shapes, and object composition, and those features may be just as effective for recognizing chairs.

A major application of transfer learning is model fine-tuning (finetune). Put simply, we take a model of the same kind that someone else has already trained, swap in our own data, and continue training to adjust the parameters. PyTorch provides many pretrained networks (VGG, the ResNet family, the MobileNet family, ...), all trained by the PyTorch team on the corresponding large datasets. Learning how to fine-tune lets us quickly apply pretrained models to our own tasks.

After studying this section, you will:

  • know the fine-tuning workflow
  • know the common models PyTorch provides
  • know how to train only selected layers of a model

The fine-tuning workflow

  • 1. Pretrain a neural network model, the source model, on a source dataset (e.g. ImageNet).
  • 2. Create a new neural network model, the target model. It copies all of the source model's design and parameters except the output layer. We assume these parameters contain knowledge learned from the source dataset that also applies to the target dataset, and we assume the source model's output layer is tightly coupled to the source dataset's labels, so it is not reused in the target model.
  • 3. Add to the target model an output layer whose size equals the number of classes in the target dataset, and randomly initialize its parameters.
  • 4. Train the target model on the target dataset. The output layer is trained from scratch, while the parameters of the remaining layers are fine-tuned from the source model's parameters (a compact sketch follows this list).
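
A compact sketch of steps 2-4 (assuming torchvision's resnet18 as the source model and a hypothetical 10-class target dataset):

import torch.nn as nn
import torchvision.models as models

model = models.resnet18(pretrained=True)         # steps 1-2: copy the pretrained source model
model.fc = nn.Linear(model.fc.in_features, 10)   # step 3: new, randomly initialized output layer
# step 4: train on the target dataset as usual; the new fc layer is learned from scratch,
# while the copied layers are fine-tuned starting from their pretrained values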


Using existing model architectures

Here we take the common models in torchvision as an example and show how to use the model architectures and parameters PyTorch provides for image classification; other tasks and network architectures are used in much the same way:

Instantiate the network
Pass the pretrained argument
import torchvision.models as models


resnet18 = models.resnet18()
# resnet18 = models.resnet18(pretrained=False)  # equivalent to the line above

'''
alexnet = models.alexnet()
vgg16 = models.vgg16()
squeezenet = models.squeezenet1_0()
densenet = models.densenet161()
inception = models.inception_v3()
googlenet = models.googlenet()
shufflenet = models.shufflenet_v2_x1_0()
mobilenet_v2 = models.mobilenet_v2()
mobilenet_v3_large = models.mobilenet_v3_large()
mobilenet_v3_small = models.mobilenet_v3_small()
resnext50_32x4d = models.resnext50_32x4d()
wide_resnet50_2 = models.wide_resnet50_2()
mnasnet = models.mnasnet1_0()
'''

Whether the pretrained weights are used is decided by passing True or False.

By default pretrained=False, meaning that we do not use weights obtained from pretraining;

with pretrained=True, we use weights pretrained on some dataset.

import torchvision.models as models

resnet18 = models.resnet18(pretrained=True)
alexnet = models.alexnet(pretrained=True)
squeezenet = models.squeezenet1_0(pretrained=True)
vgg16 = models.vgg16(pretrained=True)
densenet = models.densenet161(pretrained=True)
inception = models.inception_v3(pretrained=True)
googlenet = models.googlenet(pretrained=True)
shufflenet = models.shufflenet_v2_x1_0(pretrained=True)
mobilenet_v2 = models.mobilenet_v2(pretrained=True)
mobilenet_v3_large = models.mobilenet_v3_large(pretrained=True)
mobilenet_v3_small = models.mobilenet_v3_small(pretrained=True)
resnext50_32x4d = models.resnext50_32x4d(pretrained=True)
wide_resnet50_2 = models.wide_resnet50_2(pretrained=True)
mnasnet = models.mnasnet1_0(pretrained=True)

Notes:

  1. PyTorch model weight files usually use the extension .pt or .pth. When the program runs it first checks the default path for already-downloaded weights; once the weights have been downloaded they do not need to be downloaded again.

  2. Downloading pretrained weights is usually slow. You can look up model_urls for your model in pytorch/vision and download the file yourself (with a download manager or any other tool). On Linux and Mac the default download path for pretrained weights is the .cache folder under the user's home directory; on Windows it is C:\Users\<username>\.cache\torch\hub\checkpoint. The download location of the weights can also be set with torch.utils.model_zoo.load_url() (a short sketch follows this list).

  3. If that still feels like too much trouble, you can also download the weights yourself, put the file in the working folder, and then load the parameters into the network:

    self.model = models.resnet50(pretrained=False)
    self.model.load_state_dict(torch.load('./model/resnet50-19c8e357.pth'))
    
  4. If a download is interrupted halfway, be sure to delete the partial weight file from the corresponding path, otherwise errors may occur.
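
For note 2, a minimal sketch of downloading weights to a directory of your own choosing (the URL is whatever you copied from model_urls; the one below just reuses the resnet50 file name mentioned above):

import torch.utils.model_zoo as model_zoo
import torchvision.models as models

model = models.resnet50(pretrained=False)
# model_dir controls where the downloaded file is cached
state_dict = model_zoo.load_url('https://download.pytorch.org/models/resnet50-19c8e357.pth',
                                model_dir='./model')
model.load_state_dict(state_dict)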

Training specific layers

By default every parameter has .requires_grad = True, which is fine if we are training from scratch or fine-tuning everything. But if we are extracting features and only want to compute gradients for the newly initialized layers, leaving the other parameters untouched, we need to freeze part of the layers by setting requires_grad = False.

PyTorch's official tutorials provide the following helper routine for this.

def set_parameter_requires_grad(model, feature_extracting):
    if feature_extracting:
        for param in model.parameters():
            param.requires_grad = False

Below we again use resnet152 as an example and change the 1000-class output to 4 classes, modifying only the parameters of the final layer while leaving the feature-extraction parameters unchanged. Note that we first freeze the gradients of the model parameters and only then replace the fully connected output layer; as a result, only the parameters of the newly created fully connected layer have their gradients computed.

import torch.nn as nn
import torchvision.models as models

# freeze the gradients of the pretrained parameters
feature_extract = True
model = models.resnet152(pretrained=True)

set_parameter_requires_grad(model, feature_extract)

# modify the model: replace the classification head with a 4-class layer
num_fits = model.fc.in_features
model.fc = nn.Linear(in_features=num_fits, out_features=4, bias=True)
print(model.fc)
Linear(in_features=2048, out_features=4, bias=True)

In the subsequent training, gradients are still propagated backwards through the model, but parameter updates only happen in the fc layer. By setting the requires_grad attribute of parameters we have achieved the goal of training only specific layers of the model, which is essential for fine-tuning.
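
When building the optimizer it is common to hand over only the trainable parameters; a minimal sketch (the optimizer type and learning rate here are arbitrary choices):

import torch.optim as optim

params_to_update = [p for p in model.parameters() if p.requires_grad]   # only the new fc layer
optimizer = optim.SGD(params_to_update, lr=0.001, momentum=0.9)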



Half-Precision Training

Whenever PyTorch comes up, we think of GPU support, i.e. the graphics card.

GPU performance has two main aspects: compute power and memory. The former determines how fast the card computes, the latter how much data it can hold for computation at once.

With a fixed amount of available memory, being able to load more data per training step (i.e. a larger batch size) also improves training efficiency. In addition, the data itself may be large (e.g. 3D images or video), and on a card with little memory even a batch size of 1 may be impossible. Using memory sensibly therefore matters a great deal.

PyTorch's default floating-point storage format is torch.float32, single precision. More digits certainly give higher numerical accuracy, but most scenarios do not actually need that much precision, and keeping only half of the information does not affect the result; that is the torch.float16 format. Because the number of bits is cut in half it is called "half precision".


A floating-point number is split into three parts: the sign bit, the exponent, and the fraction (mantissa). Different precisions differ only in the lengths of the exponent and fraction fields.

Decoding a floating-point number takes just five rules:

  • If the exponent bits are all zero and the fraction bits are all zero, the value is 0
  • If the exponent bits are all zero and the fraction bits are non-zero, the value is a very small (subnormal) number, computed as (−1)^signbit × 2^−126 × 0.fractionbits
  • If the exponent bits are all ones and the fraction bits are all zero, the value is positive or negative infinity
  • If the exponent bits are all ones and the fraction bits are non-zero, the value is not a number (NaN)
  • Otherwise the value is (−1)^signbit × 2^(exponentbits−127) × 1.fractionbits

Clearly, half precision reduces memory usage, letting the GPU load more data for computation at the same time.
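
A small check (run on the CPU, purely to illustrate the storage saving and the reduced value range of float16):

import torch

x32 = torch.randn(1024, 1024)                       # default dtype: torch.float32
x16 = x32.half()                                    # cast to torch.float16
print(x32.element_size(), x16.element_size())       # 4 bytes vs. 2 bytes per element
print(torch.finfo(torch.float16))                   # value range and precision of half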

This section explains how to enable half-precision computation in PyTorch.

After studying this section, you will:

  • know how to set up half-precision training in PyTorch
  • know the caveats of using half-precision training

Setting up half-precision training

Half-precision training in PyTorch is configured with autocast, and three places need to be set up:

PyTorch 1.6 or later is required.

Import autocast
from torch.cuda.amp import autocast
Model setup

In the model definition, use Python's decorator syntax and decorate the model's forward function with autocast. For how decorators work, see here.

@autocast()
def forward(self, x):
    ...
    return x
Training loop

During training, just put the part that feeds data into the model (and everything after it) inside with autocast():

for x in train_loader:
    x = x.cuda()
    with autocast():
        output = model(x)
        ...
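
In practice autocast is usually combined with torch.cuda.amp.GradScaler, which scales the loss so that float16 gradients do not underflow. The following is a minimal sketch of a full iteration (model, optimizer, criterion, and a train_loader yielding (x, y) pairs are assumptions here):

from torch.cuda.amp import autocast, GradScaler

scaler = GradScaler()
for x, y in train_loader:
    x, y = x.cuda(), y.cuda()
    optimizer.zero_grad()
    with autocast():                  # forward pass runs in mixed precision
        output = model(x)
        loss = criterion(output, y)
    scaler.scale(loss).backward()     # scale the loss before backward
    scaler.step(optimizer)            # unscales gradients, then calls optimizer.step()
    scaler.update()                   # adjust the scale factor for the next iteration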

Note

Half-precision training is mainly useful when the data itself is large (e.g. 3D images or video).

When the data is small (for example, the images in MNIST are only 28*28), half-precision training may not bring a noticeable improvement.
