【1】Optimizer overview
(1) SGD
(2) Momentum
(3) RMSprop
(4) Adam
SGD is the most basic optimizer; it has no acceleration mechanism of its own. Momentum is an improved version of SGD that adds a momentum term. RMSprop is in turn an upgraded version of Momentum, and Adam is an upgraded version of RMSprop. Yet in the result below, Adam actually seems to do slightly worse than RMSprop, which shows that a more advanced optimizer does not always give a better result. In your own experiments, try different optimizers and find the one that best fits your data and network.
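To make the relationship between the four optimizers concrete, here is a minimal sketch of each update rule for a single parameter, written in plain Python. It is simplified for illustration only: the helper names and default hyperparameters are my own choices following common textbook notation, not PyTorch's exact internals.

# Minimal sketch of the four update rules for one scalar parameter w.
# Simplified; torch.optim adds details (dampening, eps placement, etc.).

def sgd(w, grad, lr=0.01):
    return w - lr * grad                      # step straight down the gradient

def momentum(w, grad, v, lr=0.01, beta=0.8):
    v = beta * v + grad                       # accumulate a velocity term
    return w - lr * v, v

def rmsprop(w, grad, s, lr=0.01, alpha=0.9, eps=1e-8):
    s = alpha * s + (1 - alpha) * grad ** 2   # running average of squared grads
    return w - lr * grad / (s ** 0.5 + eps), s

def adam(w, grad, v, s, t, lr=0.01, b1=0.9, b2=0.99, eps=1e-8):
    v = b1 * v + (1 - b1) * grad              # Momentum-style first moment
    s = b2 * s + (1 - b2) * grad ** 2         # RMSprop-style second moment
    v_hat = v / (1 - b1 ** t)                 # bias correction (t = step, from 1)
    s_hat = s / (1 - b2 ** t)
    return w - lr * v_hat / (s_hat ** 0.5 + eps), v, s

Reading the sketch top to bottom mirrors the lineage described above: Momentum adds the velocity v to SGD, RMSprop adds the squared-gradient average s, and Adam combines both moments with bias correction.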
【2】Code
# -*- coding: utf-8 -*-#
#-------------------------------------------------------------------------------
# Name: OptimizerYOUhuaqi
# Description:
# Author: Administrator
# Date: 2020/12/1
#-------------------------------------------------------------------------------
import torch
import torch.utils.data as Data
import torch.nn.functional as F
import matplotlib.pyplot as plt
torch.manual_seed(1)
LR=0.01
BATCH_SIZE=32
EPOCH=12
# fake dataset: y = x^2 plus Gaussian noise
x=torch.unsqueeze(torch.linspace(-1,1,500),dim=1)
y=x.pow(2)+0.1*torch.normal(torch.zeros(*x.size()))
#plot dataset
plt.scatter(x.numpy(),y.numpy())
plt.show()
#load data
torch_dataset=Data.TensorDataset(x,y)
loader=Data.DataLoader(dataset=torch_dataset,batch_size=BATCH_SIZE,shuffle=True)
#network
class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.hidden = torch.nn.Linear(1, 20)   # hidden layer
        self.predict = torch.nn.Linear(20, 1)  # output layer

    def forward(self, x):
        x = F.relu(self.hidden(x))  # hidden layer with ReLU activation
        x = self.predict(x)         # linear output
        return x
# one Net per optimizer
net_SGD      = Net()
net_Momentum = Net()
net_RMSprop  = Net()
net_Adam     = Net()
nets = [net_SGD, net_Momentum, net_RMSprop, net_Adam]
# one optimizer per net, in the same order as nets
opt_SGD      = torch.optim.SGD(net_SGD.parameters(), lr=LR)
opt_Momentum = torch.optim.SGD(net_Momentum.parameters(), lr=LR, momentum=0.8)
opt_RMSprop  = torch.optim.RMSprop(net_RMSprop.parameters(), lr=LR, alpha=0.9)
opt_Adam     = torch.optim.Adam(net_Adam.parameters(), lr=LR, betas=(0.9, 0.99))
optimizers = [opt_SGD, opt_Momentum, opt_RMSprop, opt_Adam]
loss_func = torch.nn.MSELoss()
losses_his = [[], [], [], []]  # record loss history for each net
for epoch in range(EPOCH):
    print('Epoch: ', epoch)
    for step, (b_x, b_y) in enumerate(loader):
        # train each net with its own optimizer on the same batch
        for net, opt, l_his in zip(nets, optimizers, losses_his):
            output = net(b_x)              # forward pass for this net
            loss = loss_func(output, b_y)  # compute loss for this net
            opt.zero_grad()                # clear gradients from the last step
            loss.backward()                # backpropagation, compute gradients
            opt.step()                     # apply gradients
            l_his.append(loss.item())      # record loss
labels = ['SGD', 'Momentum', 'RMSprop', 'Adam']  # same order as nets/optimizers
for i, l_his in enumerate(losses_his):
    plt.plot(l_his, label=labels[i])
plt.legend(loc='best')
plt.xlabel('Steps')
plt.ylabel('Loss')
plt.ylim((0, 0.2))
plt.show()
【3】Result plot
(Figure: training-loss curves of SGD, Momentum, RMSprop, and Adam over training steps)