optimizer.zero_grad() zeroes the gradients, i.e., it resets the derivatives of the loss with respect to the weights to 0. This is needed because PyTorch accumulates gradients in each parameter's .grad attribute by default, so stale gradients from the previous batch must be cleared before the next backward pass.
While learning PyTorch, I noticed that the following operations are typically performed for each batch:
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
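Because gradients accumulate across backward passes, skipping zero_grad() mixes the previous batch's gradient into the current step. A minimal pure-Python sketch of that accumulation behavior (the scalar parameter, the loss (w - 3)^2, and the backward helper are illustrative, not PyTorch API):

```python
# Mimic PyTorch's gradient accumulation with one scalar parameter.
# Illustrative loss: (w - 3)^2, so d(loss)/dw = 2 * (w - 3).

w = 0.0
grad = 0.0  # plays the role of param.grad

def backward(w, grad):
    """Add the new gradient onto the old one, as loss.backward() does."""
    return grad + 2 * (w - 3)

# Without zeroing: gradients from two backward passes pile up.
grad = backward(w, grad)   # -6
grad = backward(w, grad)   # -12: the stale gradient is still inside
accumulated = grad

# With zeroing before each backward (what optimizer.zero_grad() does):
grad = 0.0
grad = backward(w, grad)   # -6: the correct per-batch gradient
fresh = grad

print(accumulated, fresh)  # -12.0 -6.0
```

This is why the zero_grad() call sits at the top of the loop: each optimizer.step() should use only the current batch's gradient.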
I understand these operations as a form of gradient descent. For comparison, here is a simple gradient descent implementation I wrote by hand earlier:
# gradient descent (n features, m samples; dot, input, label defined elsewhere)
weights = [0] * n
alpha = 0.0001
max_Iter = 50000
for i in range(max_Iter):
    loss = 0
    d_weights = [0] * n
    for k in range(m):
        h = dot(input[k], weights)
        d_weights = [d_weights[j] + (label[k] - h) * input[k][j] for j in range(n)]
        loss += (label[k] - h) * (label[k] - h) / 2
    # average the accumulated gradient over the m samples
    d_weights = [d_weights[k] / m for k in range(n)]
    # (label - h) * x is the negative gradient of the squared loss,
    # so adding alpha * d_weights moves the weights downhill
    weights = [weights[k] + alpha * d_weights[k] for k in range(n)]
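To make the loop above concrete, here is a self-contained runnable version on a tiny made-up dataset (the data, the dot helper, and the larger learning rate are my own choices for illustration):

```python
def dot(xs, ws):
    return sum(x * w for x, w in zip(xs, ws))

# Made-up noiseless data with label = 2*x1 + 1*x2, so the loop
# should recover weights close to [2, 1].
inputs = [[1, 0], [0, 1], [1, 1], [2, 1]]
labels = [2, 1, 3, 5]
m, n = len(inputs), len(inputs[0])

weights = [0.0] * n
alpha = 0.1       # larger than in the text so convergence is fast
max_iter = 2000

for _ in range(max_iter):
    d_weights = [0.0] * n
    for k in range(m):
        h = dot(inputs[k], weights)
        d_weights = [d_weights[j] + (labels[k] - h) * inputs[k][j]
                     for j in range(n)]
    # average over samples, then take one descent step
    d_weights = [d / m for d in d_weights]
    weights = [weights[j] + alpha * d_weights[j] for j in range(n)]

print([round(w, 3) for w in weights])  # [2.0, 1.0]
```

Note how the inner loop accumulates d_weights and the line d_weights = [0.0] * n resets it before every iteration: that reset is exactly what optimizer.zero_grad() does for PyTorch's .grad buffers.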