Today I worked through Lecture 3 of the PyTorch practice course by 刘二大人 on Bilibili.
It covers the gradient descent algorithm. The lecture starts with the mathematical model behind gradient descent.
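For the linear model y = w·x used in the lecture, the cost, its gradient, and the update rule work out as follows (reconstructed here from the code below, since my note did not copy down the formulas):

$$\mathrm{cost}(w) = \frac{1}{N}\sum_{n=1}^{N}\left(w x_n - y_n\right)^2$$

$$\frac{\partial\,\mathrm{cost}}{\partial w} = \frac{1}{N}\sum_{n=1}^{N} 2 x_n \left(w x_n - y_n\right)$$

$$w \leftarrow w - \alpha\,\frac{\partial\,\mathrm{cost}}{\partial w}$$

where α is the learning rate (0.001 in the code below).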
Next, the code implementation:
import matplotlib.pyplot as plt

x_data = [1, 2, 3]
y_data = [2, 4, 6]

def forward(x):
    return w * x

def cost(xs, ys):
    # mean squared error over the whole dataset
    cost = 0
    for x, y in zip(xs, ys):
        y_pred = forward(x)
        cost += (y_pred - y) ** 2
    return cost / len(xs)

def gradient(xs, ys):
    # analytic gradient of the cost with respect to w
    grad = 0
    for x, y in zip(xs, ys):
        grad += 2 * x * (x * w - y)
    return grad / len(xs)

w = 1.0
cost_list = []
print('Predict (before training)', 4, forward(4))
for epoch in range(1000):
    cost_val = cost(x_data, y_data)
    grad_val = gradient(x_data, y_data)
    w -= 0.001 * grad_val  # learning rate 0.001
    cost_list.append(cost_val)
    print('epoch =', epoch, 'w =', w, 'cost =', cost_val)
print('Predict (after training)', 4, forward(4))

plt.plot(cost_list, color='cyan')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.show()
The resulting visualization is a plot of the cost against the epoch number (figure not included here).
In real neural-network computation, however, a variant of gradient descent is generally used instead: stochastic gradient descent (SGD), which updates w after every single training sample rather than after a full pass over the dataset.
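Concretely, SGD replaces the averaged cost gradient with the gradient of one sample's loss (this matches the loss and gradient functions in the code below):

$$\mathrm{loss}_n(w) = (w x_n - y_n)^2,\qquad \frac{\partial\,\mathrm{loss}_n}{\partial w} = 2 x_n (w x_n - y_n),\qquad w \leftarrow w - \alpha\,\frac{\partial\,\mathrm{loss}_n}{\partial w}$$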
The source code:
import numpy as np
import matplotlib.pyplot as plt

x_data = [1, 2, 3]
y_data = [2, 4, 6]

def forward(x):
    return w * x

def loss(x, y):
    # squared error for a single sample
    y_pred = forward(x)
    return (y_pred - y) ** 2

def gradient(x, y):
    # gradient of the single-sample loss with respect to w
    return 2 * x * (x * w - y)

w = np.random.random()  # random initial weight
loss_list = []
epoch_list = []
print('Predict (before training)', 4, forward(4))
for epoch in range(100):
    for x, y in zip(x_data, y_data):  # update w after every sample
        grad = gradient(x, y)
        w -= 0.01 * grad  # learning rate 0.01
        loss_val = loss(x, y)
    epoch_list.append(epoch)
    loss_list.append(loss_val)  # record the last sample's loss each epoch
print('Predict (after training)', 4, forward(4))

plt.plot(epoch_list, loss_list, color='blue')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.show()
The visualization result is again a loss-versus-epoch curve (figure not included here).
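Since the course is ultimately about PyTorch, here is a minimal sketch (my own addition, not code from the lecture) of the same per-sample SGD loop written with autograd and torch.optim.SGD instead of a hand-derived gradient:

import torch

x_data = torch.tensor([1.0, 2.0, 3.0])
y_data = torch.tensor([2.0, 4.0, 6.0])
w = torch.tensor(1.0, requires_grad=True)  # trainable weight
optimizer = torch.optim.SGD([w], lr=0.01)

for epoch in range(100):
    for x, y in zip(x_data, y_data):  # one update per sample, as above
        loss = (w * x - y) ** 2
        optimizer.zero_grad()
        loss.backward()    # autograd computes d(loss)/dw
        optimizer.step()   # w -= lr * w.grad

print('Predict (after training)', 4, (w * 4).item())  # ≈ 8.0, i.e. w ≈ 2

This converges to the same w ≈ 2 as the hand-written gradient version, with autograd taking over the role of the gradient function.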