Gradient Descent
Formula
The gradient descent algorithm is greedy: at every step it moves the weight a small step in the direction of the negative gradient (α, the learning rate, controls the step size), so it converges to a local optimum.
It is very hard to reach the global optimum this way!! The algorithm can get trapped in a local optimum; in neural networks, however, local optima turn out to be rare in practice.
Saddle point (shaped like a horse saddle): a point where the gradient is zero. Once the iteration reaches a saddle point, the gradient update can make no further progress, as illustrated below.
Non-convex function: local optimum points exist.
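A minimal one-dimensional illustration of a saddle point (an assumed example, not from the original notes): for f(w) = w³ the gradient vanishes at w = 0 even though that point is neither a minimum nor a maximum, so the update rule stalls there:

$$f(w) = w^3, \qquad f'(w) = 3w^2, \qquad f'(0) = 0 \;\Rightarrow\; w - \alpha f'(0) = w$$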
For the linear model ŷ = x · w with a mean-squared-error cost, differentiating the cost with respect to w gives the update formula:
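(The expressions below are reconstructed from the cost() and gradient() functions in the code that follows.)

$$\mathrm{cost}(w) = \frac{1}{N}\sum_{n=1}^{N}\left(x_n \cdot w - y_n\right)^2$$

$$\frac{\partial\,\mathrm{cost}}{\partial w} = \frac{1}{N}\sum_{n=1}^{N} 2\,x_n\left(x_n \cdot w - y_n\right)$$

$$w := w - \alpha\,\frac{\partial\,\mathrm{cost}}{\partial w}$$

(α is the learning rate, written a in the code.)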
Code
import matplotlib.pyplot as plt

x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]

w = 1.0  # initial weight

def forward(x):  # the model: y_hat = x * w
    return x * w

def cost(xs, ys):  # mean squared error over the whole training set
    total = 0
    for x, y in zip(xs, ys):
        y_pred = forward(x)
        total += (y_pred - y) ** 2
    return total / len(xs)

def gradient(xs, ys):  # d(cost)/dw, averaged over the training set
    grad = 0
    for x, y in zip(xs, ys):
        grad += 2 * x * (x * w - y)
    return grad / len(xs)

print('Predict (before training)', 4, forward(4))

a = 0.01  # learning rate
loss_list = []
epoch_list = []
for epoch in range(100):
    cost_val = cost(x_data, y_data)      # cost on the full training set
    grad_val = gradient(x_data, y_data)  # gradient of the cost w.r.t. w
    w -= a * grad_val                    # update step
    loss_list.append(cost_val)
    epoch_list.append(epoch)
    print('Epoch:', epoch, 'w=', w, 'Loss:', cost_val)

print('Predict (after training)', 4, forward(4))

plt.plot(epoch_list, loss_list)
plt.ylabel('Loss')   # vertical axis
plt.xlabel('Epoch')  # horizontal axis
plt.show()
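Since the averaged gradient above is computed sample by sample, the same batch update can also be written in vectorized form. A minimal sketch, assuming NumPy is available (the names xs and ys are illustrative, not from the original):

import numpy as np

xs = np.array([1.0, 2.0, 3.0])
ys = np.array([2.0, 4.0, 6.0])
w = 1.0   # initial weight
a = 0.01  # learning rate
for epoch in range(100):
    grad = (2 * xs * (xs * w - ys)).mean()  # same averaged gradient as gradient() above
    w -= a * grad
print(w)  # should approach 2.0, since the data satisfy y = 2x exactly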
Stochastic Gradient Descent
Formula
If there is a saddle point, plain gradient descent cannot proceed, because the gradient is zero there. Stochastic gradient descent uses only one sample per update, so the updates carry random noise, which makes it possible to push past the saddle point.
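For a single training sample (xₙ, yₙ), the per-sample loss and update are (matching the loss() and gradient() functions in the code below):

$$\mathrm{loss}(w) = \left(x_n \cdot w - y_n\right)^2$$

$$\frac{\partial\,\mathrm{loss}}{\partial w} = 2\,x_n\left(x_n \cdot w - y_n\right)$$

$$w := w - \alpha\,\frac{\partial\,\mathrm{loss}}{\partial w}$$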
Code
import matplotlib.pyplot as plt

x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]

w = 1.0  # initial weight

def forward(x):  # the model: y_hat = x * w
    return x * w

def loss(x, y):  # squared error of a single sample
    y_pred = forward(x)
    return (y_pred - y) ** 2

def gradient(x, y):  # d(loss)/dw for a single sample
    return 2 * x * (x * w - y)

print('Predict (before training)', 4, forward(4))

a = 0.01  # learning rate
loss_list = []
epoch_list = []
for epoch in range(100):
    for x, y in zip(x_data, y_data):
        grad = gradient(x, y)
        w = w - a * grad  # update w after every single sample
        print('\tgrad:', x, y, grad)
        l = loss(x, y)
    print('Epoch:', epoch, 'w=', w, 'Loss:', l)
    loss_list.append(l)  # loss of the last sample seen in this epoch
    epoch_list.append(epoch)

print('Predict (after training)', 4, forward(4))

plt.plot(epoch_list, loss_list)
plt.ylabel('Loss')   # vertical axis
plt.xlabel('Epoch')  # horizontal axis
plt.show()
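Note that the inner loop always visits the three samples in the same fixed order. SGD as usually practiced shuffles the sample order at every epoch for extra randomness; a minimal sketch, reusing w, a, x_data, y_data, and gradient() from the script above:

import random

samples = list(zip(x_data, y_data))
for epoch in range(100):
    random.shuffle(samples)  # a fresh random sample order each epoch
    for x, y in samples:
        w = w - a * gradient(x, y)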