The difference between stochastic gradient descent (SGD) and batch gradient descent is:
- Batch gradient descent plugs the current w into every sample x, sums the per-sample gradients, and averages them before updating w once; over the whole run, w is updated a total of `iterations` times.
- Stochastic gradient descent computes a gradient from each single sample (x, y) and updates w immediately before moving on to the next sample; over the whole run, w is updated a total of `iterations × number of samples` times.
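For contrast with the SGD code below, here is a minimal batch-gradient-descent sketch on the same toy data and model y = w * x (the learning rate 0.01 and initial guess w = 1.0 are assumptions, not values from the original):

```python
x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]

def batch_gradient(w):
    # average the gradient of (w*x - y)^2 over ALL samples before updating w
    return sum(2 * x * (w * x - y) for x, y in zip(x_data, y_data)) / len(x_data)

w = 1.0  # assumed initial guess
for epoch in range(100):
    w -= 0.01 * batch_gradient(w)  # exactly one update of w per epoch

print("w =", w)  # converges toward the true slope 2.0
```

Note that w changes only 100 times here, whereas the SGD version below with the same epoch count would update it 100 × 3 times.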
```python
import matplotlib.pyplot as plt

x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]

def gradient(w, x, y):
    # gradient of the squared error (w*x - y)^2 with respect to w
    return 2 * x * (w * x - y)

def mse(w):
    # mean squared error over the whole dataset
    return sum((w * x - y) ** 2 for x, y in zip(x_data, y_data)) / len(x_data)

epoch_list = []
mse_list = []
w = -100  # deliberately poor initial guess
for epoch in range(30):
    for x_val, y_val in zip(x_data, y_data):
        # SGD: update w immediately after computing each sample's gradient
        w -= 0.1 * gradient(w, x_val, y_val)
    print("epoch =", epoch + 1, "\tw =", w, "\tMSE =", mse(w))
    epoch_list.append(epoch + 1)
    mse_list.append(mse(w))

plt.plot(epoch_list, mse_list)
plt.ylabel("MSE")
plt.xlabel("Epoch")
plt.show()
```
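The loop above visits samples in a fixed order; "stochastic" gradient descent usually picks samples in random order. A sketch of that variant, which also counts updates to confirm the `iterations × number of samples` claim (the shuffling and seed are additions for illustration, not part of the original code):

```python
import random

x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]

random.seed(0)  # fixed seed so the demo is reproducible
w = 1.0
update_count = 0
for epoch in range(30):
    samples = list(zip(x_data, y_data))
    random.shuffle(samples)  # visit the samples in a random order each epoch
    for x, y in samples:
        w -= 0.1 * 2 * x * (w * x - y)  # immediate per-sample update
        update_count += 1

print("w =", w, "updates =", update_count)  # 30 epochs * 3 samples = 90 updates
```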