While studying the stochastic gradient descent algorithm in Andrew Ng's course, I wanted to implement it myself, so I made up a simple linear function to experiment with.
First, generate random data with Python 3:
import pandas as pd
import numpy as np
import random
import matplotlib.pyplot as plt
X = np.random.randint(1,10,5)
Y = 1 + 2*X
data = pd.DataFrame([X,Y])
data = data.T
data.columns = ['x', 'y']
data['intercept'] = 1
X = data[['x','intercept']]
y = data[['y']]
X = X.to_numpy()  # DataFrame.as_matrix() was removed in pandas 1.0; use to_numpy()
y = y.to_numpy()
At first my own code kept producing an error that grew without bound, so I searched around and found that others had written essentially the same thing (see https://blog.csdn.net/kwame211/article/details/80364079). After finishing, I finally found the cause of the exploding error: the initial parameter settings, which matter enormously for the result:
1. the initial value of theta
2. the learning rate alpha
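For reference, the per-sample update that the loop below implements is, writing the i-th sample as x^(i) = [x_i, 1] with target y^(i):

```latex
J_i(\theta) = \tfrac{1}{2}\bigl(\theta^\top x^{(i)} - y^{(i)}\bigr)^2,
\qquad
\theta_j \leftarrow \theta_j - \alpha\,\bigl(\theta^\top x^{(i)} - y^{(i)}\bigr)\,x_j^{(i)}
```

Each iteration picks one sample at random and moves theta a step of size alpha down the gradient of that single sample's squared error.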
theta = [0, 1]    # initial parameters: [slope, intercept]
loss = 10         # start above eps so the loop runs at least once
alpha = 0.001     # learning rate
eps = 0.1         # convergence threshold on the loss
max_iters = 1000
error = 0
iter_count = 0
error_list = []
while loss > eps and iter_count < max_iters:
    loss = 0
    # pick one sample at random and take a single SGD step
    i = random.randint(0, 4)
    pred = theta[0]*X[i][0] + theta[1]*X[i][1]
    theta[0] = theta[0] - alpha*(pred - y[i])*X[i][0]
    theta[1] = theta[1] - alpha*(pred - y[i])*X[i][1]
    # evaluate the loss (note: summed over only the first 3 samples)
    for i in range(3):
        pred = theta[0]*X[i][0] + theta[1]*X[i][1]
        error = 0.5 * (pred - y[i])**2
        loss = loss + error
    error_list.append(loss)
    iter_count += 1
    print('iter_count:', iter_count)
print('theta:', theta)
print('final_loss:', loss)
print('iters:', iter_count)
This produced the following result:
theta: [array([1.92490034]), array([1.28644738])]
final_loss: [0.09858994]
iters: 123
1. Effect of the initial value of theta:
2. Effect of alpha (with theta = [1, 1]):
It is clear that alpha has a huge impact: at 0.05 the algorithm can no longer meet the convergence criterion at all. Of course, the maximum number of iterations and the convergence threshold eps also affect the result.
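The effect of alpha can be reproduced with a small sweep. This is a sketch under assumed data (a fixed seed, the same y = 1 + 2x setup), not the exact runs above; the final loss here is summed over all five samples:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(1, 10, 5).astype(float)    # same style of data as above
X = np.column_stack([x, np.ones_like(x)])   # [x, intercept]
y = 1 + 2 * x

def sgd_final_loss(alpha, iters=1000):
    """Run plain SGD for a fixed number of steps and return the final loss."""
    theta = np.array([0.0, 1.0])
    # Large alpha can overflow to inf, which is the divergence we want to see.
    with np.errstate(over='ignore', invalid='ignore'):
        for _ in range(iters):
            i = rng.integers(0, len(y))
            pred = X[i] @ theta
            theta -= alpha * (pred - y[i]) * X[i]
        residuals = X @ theta - y
        return 0.5 * float(np.sum(residuals**2))

for alpha in (0.001, 0.01, 0.05):
    print(alpha, sgd_final_loss(alpha))
```

For small alpha the loss shrinks toward zero; once alpha is large enough that a single step overshoots on the large-x samples, the error is amplified instead of reduced and the loss blows up.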
For strategies on choosing the gradient descent learning rate, see https://lumingdong.cn/setting-strategy-of-gradient-descent-learning-rate.html.