1. AdaGrad
Fitting a quartic function. The objective function:
$f(x) = 3.2x^4 + 1.5x^3 + 4.3x^2 + 9.03x - 15$
1.1 Principle
The principle is shown in the figure below, taken from Prof. Hung-yi Lee's lecture slides:
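Since the slide itself is an image, the update rule it illustrates can be summarized as follows (this is the standard AdaGrad form; the small constant $\epsilon$ is added here for numerical safety):

$\theta_{t+1} = \theta_t - \dfrac{\eta}{\sqrt{\sum_{i=0}^{t} g_i^2} + \epsilon}\, g_t$

where $g_t$ is the gradient at step $t$ and $\eta$ is the initial learning rate. Each parameter keeps its own accumulator of squared gradients, so parameters that have seen consistently large gradients take smaller and smaller steps.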
1.2 Code:
The learning rate chosen here is alpha = 8.5. Interestingly, the learning rate seems to have little effect on the result: I also tried 100 and 0.001, and the final fit depended only on the number of iterations. This makes sense, because AdaGrad divides each step by the accumulated gradient magnitude, so the raw gradient scale is normalized away (see the short sketch after the code). The descent is fast at the start but converges more and more slowly later on, which is a known drawback of AdaGrad:
import numpy as np
import random
import matplotlib.pyplot as plt
def my_Func(params, x):
    # Model: f(x) = p0*x^4 + p1*x^3 + p2*x^2 + p3*x - p4
    return params[0] * x ** 4 + params[1] * x ** 3 + params[2] * x ** 2 + params[3] * x - params[4]
def ge_Func():
    # Generate noisy samples of the target quartic
    num_data = 99
    x = np.linspace(-15, 15, num_data).reshape(num_data, 1)
    mid, sigma = -1, 1
    noise = np.random.normal(mid, sigma, num_data).reshape(num_data, 1)
    y = 3.2 * x ** 4 + 1.5 * x ** 3 + 4.3 * x ** 2 + 9.03 * x - 15 + noise
    return x, y
def get_Gradient(x, y, y_):
    # Mini-batch: sample 10 random points and average the negative gradient
    # of the squared loss with respect to each of the five parameters
    batch = random.sample(range(len(x)), 10)
    gradient = np.zeros(5)
    for i in batch:
        xi = x[i, 0]
        err = y[i, 0] - y_[i, 0]
        gradient[0] += (xi ** 4) * err
        gradient[1] += (xi ** 3) * err
        gradient[2] += (xi ** 2) * err
        gradient[3] += xi * err
        gradient[4] += -err
    return gradient / len(batch)
def gra_D():
    x, y = ge_Func()
    params = np.array([0.0, 0.0, 0.0, 0.0, 0.0])
    alpha = 8.5                        # initial learning rate
    eps = 1e-8                         # guards against division by zero
    grad_square_sum = np.zeros(5)      # per-parameter sum of squared gradients
    for step in range(10000):          # iteration count chosen for this sketch
        y_ = my_Func(params, x)
        gradient = get_Gradient(x, y, y_)
        grad_square_sum += gradient ** 2
        # AdaGrad update: get_Gradient returns the negative loss gradient,
        # so we move along +gradient, scaled by the accumulated magnitude
        params += alpha * gradient / (np.sqrt(grad_square_sum) + eps)
    return params, x, y

params, x, y = gra_D()
plt.scatter(x, y, s=5, label='noisy data')
plt.plot(x, my_Func(params, x), 'r', label='AdaGrad fit')
plt.legend()
plt.show()
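To make the earlier observation concrete, here is a minimal standalone sketch (not part of the fitting code; the constant gradient value is made up purely for illustration) that prints the effective per-parameter step size alpha / sqrt(sum of squared gradients):

import numpy as np

alpha = 8.5
grad_square_sum = 0.0
for t in range(1, 6):
    g = 100.0  # hypothetical constant gradient, for illustration only
    grad_square_sum += g ** 2
    print(f"step {t}: effective step = {alpha / np.sqrt(grad_square_sum):.4f}")

With a roughly constant gradient the effective step decays like 1/sqrt(t): the very first update has magnitude alpha no matter how large the raw gradient is, and every later update keeps shrinking. This matches the behavior noted above, where progress is fast in the first iterations and convergence slows toward the end.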