- Background: when choosing the optimal function, our goal is to minimize the loss function, as in the method of least squares. For some models, however, the loss function is so complex that no closed-form expression for the parameter estimates can be derived. We therefore need a more generally applicable way to find the optimal function: gradient descent.
- Principle: start from the loss value and update the parameters, while keeping the number of computations low. At every step the derivative tells us in which direction, and at what speed, a parameter should move so that the loss decreases safely and efficiently toward its minimum.
- Concept: the gradient is a vector, the derivative of a multivariate function, and it points in the direction in which the error increases fastest. Searching along the opposite direction of the gradient therefore reduces the error; this is gradient descent. The update rule is written out just below.
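In symbols, with learning rate $\eta$ and loss $J(\theta)$, one step of the loop described above is the standard update rule (consistent with the code that follows):

$$\theta_{t+1} = \theta_t - \eta \, \nabla J(\theta_t)$$

Iteration stops once the decrease in $J$ between two consecutive steps falls below a small threshold $\epsilon$.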
Implementation using the derivative:
```python
def gradient_descent(initial_theta, eta, n_iters, epsilon=1e-6):
    # lossFunction, its derivative dLF, and the theta_history list are assumed to be defined outside
    theta = initial_theta
    theta_history.append(theta)
    i_iters = 0
    while i_iters < n_iters:
        gradient = dLF(theta)               # derivative of the loss at the current theta
        last_theta = theta
        theta = theta - eta * gradient      # move against the gradient, scaled by the learning rate eta
        theta_history.append(theta)
        # stop once the loss barely changes between two consecutive steps
        if abs(lossFunction(theta) - lossFunction(last_theta)) < epsilon:
            break
        i_iters += 1
    return theta
```
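A minimal usage sketch, assuming a simple one-dimensional quadratic loss $J(\theta) = (\theta - 2.5)^2 + 1$; the names lossFunction, dLF and theta_history are exactly the ones the loop above expects, everything else here is illustrative:

```python
def lossFunction(theta):
    return (theta - 2.5) ** 2 + 1      # convex loss with its minimum at theta = 2.5

def dLF(theta):
    return 2 * (theta - 2.5)           # derivative of the loss

theta_history = []
theta = gradient_descent(initial_theta=0.0, eta=0.1, n_iters=1000)
print(theta)                           # approaches 2.5
print(len(theta_history))              # number of recorded parameter values
```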
- Gradient descent implementation for linear regression
```python
def fit_gd(self, X_train, y_train, eta=0.01, n_iters=1e4):
    """Train the Linear Regression model on the training set X_train, y_train using gradient descent"""
    assert X_train.shape[0] == y_train.shape[0], \
        "the size of X_train must be equal to the size of y_train"

    def J(theta, X_b, y):
        try:
            return np.sum((y - X_b.dot(theta)) ** 2) / len(y)
        except Exception:
            return float('inf')

    def dJ(theta, X_b, y):
        return X_b.T.dot(X_b.dot(theta) - y) * 2. / len(y)

    def gradient_descent(X_b, y, initial_theta, eta, n_iters=1e4, epsilon=1e-8):
        theta = initial_theta
        cur_iter = 0
        while cur_iter < n_iters:
            gradient = dJ(theta, X_b, y)
            last_theta = theta
            theta = theta - eta * gradient
            if abs(J(theta, X_b, y) - J(last_theta, X_b, y)) < epsilon:
                break
            cur_iter += 1
        return theta

    # prepend a column of ones so that theta[0] plays the role of the intercept
    X_b = np.hstack([np.ones((len(X_train), 1)), X_train])
    initial_theta = np.zeros(X_b.shape[1])
    self._theta = gradient_descent(X_b, y_train, initial_theta, eta, n_iters)

    self.intercept_ = self._theta[0]
    self.coef_ = self._theta[1:]
    return self
```
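The inner functions J and dJ implement the mean squared error and its gradient with respect to $\theta$, where $X_b$ is the feature matrix augmented with a leading column of ones and $m$ is the number of samples:

$$J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\left(y^{(i)} - X_b^{(i)}\theta\right)^{2}, \qquad \nabla J(\theta) = \frac{2}{m}\,X_b^{T}\left(X_b\theta - y\right)$$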
- Data normalization: when features are on very different scales, a single learning rate eta is too large for some dimensions and too small for others, so the features should be standardized before running gradient descent.
```python
from sklearn.preprocessing import StandardScaler

standardScaler = StandardScaler()
standardScaler.fit(X_train)
X_train_std = standardScaler.transform(X_train)

lin_reg3 = LinearRegression()          # the custom class that defines fit_gd above
lin_reg3.fit_gd(X_train_std, y_train)

# the test set must be transformed with the same scaler before scoring
X_test_std = standardScaler.transform(X_test)
lin_reg3.score(X_test_std, y_test)
```