Gradient Descent

1. Introduction

  • Gradient descent is not a machine learning algorithm in itself
  • It is a search-based optimization method
  • Purpose: minimize a loss function
  • Gradient ascent: maximize a utility function (same idea with the sign flipped; see the update rule below)
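
Both rules share the same update step; written out for a single parameter with learning rate η:

$$\theta \leftarrow \theta - \eta\,\frac{dJ}{d\theta} \quad \text{(descent, minimize } J\text{)} \qquad \theta \leftarrow \theta + \eta\,\frac{dJ}{d\theta} \quad \text{(ascent, maximize } J\text{)}$$

The derivative gives the direction of fastest increase of J, so stepping against it decreases the loss.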

  • Not every function has a unique extremum; there can be multiple local minima
  • Solutions
    • Run gradient descent several times with randomized initial points
    • The initial point of gradient descent is itself a hyperparameter

2. Implementing Gradient Descent

import numpy as np
import matplotlib.pyplot as plt

# 141 evenly spaced points on the interval [-1, 6]
plot_x = np.linspace(-1, 6, 141)
plot_x


# the target function: y = (x - 2.5)^2 - 1
plot_y = (plot_x - 2.5)**2 - 1
plt.plot(plot_x, plot_y)
plt.show()


2.1 Computing the Slope (the Derivative)

  • dj returns the slope of the curve at the point theta
# slope (derivative) of the loss at theta
def dj(theta):
    return 2*(theta-2.5)
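
For reference, this is just the derivative of the quadratic defined above:

$$\frac{d}{d\theta}\left[(\theta-2.5)^2 - 1\right] = 2(\theta - 2.5)$$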

2.2 The Loss Value at theta

# loss value at theta
def j(theta):
    return (theta-2.5)**2 - 1

2.3 The Gradient Descent Loop

theta = 0.0
epsilon = 1e-8
eta = 0.1
while True:
    # slope at the current theta
    gradient = dj(theta)
    # remember the previous theta
    last_theta = theta
    # take one step against the gradient
    theta = theta - eta * gradient
    # stop once j(theta) barely changes (difference smaller than epsilon)
    if (abs(j(theta) - j(last_theta)) < epsilon):
        break
print(theta)     # should be very close to 2.5
print(j(theta))  # should be very close to -1
  • The loop may never reach the exact minimum, so we cannot test for it directly
  • Instead we set epsilon: once the change in j(theta) is smaller than epsilon, we treat theta as a minimum
  • For this parabola the minimum sits on the axis of symmetry, theta = 2.5
theta = 0.0
# theta_history stores every theta visited, so the descent path can be plotted
theta_history = [theta]
epsilon = 1e-8
eta = 0.1
while True:
    gradient = dj(theta)
    last_theta = theta
    theta = theta - eta * gradient
    # record the updated theta
    theta_history.append(theta)
    # stop once j(theta) barely changes
    if (abs(j(theta) - j(last_theta)) < epsilon):
        break

plt.plot(plot_x, j(plot_x))
plt.plot(np.array(theta_history), j(np.array(theta_history)), color="r", marker="+")
plt.show()


  • Learning rate eta = 0.01 (using the gradient_descent and plot_theta_history helpers defined in Section 3 below)
eta = 0.01
theta_history = []
gradient_descent(0., eta)
plot_theta_history()


  • eta = 0.8 (the steps overshoot back and forth across the minimum but still converge)
eta = 0.8
theta_history = []
gradient_descent(0., eta)
plot_theta_history()


3. Wrapping It into Functions

def gradient_descent(initial_theta, eta, n_iters=1e4, epsilon=1e-8):
    theta = initial_theta
    theta_history.append(initial_theta)
    # cap the number of iterations at n_iters to avoid an infinite loop
    i_iter = 0

    while i_iter < n_iters:
        gradient = dj(theta)
        last_theta = theta
        theta = theta - eta * gradient
        theta_history.append(theta)

        if (abs(j(theta) - j(last_theta)) < epsilon):
            break

        i_iter += 1


def plot_theta_history():
    plt.plot(plot_x, j(plot_x))
    plt.plot(np.array(theta_history), j(np.array(theta_history)), color="r", marker="+")
    plt.show()
eta = 1.1
theta_history = []
gradient_descent(0., eta)
# with eta = 1.1 every step moves further away from the minimum, so j(theta) blows up;
# plotting is skipped here
# plot_theta_history()


eta = 1.1
theta_history = []
# limit to 10 iterations so the divergence can be visualized
gradient_descent(0., eta, n_iters=10)
plot_theta_history()


4. Gradient Descent for Multiple Linear Regression


  • The gradient of the loss with respect to theta (the formulas below are what the code implements)

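For m samples, with X_b holding a leading column of ones for the intercept (exactly how the code below builds it), the loss and its gradient are:

$$J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\left(y^{(i)} - X_b^{(i)}\theta\right)^2$$

$$\nabla J(\theta) = \frac{2}{m}\begin{pmatrix} \sum_{i=1}^{m}\left(X_b^{(i)}\theta - y^{(i)}\right) \\ \sum_{i=1}^{m}\left(X_b^{(i)}\theta - y^{(i)}\right)X_1^{(i)} \\ \vdots \\ \sum_{i=1}^{m}\left(X_b^{(i)}\theta - y^{(i)}\right)X_n^{(i)} \end{pmatrix}$$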

import numpy as np
import matplotlib.pyplot as plt

np.random.seed(666)
x = 2 * np.random.random(size=100)
# np.random.normal with default arguments draws noise with mean 0 and standard deviation 1
y = x * 3. + 4. + np.random.normal(size=100)


plt.scatter(x,y)
plt.show()


4.1 Define the Loss Function j

def j(theta, X_b, y):
    # MSE loss
    try:
        return np.sum((y - X_b.dot(theta))**2) / len(X_b)
    except:
        # if theta has diverged the sum overflows; report an infinite loss instead
        return float('inf')

4.2 Define the Gradient dJ

def dj(theta, X_b, y):
    # gradient of the loss, computed component by component
    res = np.empty(len(theta))
    # intercept term: the corresponding column of X_b is all ones
    res[0] = np.sum(X_b.dot(theta) - y)
    for i in range(1, len(theta)):
        res[i] = (X_b.dot(theta) - y).dot(X_b[:, i])

    return res * 2 / len(X_b)  # multiply by 2/m
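
As a quick sanity check, the looped gradient above should agree with the fully vectorized expression used later inside the LinearRegression class. A minimal sketch (the *_demo arrays are made up for illustration):

import numpy as np
np.random.seed(0)
X_b_demo = np.hstack([np.ones((20, 1)), np.random.random((20, 3))])  # 20 samples: intercept column + 3 features
y_demo = np.random.random(20)
theta_demo = np.random.random(X_b_demo.shape[1])

loop_grad = dj(theta_demo, X_b_demo, y_demo)  # component-by-component version defined above
vec_grad = X_b_demo.T.dot(X_b_demo.dot(theta_demo) - y_demo) * 2. / len(X_b_demo)  # vectorized form
print(np.allclose(loop_grad, vec_grad))  # expected: True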

4.3 Define Gradient Descent

def gradient_descent(X_b, y, initial_theta, eta, n_iters=1e4, epsilon=1e-8):
    theta = initial_theta
    # cap the number of iterations at n_iters to avoid an infinite loop
    i_iter = 0

    while i_iter < n_iters:
        gradient = dj(theta, X_b, y)
        last_theta = theta
        theta = theta - eta * gradient

        if (abs(j(theta, X_b, y) - j(last_theta, X_b, y)) < epsilon):
            break

        i_iter += 1

    return theta
# add a column of ones for the intercept term
X_b = np.hstack([np.ones((len(x), 1)), x.reshape(-1, 1)])
# initialize theta to zeros
initial_theta = np.zeros(X_b.shape[1])
eta = 0.01

theta = gradient_descent(X_b, y, initial_theta, eta)
theta  # should come out close to [4, 3], matching the intercept 4 and slope 3 used to generate the data


import numpy as np
from sklearn.metrics import r2_score

class LinearRegression:

    def __init__(self):
        """初始化Linear Regression模型"""
        self.coef_ = None
        self.intercept_ = None
        self._theta = None

    def fit_normal(self, X_train, y_train):
        """根据训练数据集X_train, y_train训练Linear Regression模型"""
        assert X_train.shape[0] == y_train.shape[0], \
            "the size of X_train must be equal to the size of y_train"

        X_b = np.hstack([np.ones((len(X_train), 1)), X_train])
        self._theta = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y_train)

        self.intercept_ = self._theta[0]
        self.coef_ = self._theta[1:]

        return self

    def fit_gd(self, X_train, y_train, eta=0.01, n_iters=1e4):
        """根据训练数据集X_train, y_train, 使用梯度下降法训练Linear Regression模型"""
        assert X_train.shape[0] == y_train.shape[0], \
            "the size of X_train must be equal to the size of y_train"

        def J(theta, X_b, y):
            try:
                return np.sum((y - X_b.dot(theta)) ** 2) / len(y)
            except:
                return float('inf')

        def dJ(theta, X_b, y):
            # res = np.empty(len(theta))
            # res[0] = np.sum(X_b.dot(theta) - y)
            # for i in range(1, len(theta)):
            #     res[i] = (X_b.dot(theta) - y).dot(X_b[:, i])
            # return res * 2 / len(X_b)
            return X_b.T.dot(X_b.dot(theta) - y) * 2. / len(X_b)

        def gradient_descent(X_b, y, initial_theta, eta, n_iters=1e4, epsilon=1e-8):

            theta = initial_theta
            cur_iter = 0

            while cur_iter < n_iters:
                gradient = dJ(theta, X_b, y)
                last_theta = theta
                theta = theta - eta * gradient
                if (abs(J(theta, X_b, y) - J(last_theta, X_b, y)) < epsilon):
                    break

                cur_iter += 1

            return theta

        X_b = np.hstack([np.ones((len(X_train), 1)), X_train])
        initial_theta = np.zeros(X_b.shape[1])
        self._theta = gradient_descent(X_b, y_train, initial_theta, eta, n_iters)

        self.intercept_ = self._theta[0]
        self.coef_ = self._theta[1:]

        return self

    def fit_sgd(self, X_train, y_train, n_iters=5, t0=5, t1=50):
        """根据训练数据集X_train, y_train, 使用梯度下降法训练Linear Regression模型"""
        assert X_train.shape[0] == y_train.shape[0], \
            "the size of X_train must be equal to the size of y_train"
        assert n_iters >= 1

        def dJ_sgd(theta, X_b_i, y_i):
            return X_b_i * (X_b_i.dot(theta) - y_i) * 2.

        def sgd(X_b, y, initial_theta, n_iters, t0=5, t1=50):

            def learning_rate(t):
                return t0 / (t + t1)

            theta = initial_theta
            m = len(X_b)

            for cur_iter in range(n_iters):
                indexes = np.random.permutation(m)
                X_b_new = X_b[indexes]
                y_new = y[indexes]
                for i in range(m):
                    gradient = dJ_sgd(theta, X_b_new[i], y_new[i])
                    theta = theta - learning_rate(cur_iter * m + i) * gradient

            return theta

        X_b = np.hstack([np.ones((len(X_train), 1)), X_train])
        initial_theta = np.random.randn(X_b.shape[1])
        self._theta = sgd(X_b, y_train, initial_theta, n_iters, t0, t1)

        self.intercept_ = self._theta[0]
        self.coef_ = self._theta[1:]

        return self

    def predict(self, X_predict):
        """给定待预测数据集X_predict,返回表示X_predict的结果向量"""
        assert self.intercept_ is not None and self.coef_ is not None, \
            "must fit before predict!"
        assert X_predict.shape[1] == len(self.coef_), \
            "the feature number of X_predict must be equal to X_train"

        X_b = np.hstack([np.ones((len(X_predict), 1)), X_predict])
        return X_b.dot(self._theta)

    def score(self, X_test, y_test):
        """根据测试数据集 X_test 和 y_test 确定当前模型的准确度"""

        y_predict = self.predict(X_test)
        return r2_score(y_test, y_predict)

    def __repr__(self):
        return "LinearRegression()"


The key implementation detail in fit_gd is that the gradient is computed in fully vectorized form; the commented-out lines show the equivalent component-by-component loop:

        def dJ(theta, X_b, y):
            # res = np.empty(len(theta))
            # res[0] = np.sum(X_b.dot(theta) - y)
            # for i in range(1, len(theta)):
            #     res[i] = (X_b.dot(theta) - y).dot(X_b[:, i])
            # return res * 2 / len(X_b)
            return X_b.T.dot(X_b.dot(theta) - y) * 2. / len(X_b)
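
In matrix form this vectorized gradient is:

$$\nabla J(\theta) = \frac{2}{m}\,X_b^{T}\left(X_b\,\theta - y\right)$$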

4.4 Using Gradient Descent on Real Data

import numpy as np
from sklearn import datasets

# note: load_boston was removed in scikit-learn 1.2; this example assumes an older version
boston = datasets.load_boston()
X = boston.data
y = boston.target

# prices in this dataset are capped at 50.0, so those samples are dropped
X = X[y<50.0]
y = y[y<50.0]

from sklearn.model_selection import train_test_split

X_train,X_test,y_train,y_test = train_test_split(X,y,random_state=666)
lin_reg1 = LinearRegression()
%time lin_reg1.fit_normal(X_train,y_train)
lin_reg1.score(X_test,y_test)


lin_reg2 = LinearRegression()
lin_reg2.fit_gd(X_train,y_train)


lin_reg2.fit_gd(X_train,y_train,eta = 0.000001)


lin_reg2.score(X_test,y_test)
%time lin_reg2.fit_gd(X_train,y_train,eta = 0.000001,n_iters =1e6 )
lin_reg2.score(X_test,y_test)


  • The features differ wildly in scale, so the default learning rate makes the loss blow up; only with eta = 0.000001 does training proceed, and even then it needs far too many iterations to be practical. The fix is to standardize the features first.
from sklearn.preprocessing import StandardScaler
standardscaler = StandardScaler()
standardscaler.fit(X_train)


X_train_standard = standardscaler.transform(X_train)
lin_reg3 = LinearRegression()
%time lin_reg3.fit_gd(X_train_standard,y_train)


X_test_standard = standardscaler.transform(X_test)
lin_reg3.score(X_test_standard,y_test)


4.5 Where Gradient Descent Wins

m = 1000
n = 5000
# 1000 samples and 5000 features: with this many features the normal equation has to build
# and invert a 5001 x 5001 matrix, which is where gradient descent starts to win
big_X = np.random.normal(size=(m, n))
true_theta = np.random.uniform(0.0, 100.0, size=n+1)

# noise drawn from a normal distribution with mean 0 and standard deviation 10
big_Y = big_X.dot(true_theta[1:]) + true_theta[0] + np.random.normal(0., 10., size=m)
big_reg1 = LinearRegression()
%time big_reg1.fit_normal(big_X, big_Y)


big_reg2 = LinearRegression()
%time big_reg2.fit_gd(big_X,big_Y)


5. Stochastic Gradient Descent (SGD)

5.1 Implementation

# the gradient is computed from a single sample (one row), so there is no division by m
def dJ_sgd(theta, X_b_i, y_i):
    return X_b_i.T.dot(X_b_i.dot(theta) - y_i) * 2.
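
Written out, the single-sample gradient for sample i is:

$$\nabla J_i(\theta) = 2\,\left(X_b^{(i)}\right)^{T}\left(X_b^{(i)}\theta - y^{(i)}\right)$$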
 
 

# the SGD learning rate follows the schedule t0 / (t + t1)
# because of the randomness there is no convergence test; we simply run a fixed number of iterations
def sgd(X_b, y, initial_theta, n_iters):

    t0 = 5
    t1 = 50

    def learning_rate(t):
        return t0 / (t + t1)

    theta = initial_theta
    for cur_iter in range(n_iters):
        # pick one sample at random for each step
        rand_i = np.random.randint(len(X_b))
        gradient = dJ_sgd(theta, X_b[rand_i], y[rand_i])
        theta = theta - learning_rate(cur_iter) * gradient

    return theta
m = 100000

x = np.random.normal(size=m)
X = x.reshape(-1, 1)
y = 4. * x + 3. + np.random.normal(0, 3, size=m)

X_b = np.hstack([np.ones((len(x), 1)), X])
initial_theta = np.zeros(X_b.shape[1])

# one third of the sample count is already enough iterations for a reasonable estimate
%time theta = sgd(X_b, y, initial_theta, n_iters=len(X_b)//3)


theta

  • theta comes out close to the intercept 3 and slope 4 we used to generate the data

5.2 How Stochastic Gradient Descent Works

  • Each step computes the gradient of the loss on a single randomly chosen sample and uses it as the search direction; individual steps may not point downhill, but on average the path still moves toward the minimum

  • In stochastic gradient descent the learning rate eta is gradually decreased over the course of the iterations

  • This borrows the idea of simulated annealing: take large steps early and smaller steps later (see the schedule below)
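
The schedule used in the code is:

$$\eta_t = \frac{t_0}{t + t_1}, \qquad t_0 = 5,\ t_1 = 50$$

so the very first steps are about 0.1 and later steps shrink toward zero.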

5.3 A scikit-learn-Style Wrapper for SGD

5.3.1 The Wrapped Class

The LinearRegression class used here is identical to the one listed in Section 4 above (fit_normal, fit_gd, predict and score are unchanged), so only the SGD-specific method fit_sgd is shown again:

    def fit_sgd(self, X_train, y_train, n_iters=5, t0=5, t1=50):
        """Train the Linear Regression model on X_train, y_train using stochastic gradient descent"""
        assert X_train.shape[0] == y_train.shape[0], \
            "the size of X_train must be equal to the size of y_train"
        assert n_iters >= 1

        def dJ_sgd(theta, X_b_i, y_i):
            return X_b_i * (X_b_i.dot(theta) - y_i) * 2.

        def sgd(X_b, y, initial_theta, n_iters, t0=5, t1=50):

            def learning_rate(t):
                return t0 / (t + t1)

            theta = initial_theta
            m = len(X_b)

            # here n_iters means the number of full passes (epochs) over the data
            for cur_iter in range(n_iters):
                # shuffle the sample order: np.random.permutation(m) is a random permutation of 0..m-1
                indexes = np.random.permutation(m)
                # reorder the data according to the shuffled indexes
                X_b_new = X_b[indexes]
                y_new = y[indexes]
                for i in range(m):
                    gradient = dJ_sgd(theta, X_b_new[i], y_new[i])
                    theta = theta - learning_rate(cur_iter * m + i) * gradient

            return theta

        X_b = np.hstack([np.ones((len(X_train), 1)), X_train])
        initial_theta = np.random.randn(X_b.shape[1])
        self._theta = sgd(X_b, y_train, initial_theta, n_iters, t0, t1)

        self.intercept_ = self._theta[0]
        self.coef_ = self._theta[1:]

        return self

import numpy as np
import matplotlib.pyplot as plt

m = 100000
x = np.random.normal(size=m)
X = x.reshape(-1, 1)
y = 4.*x + 3. + np.random.normal(0, 3, size=m)

lin_reg_1 = LinearRegression()
lin_reg_1.fit_sgd(X, y, n_iters=2)  # two epochs; intercept_ should come out near 3 and coef_ near [4]


5.3.2 Fitting on Real Data

from sklearn import datasets

boston = datasets.load_boston()

X = boston.data
y= boston.target

X = X[y<50.0]
y = y[y<50.0]
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X,y,random_state=666)
from sklearn.preprocessing import StandardScaler
standardscaler = StandardScaler()
standardscaler.fit(X_train)
X_train_standard = standardscaler.transform(X_train)
X_test_standard = standardscaler.transform(X_test)
lin_reg_2 = LinearRegression()
%time lin_reg_2.fit_sgd(X_train_standard, y_train, n_iters=2)
lin_reg_2.score(X_test_standard, y_test)


lin_reg_2 = LinearRegression()
%time lin_reg_2.fit_sgd(X_train_standard, y_train, n_iters=100)
lin_reg_2.score(X_test_standard, y_test)


5.4 SGD in scikit-learn

  • scikit-learn's SGDRegressor implements stochastic gradient descent for linear models only
  • from sklearn.linear_model import SGDRegressor
from sklearn.linear_model import SGDRegressor
sgd_reg = SGDRegressor()
%time sgd_reg.fit(X_train_standard,y_train)
sgd_reg.score(X_test_standard,y_test)


# n_iter was renamed max_iter in newer scikit-learn versions
sgd_reg = SGDRegressor(n_iter=100)
%time sgd_reg.fit(X_train_standard,y_train)
sgd_reg.score(X_test_standard,y_test)


6. Debugging Gradient Descent (Gradient Checking)


  • The slope of the tangent line at a point is roughly equal to the slope of the secant line through two nearby points placed on either side of it

  • The smaller the spacing between the two points, the smaller the approximation error

  • This generalizes to each component of theta: perturb one coordinate at a time (see the formula after this list)

  • The drawback is that it is computationally expensive, so it is used to verify the analytic gradient rather than for training
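
In formula form, for each component θ_i (this is what dj_debug in the next subsection computes):

$$\frac{\partial J}{\partial \theta_i} \approx \frac{J(\theta_1,\ldots,\theta_i+\varepsilon,\ldots,\theta_n) - J(\theta_1,\ldots,\theta_i-\varepsilon,\ldots,\theta_n)}{2\varepsilon}$$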

6.1 Implementation

import numpy as np
import matplotlib.pyplot as plt

np.random.seed(666)
X = np.random.random(size=(1000, 10))
# 11 true parameters: the intercept plus one coefficient per feature
true_theta = np.arange(1, 12, dtype=float)
X_b = np.hstack([np.ones((len(X), 1)), X])
y = X_b.dot(true_theta) + np.random.normal(size=1000)
X.shape


def j(theta, X_b, y):
    # loss function
    try:
        return np.sum((y - X_b.dot(theta))**2) / len(X_b)
    except:
        return float('inf')

def dj_math(theta, X_b, y):
    # analytic (vectorized) gradient at theta
    return X_b.T.dot(X_b.dot(theta) - y) * 2. / len(y)

def dj_debug(theta, X_b, y, epsilon=0.01):
    # numerical gradient: central difference in each coordinate
    res = np.empty(len(theta))
    for i in range(len(theta)):
        theta1 = theta.copy()
        theta1[i] += epsilon
        theta2 = theta.copy()
        theta2[i] -= epsilon
        res[i] = (j(theta1, X_b, y) - j(theta2, X_b, y)) / (2 * epsilon)
    return res
def gradient_descent(dj,X_b, y, initial_theta, eta, n_iters=1e4, epsilon=1e-8):

    theta = initial_theta
    cur_iter = 0

    while cur_iter < n_iters:
        gradient = dj(theta, X_b, y)
        last_theta = theta
        theta = theta - eta * gradient
        if (abs(j(theta, X_b, y) - j(last_theta, X_b, y)) < epsilon):
            break

        cur_iter += 1

    return theta
X_b = np.hstack([np.ones((len(X), 1)), X])
initial_theta = np.zeros(X_b.shape[1])
eta = 0.01

# the debug (numerical) gradient is much slower but should give the same result
%time theta = gradient_descent(dj_debug, X_b, y, initial_theta, eta)
theta


%time theta = gradient_descent(dj_math,X_b,y,initial_theta,eta)
theta


  • dj_debug has to evaluate the loss twice for every component of theta at each step, which is why it is much slower; it is useful only as a check on dj_math

7. Summary


  • Stochastic gradient descent
    • can help jump out of local optima
    • runs faster than batch gradient descent
    • randomness is exploited by many algorithms in machine learning, for example:
      • random search; random forests