Linear Regression: Learning and Implementation

The Principle of Linear Regression

Regression means fitting the sample data with a straight line and solving for that line's regression coefficients. The coefficients are then substituted into the line's regression equation, and finally the data to be predicted are plugged into the equation to obtain the prediction.
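
As a quick end-to-end illustration of this workflow, here is a minimal sketch using scikit-learn's built-in estimator on made-up data (this is not the implementation developed below, only the same fit-then-predict pattern):

import numpy as np
from sklearn.linear_model import LinearRegression as SkLinearRegression

# made-up data that roughly follow y = 3x + 4
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=(100, 1))
y = 3.0 * x.ravel() + 4.0 + rng.normal(0, 1, size=100)

reg = SkLinearRegression().fit(x, y)      # fit a line: estimate the regression coefficients
print(reg.coef_, reg.intercept_)          # slope and intercept of the fitted line
print(reg.predict(np.array([[5.0]])))     # plug a new sample into the regression equation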

Pros and Cons of Linear Regression

Pros: the results are easy to interpret, and the computation is not expensive.

Cons: it fits nonlinear data poorly.

Applicable data types: numeric and nominal values.

Analysis of the Linear Regression Algorithm

1. Assume the sample data can be fitted by a straight line.
2. To measure how accurate the regression predictions are, take the difference between the actual value ($y$) and the predicted value ($\hat y$) for each sample and minimize their total.
3. To make the minimization easy to solve, this becomes minimizing $\sum_{i=1}^m (y^i - \hat y^i)^2$.
4. Obtain the regression coefficients corresponding to that minimum.

Simple Linear Regression: the Least Squares Method

Suppose we have found the best-fit line $\hat y = ax + b$. For each sample point $x^i$ there is a corresponding prediction $\hat y^i = ax^i + b$, while the true value is $y^i$. The goal is to find $a$ and $b$ such that $\sum_{i=1}^m (y^i - \hat y^i)^2$, i.e. $\sum_{i=1}^m (y^i - ax^i - b)^2$, is as small as possible.

The objective (loss) function is:

$$J(a,b) = \sum_{i=1}^m (y^i - ax^i - b)^2$$

To make $J(a,b)$ as small as possible we look for its extremum. The unknown parameters are $a$ and $b$, so we take the partial derivative with respect to each of them and set it to zero.

$$\frac{\partial J(a,b)}{\partial a} = 0, \qquad \frac{\partial J(a,b)}{\partial b} = 0$$

Differentiating with respect to $b$

$$\frac{\partial J(a,b)}{\partial b} = \sum_{i=1}^m 2(y^i - ax^i - b)(-1) = 0$$

Dividing both sides by 2 and expanding the sum:

$$\sum_{i=1}^m (y^i - ax^i - b) = \sum_{i=1}^m y^i - a\sum_{i=1}^m x^i - mb = 0$$

Dividing both sides by $m$:

$$\sum_{i=1}^m y^i - a\sum_{i=1}^m x^i - mb = 0 \;\Rightarrow\; \sum_{i=1}^m y^i - a\sum_{i=1}^m x^i = mb \;\Rightarrow\; b = \bar y - a\bar x$$

Differentiating with respect to $a$

$$\frac{\partial J(a,b)}{\partial a} = \sum_{i=1}^m 2(y^i - ax^i - b)(-x^i) = 0 \;\Rightarrow\; \sum_{i=1}^m (y^i - ax^i - b)\,x^i = 0$$

Substituting $b = \bar y - a\bar x$:

$$\sum_{i=1}^m (y^i - ax^i - \bar y + a\bar x)\,x^i = 0 \;\Rightarrow\; \sum_{i=1}^m \big(y^i x^i - a(x^i)^2 - \bar y x^i + a\bar x x^i\big) = 0 \;\Rightarrow\; \sum_{i=1}^m (y^i x^i - \bar y x^i) - a\sum_{i=1}^m \big((x^i)^2 - \bar x x^i\big) = 0 \;\Rightarrow\; \sum_{i=1}^m (y^i x^i - \bar y x^i) = a\sum_{i=1}^m \big((x^i)^2 - \bar x x^i\big)$$

Finally we obtain the expression for $a$. In the middle step below we use the facts $\sum_{i=1}^m \bar x y^i = \sum_{i=1}^m \bar x\bar y = m\bar x\bar y$ and $\sum_{i=1}^m \bar x x^i = \sum_{i=1}^m \bar x^2 = m\bar x^2$, so the extra terms added to the numerator and the denominator each sum to zero:

$$a = \frac{\sum_{i=1}^m (y^i x^i - \bar y x^i)}{\sum_{i=1}^m \big((x^i)^2 - \bar x x^i\big)} = \frac{\sum_{i=1}^m (y^i x^i - \bar y x^i - \bar x y^i + \bar x\bar y)}{\sum_{i=1}^m \big((x^i)^2 - \bar x x^i - \bar x x^i + \bar x^2\big)} = \frac{\sum_{i=1}^m (x^i - \bar x)(y^i - \bar y)}{\sum_{i=1}^m (x^i - \bar x)^2}$$

To vectorize the expression for $a$ (so it can be computed as a dot product of vectors):

$$\sum_{i=1}^m w^i v^i \;\Rightarrow\; W \bullet V, \quad \text{where } W = (w^1, w^2, \ldots, w^m),\; V = (v^1, v^2, \ldots, v^m)$$

$$a = \frac{\sum_{i=1}^m (x^i - \bar x)(y^i - \bar y)}{\sum_{i=1}^m (x^i - \bar x)^2} \;\Rightarrow\; \frac{(X - \bar x) \bullet (Y - \bar y)}{(X - \bar x) \bullet (X - \bar x)}$$

where $X = (x^1, \ldots, x^m)$ and $Y = (y^1, \ldots, y^m)$ are the vectors of all sample values, and $\bar x$, $\bar y$ are subtracted componentwise.
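
To make the vectorized formula concrete, here is a minimal numpy sketch (the five sample points are made up) that computes $a$ and $b$ directly with dot products:

import numpy as np

x = np.array([1., 2., 3., 4., 5.])
y = np.array([1., 3., 2., 3., 5.])

x_mean, y_mean = np.mean(x), np.mean(y)

# a = sum((x_i - x_mean)(y_i - y_mean)) / sum((x_i - x_mean)^2), written as dot products
a = (x - x_mean).dot(y - y_mean) / (x - x_mean).dot(x - x_mean)
b = y_mean - a * x_mean
print(a, b)  # slope and intercept of the least-squares line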

Simple Linear Regression: Code Implementation

import numpy as np

class LinearRegression():

    def __init__(self):
        '''Initialize the LinearRegression model'''
        self.coef_ = None
        self.intercept_ = None

    def fit(self, x_train, y_train):
        '''Fit the model: compute the regression coefficient and the intercept'''
        # means of x and y
        x_mean = np.mean(x_train)
        y_mean = np.mean(y_train)

        # numerator: (x - x_mean) . (y - y_mean)
        num = (x_train - x_mean).dot(y_train - y_mean)
        # denominator: (x - x_mean) . (x - x_mean)
        d = (x_train - x_mean).dot(x_train - x_mean)
        self.coef_ = num / d
        self.intercept_ = y_mean - self.coef_ * x_mean

        return self

    def predict(self, x_test):
        '''Predict every value in x_test with the fitted line'''
        y_predict = [self.coef_ * x + self.intercept_ for x in x_test]

        return np.array(y_predict)

    def __repr__(self):
        return 'LinearRegression(vector)'
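
A usage sketch for the class above (toy data made up for illustration):

import numpy as np

x_train = np.array([1., 2., 3., 4., 5.])
y_train = np.array([1., 3., 2., 3., 5.])

reg = LinearRegression()
reg.fit(x_train, y_train)
print(reg.coef_, reg.intercept_)        # learned slope and intercept
print(reg.predict(np.array([6., 7.])))  # predictions for new x values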
        

Multiple Linear Regression: Analysis and Implementation

Analysis of Multiple Linear Regression

Assume the data are fitted by the linear function $y$:

$$y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \cdots + \theta_n x_n$$

Note: $\hat y^i = \theta_0 + \theta_1 x_1^i + \theta_2 x_2^i + \cdots + \theta_n x_n^i$, where $x^i = (x_1^i, x_2^i, x_3^i, \ldots, x_n^i)$, $i$ denotes the $i$-th sample, $n$ is the number of features (dimensions) of sample $x^i$, and $\theta = (\theta_0, \theta_1, \theta_2, \ldots, \theta_n)$.

So once a set of $\theta$ values has been obtained, the prediction $\hat y$ for a new sample can be computed.

Rewrite $\hat y$ as $\hat y^i = \theta_0 x_0^i + \theta_1 x_1^i + \theta_2 x_2^i + \cdots + \theta_n x_n^i$, where $x_0^i = 1$.

Then the $i$-th sample $x^i$ can be represented as:

$$x^i = (x_0^i, x_1^i, x_2^i, \ldots, x_n^i) = (1, x_1^i, x_2^i, \ldots, x_n^i)$$

where $\theta$ is a column vector:

$$\theta = (\theta_0, \theta_1, \theta_2, \ldots, \theta_n)^T$$

The prediction $\hat y^i$ for each sample is then:

$$\hat y^i = x^i \bullet \theta$$

The whole sample set can be represented by the matrix:

$$X_b = \begin{pmatrix} 1 & x_1^1 & x_2^1 & \cdots & x_n^1 \\ 1 & x_1^2 & x_2^2 & \cdots & x_n^2 \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_1^m & x_2^m & \cdots & x_n^m \end{pmatrix}$$

Then $\hat y$ can be written as a matrix operation:

$$\hat y = X_b \bullet \theta$$

We want the objective (loss) function to be as small as possible:

$$J(\theta) = \sum_{i=1}^m (y^i - \hat y^i)^2 = (y - X_b \bullet \theta)^T \bullet (y - X_b \bullet \theta)$$

Setting the derivative of $J(\theta)$ with respect to $\theta$ to 0 and solving gives $\theta$ (the normal equation):

$$\theta = (X_b^T \bullet X_b)^{-1} \bullet X_b^T \bullet y$$

Problem: the time complexity is high, $O(n^3)$.
Note: $^{-1}$ denotes the matrix inverse; $\theta$ is the column vector of parameters.
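
Before wrapping this in a class, the normal equation can be checked directly with numpy on made-up data (a sketch; in practice np.linalg.lstsq or np.linalg.pinv is numerically more robust than an explicit inverse):

import numpy as np

# made-up data: y is roughly 4 + 3*x1 - 2*x2 plus noise
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 2))
y = 4 + 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 0.5, size=100)

X_b = np.hstack([np.ones((len(X), 1)), X])               # prepend the x0 = 1 column
theta = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y)  # (X_b^T X_b)^{-1} X_b^T y
print(theta)  # approximately [4, 3, -2]: intercept followed by the coefficients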

Multiple Linear Regression (Normal Equation): Wrapped Implementation

import numpy as np
from sklearn.metrics import r2_score

class LinearRegression2():
    def __init__(self):
        '''Initialize the model'''
        self.coef_ = None
        self.intercept_ = None
        self.theta_ = None

    def fit(self, X_train, y_train):
        '''Fit the model using the normal equation'''
        assert X_train.shape[0] == y_train.shape[0], 'The size of X_train must be equal to the size of y_train'

        # prepend a column of ones so that theta_[0] is the intercept
        X_b = np.hstack([np.ones((len(X_train), 1)), X_train])

        # normal equation: theta = (X_b^T X_b)^{-1} X_b^T y
        self.theta_ = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y_train)
        self.coef_ = self.theta_[1:]
        self.intercept_ = self.theta_[0]
        return self

    def predict(self, X_test):
        assert self.intercept_ is not None and self.coef_ is not None, 'Must fit before predict!'
        assert X_test.shape[1] == len(self.coef_), 'The feature number of X_test must be equal to the length of self.coef_'

        X_b = np.hstack([np.ones((len(X_test), 1)), X_test])
        y_predict = X_b.dot(self.theta_)

        return y_predict

    def score(self, X_test, y_test):
        '''R^2 of the predictions on X_test against y_test'''
        y_predict = self.predict(X_test)
        return r2_score(y_test, y_predict)

    def __repr__(self):
        return 'LinearRegression(mat)'
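
A usage sketch for LinearRegression2 on similar made-up data:

import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(200, 2))
y = 4 + 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 0.5, size=200)

X_train, X_test = X[:150], X[150:]
y_train, y_test = y[:150], y[150:]

reg = LinearRegression2()
reg.fit(X_train, y_train)
print(reg.coef_, reg.intercept_)  # close to [3, -2] and 4
print(reg.score(X_test, y_test))  # R^2 on the held-out samples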

Multiple Linear Regression Implementation (Batch / Stochastic / Mini-Batch Gradient Descent)

$\star$ Note: when gradient descent is used to find the minimum, the data features must be normalized first.
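
For example, a sketch using scikit-learn's StandardScaler (any equivalent standardization works) to scale the features before calling the gradient-descent fits below:

import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X_train = rng.uniform(0, 100, size=(200, 3))   # raw features, possibly on very different scales

scaler = StandardScaler()
X_train_std = scaler.fit_transform(X_train)    # zero mean, unit variance per feature
# at prediction time, reuse the same statistics: X_test_std = scaler.transform(X_test)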

import numpy as np
from sklearn.metrics import r2_score

class LinearRegression6:

    def __init__(self):
        """初始化Linear Regression模型"""
        self.coef_ = None
        self.intercept_ = None
        self._theta = None

    def fit_normal(self, X_train, y_train):
        """根据训练数据集X_train, y_train训练Linear Regression模型"""
        assert X_train.shape[0] == y_train.shape[0], \
            "the size of X_train must be equal to the size of y_train"

        X_b = np.hstack([np.ones((len(X_train), 1)), X_train])
        self._theta = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y_train)

        self.intercept_ = self._theta[0]
        self.coef_ = self._theta[1:]

        return self

    def fit_gd(self, X_train, y_train, eta=0.01, n_iters=1e4):
        """根据训练数据集X_train, y_train, 使用梯度下降法训练Linear Regression模型"""
        assert X_train.shape[0] == y_train.shape[0], \
            "the size of X_train must be equal to the size of y_train"

        def J(theta, X_b, y):
            try:
                return np.sum((y - X_b.dot(theta)) ** 2) / len(y)
            except:
                return float('inf')

        def dJ(theta, X_b, y):
            return X_b.T.dot(X_b.dot(theta) - y) * 2. / len(X_b)

        def gradient_descent(X_b, y, initial_theta, eta, n_iters=1e4, epsilon=1e-8):

            theta = initial_theta
            cur_iter = 0

            while cur_iter < n_iters:
                gradient = dJ(theta, X_b, y)
                last_theta = theta
                theta = theta - eta * gradient
                if (abs(J(theta, X_b, y) - J(last_theta, X_b, y)) < epsilon):
                    break

                cur_iter += 1

            return theta

        X_b = np.hstack([np.ones((len(X_train), 1)), X_train])
        initial_theta = np.zeros(X_b.shape[1])
        self._theta = gradient_descent(X_b, y_train, initial_theta, eta, n_iters)

        self.intercept_ = self._theta[0]
        self.coef_ = self._theta[1:]

        return self

    def fit_sgd(self, X_train, y_train, n_iters=5, t0=5, t1=50):
        """Fit the Linear Regression model on X_train, y_train using stochastic gradient descent; n_iters is the number of passes over the training set"""
        assert X_train.shape[0] == y_train.shape[0], \
            "the size of X_train must be equal to the size of y_train"
        assert n_iters >= 1

        def J(theta, X_b, y):
            try:
                return np.sum((y - X_b.dot(theta)) ** 2) / len(y)
            except:
                return float('inf')

        def dJ_sgd(theta, X_b_i, y_i):
            return X_b_i * (X_b_i.dot(theta) - y_i) * 2.

        def sgd(X_b, y, initial_theta, n_iters, t0=5, t1=50):

            def learning_rate(t):
                return t0 / (t + t1)

            theta = initial_theta
            m = len(X_b)

            for cur_iter in range(n_iters):
                indexes = np.random.permutation(m)
                X_b_new = X_b[indexes]
                y_new = y[indexes]
                for i in range(m):
                    gradient = dJ_sgd(theta, X_b_new[i], y_new[i])
                    last_theta = theta
                    theta = theta - learning_rate(cur_iter * m + i) * gradient
            return theta

        X_b = np.hstack([np.ones((len(X_train), 1)), X_train])
        initial_theta = np.random.randn(X_b.shape[1])
        self._theta = sgd(X_b, y_train, initial_theta, n_iters, t0, t1)

        self.intercept_ = self._theta[0]
        self.coef_ = self._theta[1:]

        return self

    def fit_ssgd(self, X_train, y_train, n_iters=5, t0=5, t1=50, k=10):
        '''Fit the Linear Regression model using mini-batch gradient descent; k is the size of each mini-batch'''
        def dJ_ssgd(theta, X_b_new, y_new):
            return X_b_new.T.dot(X_b_new.dot(theta) - y_new) * 2. / len(X_b_new)

        def ssgd(X_b_list, y_list, initial_theta, n_iters, t0=5, t1=50):

            def learning_rate(t):
                return t0 / (t + t1)

            theta = initial_theta
            m = len(X_b)

            for cur_iter in range(n_iters):
                    for i in range(int(m/k)):
                        gradient = dJ_ssgd(theta, X_b_list[i], y_list[i])
                        theta = theta - learning_rate(cur_iter * m + i) * gradient

            return theta

        def X_b_split(X_b, y):
            m = len(X_b)
            num = int(len(X_b)/k)
            indexes = np.random.permutation(m)
            X_b = X_b[indexes]
            y = y[indexes]
            X_b_list = []
            y_list = []
            for i in range(num):
                start = i * k
                stop = (i+1)*k
                X_b_list.append(X_b[start:stop])
                y_list.append(y[start:stop])
            return X_b_list, y_list

        X_b = np.hstack([np.ones((len(X_train), 1)), X_train])
        X_b_list, y_list = X_b_split(X_b, y_train)
        initial_theta = np.random.randn(X_b.shape[1])
        self._theta = ssgd(X_b_list, y_list, initial_theta, n_iters, t0, t1)

        self.intercept_ = self._theta[0]
        self.coef_ = self._theta[1:]

        return self 


    def predict(self, X_predict):
        """给定待预测数据集X_predict,返回表示X_predict的结果向量"""
        assert self.intercept_ is not None and self.coef_ is not None, \
            "must fit before predict!"
        assert X_predict.shape[1] == len(self.coef_), \
            "the feature number of X_predict must be equal to X_train"

        X_b = np.hstack([np.ones((len(X_predict), 1)), X_predict])
        return X_b.dot(self._theta)

    def score(self, X_test, y_test):
        """根据测试数据集 X_test 和 y_test 确定当前模型的准确度"""

        y_predict = self.predict(X_test)
        return r2_score(y_test, y_predict)

    def __repr__(self):
        return "LinearRegression()"

These study notes are based on:
《机器学习实战》 (Machine Learning in Action) and 《Python3入门机器学习 经典算法与应用》 (Python 3 Machine Learning: Classic Algorithms and Applications)
