黑马程序员 — Three-Day Quick Start to Python Machine Learning (Part 5)

This article walks through the principles of linear regression and gradient descent and applies them to Boston house price prediction. It compares the normal-equation and gradient-descent optimizers and shows each model's predictions, then introduces underfitting and overfitting and presents ridge regression as an improvement over plain linear regression that mitigates overfitting. Experiments confirm ridge regression's advantage on the Boston housing data.

Linear Regression

Principles of Linear Regression

  1. Application scenarios (the target value is continuous):
  • House price prediction
  • Sales volume prediction
  2. Definition
    Linear regression models the relationship between one or more independent variables (features) and a dependent variable (target) with a regression equation. With a single independent variable it is called univariate regression; with more than one, multiple regression.
  3. Formula
    h(w) = w1·x1 + w2·x2 + … + wn·xn + b = wᵀx + b
  4. Two kinds of linear models
  • Linear relationships (the independent variables are first-degree)
    1) A single feature and the target form a straight line.
    2) Two features and the target form a plane.
    3) Higher-dimensional analogues exist as well. (Just remember the idea.)

  • Nonlinear relationships (the parameters are first-degree), e.g. h(w) = w1·x1 + w2·x1² + w3·x1³ + … + b

Loss and Optimization in Linear Regression

  1. Loss function (least squares):
    J(w) = (h(x1) − y1)² + (h(x2) − y2)² + … + (h(xm) − ym)²
    i.e. the sum of squared differences between the predictions and the true target values.
  • We can reduce the total loss with optimization methods.
  2. Optimization algorithms
  • Normal equation: solves for the minimizing weights directly.
    w = (XᵀX)⁻¹ Xᵀ y
    where X is the feature matrix and y the target vector.

  • Gradient descent: starts from initial weights and repeatedly steps along the negative gradient of the loss.
    w := w − α · ∂J(w)/∂w
    where α is the learning rate, which controls the step size.
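The normal equation can be sketched in a few lines of NumPy. The data here is a tiny made-up example (not the Boston dataset), generated so the exact solution is known:

```python
import numpy as np

# Tiny made-up dataset generated from y = 2*x1 + 3*x2 + 1 (no noise)
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0], [4.0, 5.0]])
y = 2 * X[:, 0] + 3 * X[:, 1] + 1

# Prepend a column of ones so the bias is learned as an extra weight
Xb = np.hstack([np.ones((X.shape[0], 1)), X])

# Normal equation: w = (X^T X)^(-1) X^T y -- one direct computation, no iterating
w = np.linalg.inv(Xb.T @ Xb) @ Xb.T @ y
print(w)  # recovers bias 1 and weights 2, 3
```

Because the toy data is exactly linear, the recovered weights match the generating coefficients up to floating-point error.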

  • Normal equation vs. gradient descent

    Gradient descent                      Normal equation
    ------------------------------------  ------------------------------------
    Must choose a learning rate           No learning rate needed
    Solves iteratively                    Solves in one computation
    Usable when features are numerous     Must solve the equation; high time complexity
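For contrast with the table above, here is a minimal gradient-descent loop in plain Python on a single feature; the learning rate and iteration count are hand-picked for this toy data:

```python
# Toy data generated from y = 3*x + 2; gradient descent should recover w=3, b=2
xs = [0.0, 1.0, 2.0, 3.0]
ys = [2.0, 5.0, 8.0, 11.0]

w, b = 0.0, 0.0
lr = 0.05              # learning rate: must be chosen, unlike with the normal equation
for _ in range(5000):  # iterative solving: many small steps instead of one solve
    # Gradients of the mean squared loss with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 3), round(b, 3))  # approaches 3.0 and 2.0
```

Both optimizers reach the same minimum here; the difference is purely in how they get there.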

Linear Regression API

  1. Normal equation
    sklearn.linear_model.LinearRegression(fit_intercept=True)
  • fit_intercept: whether to fit the bias term
  • LinearRegression.coef_: regression coefficients
  • LinearRegression.intercept_: bias
  2. Gradient descent
    sklearn.linear_model.SGDRegressor(loss="squared_loss", fit_intercept=True, learning_rate='invscaling', eta0=0.01)
  • The SGDRegressor class implements stochastic gradient descent learning; it supports different loss functions and regularization penalties for fitting linear regression models.

  • loss: the loss type
    loss="squared_loss": ordinary least squares

  • fit_intercept: whether to fit the bias term

  • learning_rate: string, optional
    The learning-rate schedule:
    'invscaling': eta = eta0 / pow(t, power_t) [default]
    'optimal': eta = 1.0 / (alpha * (t + t0))
    'constant': eta = eta0 (a constant learning rate)
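The schedules above can be compared with a small sketch (power_t=0.25 mirrors scikit-learn's default for SGDRegressor; the step numbers t are hypothetical):

```python
# How each learning-rate schedule evolves as the step number t grows
eta0, power_t = 0.01, 0.25

def eta_constant(t):
    return eta0                   # 'constant': never changes

def eta_invscaling(t):
    return eta0 / (t ** power_t)  # 'invscaling': decays as training progresses

for t in (1, 10, 100):
    print(t, eta_constant(t), round(eta_invscaling(t), 5))
```

Decaying schedules take large steps early and small careful steps as the weights approach the minimum.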

  • SGDRegressor.coef_: regression coefficients

  • SGDRegressor.intercept_: bias

Boston House Price Prediction

  1. The data
    (figures: descriptions of the Boston housing dataset's features and target)

  2. Workflow
    1) Load the dataset
    2) Split the dataset
    3) Feature engineering: dimensionless scaling via standardization
    4) Estimator flow: fit() --> model: coef_ and intercept_
    5) Model evaluation

  3. Regression performance evaluation

  • Mean Squared Error (MSE)
    MSE = (1/m) · Σᵢ (yᵢ − ŷᵢ)², the mean of the squared differences between true and predicted values.
    The model with the smaller mean squared error predicts better.
  • Mean squared error regression loss:
    sklearn.metrics.mean_squared_error(y_true, y_pred)
    y_true: true values
    y_pred: predicted values
    return: a float
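Before the full script, the MSE formula can be checked by hand on four made-up value pairs:

```python
# Mean squared error computed directly from the formula: the average squared error
y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5, 0.0, 2.0, 8.0]

mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
print(mse)  # (0.25 + 0.25 + 0.0 + 1.0) / 4 = 0.375
```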
from sklearn.datasets import load_boston  # note: load_boston was removed in scikit-learn 1.2
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression, SGDRegressor
from sklearn.metrics import mean_squared_error


def linner1():
    """
    Predict Boston house prices with the normal-equation optimizer.
    :return:
    """
    # 1) Load the data
    boston = load_boston()

    # 2) Split the dataset
    x_train, x_test, y_train, y_test = train_test_split(boston.data, boston.target, random_state=22)

    # 3) Standardize
    transfer = StandardScaler()
    x_train = transfer.fit_transform(x_train)
    x_test = transfer.transform(x_test)

    # 4) Estimator
    estimator = LinearRegression()
    estimator.fit(x_train, y_train)

    # 5) Inspect the model
    print("Normal equation weight coefficients:\n", estimator.coef_)
    print("Normal equation bias:\n", estimator.intercept_)

    # 6) Evaluate the model
    y_predict = estimator.predict(x_test)
    print("Predicted house prices:\n", y_predict)
    error = mean_squared_error(y_test, y_predict)
    print("Normal equation mean squared error:\n", error)

    return None


def linner2():
    """
    Predict Boston house prices with the gradient-descent optimizer.
    :return:
    """
    # 1) Load the data
    boston = load_boston()
    print("Number of features:\n", boston.data.shape)  # one weight coefficient per feature

    # 2) Split the dataset
    x_train, x_test, y_train, y_test = train_test_split(boston.data, boston.target, random_state=22)

    # 3) Standardize
    transfer = StandardScaler()
    x_train = transfer.fit_transform(x_train)
    x_test = transfer.transform(x_test)

    # 4) Estimator
    estimator = SGDRegressor(learning_rate="constant", eta0=0.001, max_iter=10000)  # tunable hyperparameters, e.g. learning rate and iteration count
    estimator.fit(x_train, y_train)

    # 5) Inspect the model
    print("Gradient descent weight coefficients:\n", estimator.coef_)
    print("Gradient descent bias:\n", estimator.intercept_)

    # 6) Evaluate the model
    y_predict = estimator.predict(x_test)
    print("Predicted house prices:\n", y_predict)
    error = mean_squared_error(y_test, y_predict)
    print("Gradient descent mean squared error:\n", error)

    return None


if __name__ == '__main__':
    linner1()
    linner2()

Output

Normal equation weight coefficients:
 [-0.64817766  1.14673408 -0.05949444  0.74216553 -1.95515269  2.70902585
 -0.07737374 -3.29889391  2.50267196 -1.85679269 -1.75044624  0.87341624
 -3.91336869]
Normal equation bias:
 22.62137203166228
Predicted house prices:
 [28.22944896 31.5122308  21.11612841 32.6663189  20.0023467  19.07315705
 21.09772798 19.61400153 19.61907059 32.87611987 20.97911561 27.52898011
 15.54701758 19.78630176 36.88641203 18.81202132  9.35912225 18.49452615
 30.66499315 24.30184448 19.08220837 34.11391208 29.81386585 17.51775647
 34.91026707 26.54967053 34.71035391 27.4268996  19.09095832 14.92742976
 30.86877936 15.88271775 37.17548808  7.72101675 16.24074861 17.19211608
  7.42140081 20.0098852  40.58481466 28.93190595 25.25404307 17.74970308
 38.76446932  6.87996052 21.80450956 25.29110265 20.427491   20.4698034
 17.25330064 26.12442519  8.48268143 27.50871869 30.58284841 16.56039764
  9.38919181 35.54434377 32.29801978 21.81298945 17.60263689 22.0804256
 23.49262401 24.10617033 20.1346492  38.5268066  24.58319594 19.78072415
 13.93429891  6.75507808 42.03759064 21.9215625  16.91352899 22.58327744
 40.76440704 21.3998946  36.89912238 27.19273661 20.97945544 20.37925063
 25.3536439  22.18729123 31.13342301 20.39451125 23.99224334 31.54729547
 26.74581308 20.90199941 29.08225233 21.98331503 26.29101202 20.17329401
 25.49225305 24.09171045 19.90739221 16.35154974 15.25184758 18.40766132
 24.83797801 16.61703662 20.89470344 26.70854061 20.7591883  17.88403312
 24.28656105 23.37651493 21.64202047 36.81476219 15.86570054 21.42338732
 32.81366203 33.74086414 20.61688336 26.88191023 22.65739323 17.35731771
 21.67699248 21.65034728 27.66728556 25.04691687 23.73976625 14.6649641
 15.17700342  3.81620663 29.18194848 20.68544417 22.32934783 28.01568563
 28.58237108]
Normal equation mean squared error:
 20.627513763095408
Number of features:
 (506, 13)
Gradient descent weight coefficients:
 [-0.46206399  0.84713026 -0.47875122  0.83036788 -1.52055831  2.87446469
 -0.17521443 -2.97520071  1.4709828  -0.75574941 -1.69495232  0.88158857
 -3.86770122]
Gradient descent bias:
 [22.64760224]
Predicted house prices:
 [28.31879886 31.56413807 21.52726312 32.7670082  20.3396125  19.24173607
 21.40515665 19.48207626 19.76011323 32.77735217 21.39309717 27.24344861
 15.63885441 20.04451852 37.06452776 18.57045119  9.9595429  18.68674146
 30.82478789 24.2796993  19.19449712 34.08345904 29.56378238 17.48420584
 34.73215974 26.37145183 34.16508127 27.38034981 19.26117323 16.01557632
 30.82728471 14.24023838 37.47775241  9.28490074 16.48280367 16.78167142
  8.05184352 19.82144873 40.57976556 29.2890257  25.27863951 18.00569158
 39.69901265  6.78935895 21.55553366 24.93770541 21.26228886 20.75459706
 17.05900301 26.49871406  9.89083496 27.05549062 30.75459034 16.90051099
  9.75721989 35.44590673 31.218767   23.16820307 17.62483946 21.9076084
 23.58849117 23.84273534 20.46256188 38.06233155 25.92680469 19.76199429
 14.3491459   6.84168817 42.70973028 21.80708401 16.64504055 22.72022166
 40.98586786 21.79546088 36.90787845 27.10219436 22.10599923 20.70386568
 25.34551093 24.17293697 31.52286673 20.16583151 24.10149121 31.40725452
 27.30015283 21.0082161  28.97021037 22.12378301 26.7810135  18.54334303
 25.01988629 24.01051511 20.08928232 18.50830978 15.65357369 18.36144387
 24.44491506 16.86697175 20.67646615 26.80015495 20.80864857 18.19182048
 24.1053657  23.18345733 20.07450442 36.50906194 16.09943858 22.64478424
 32.59564983 33.78187312 20.55698269 25.70424168 23.7725811  17.84028696
 21.49143562 21.80535244 27.49742328 25.30872574 23.61349363 14.49905948
 15.91023595  3.75518602 29.16506193 20.80557613 22.26095734 28.01577887
 28.26397516]
Gradient descent mean squared error:
 21.549673158012922

Underfitting and Overfitting

What Are Underfitting and Overfitting?

  1. Intuition
  • Underfitting: the fitted curve is too simple to follow the trend of the data.
  • Overfitting: the fitted curve chases every training point, noise included.
  2. Definitions
  • Underfitting: a hypothesis that cannot fit the training data well and also fails to fit the test data well; we say the hypothesis underfits (the model is too simple).
  • Overfitting: a hypothesis that fits the training data better than other hypotheses but fails to fit the test data well; we say the hypothesis overfits (the model is too complex).
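Both failure modes can be reproduced with NumPy polynomial fits; the data and polynomial degrees here are made up for illustration:

```python
import numpy as np

# 10 training points from a noisy nonlinear curve (noise values fixed for reproducibility)
x_train = np.linspace(0.0, 1.0, 10)
noise = np.array([0.1, -0.05, 0.08, -0.1, 0.05, -0.08, 0.1, -0.05, 0.08, -0.1])
y_train = np.sin(2 * np.pi * x_train) + noise

# Fresh test points from the same underlying curve
x_test = np.linspace(0.05, 0.95, 10)
y_test = np.sin(2 * np.pi * x_test)

def mse(coeffs, x, y):
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

under = np.polyfit(x_train, y_train, 1)  # degree 1: too simple, underfits
over = np.polyfit(x_train, y_train, 9)   # degree 9: interpolates all 10 points, overfits

print("underfit train/test MSE:", mse(under, x_train, y_train), mse(under, x_test, y_test))
print("overfit  train/test MSE:", mse(over, x_train, y_train), mse(over, x_test, y_test))
```

The underfit line has a large error on both sets; the overfit polynomial has near-zero training error but a much larger error on the unseen test points.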

Causes and Remedies

  1. Underfitting
  • Cause: too few of the data's features were learned.
  • Remedy: increase the number of data features.
  2. Overfitting
  • Cause: too many original features, some of them noisy; the model becomes overly complex because it tries to accommodate every data point.
  • Remedy: regularization, which shrinks the weights of noisy or high-order terms so they contribute less to the model.

Regularization Categories

  • L2 regularization (more commonly used)
    ° Effect: makes some of the weights W very small, close to 0, weakening the influence of those features
    ° Advantage: smaller parameters mean a simpler model, and a simpler model is less prone to overfitting
    ° Ridge regression
    ° Loss function with L2 regularization: loss function + λ · penalty term, i.e.
      J(w) = MSE(w) + λ Σⱼ wⱼ²

  • L1 regularization:
    ° Effect: can drive some weights W exactly to 0, removing those features' influence entirely
    ° LASSO regression

Improving Linear Regression — Ridge Regression

Linear Regression with L2 Regularization: Ridge Regression

Ridge regression is still a form of linear regression. It simply adds an L2 regularization constraint when the regression equation is built, which mitigates overfitting.

API

sklearn.linear_model.Ridge(alpha=1.0, fit_intercept=True, solver='auto', normalize=False)

  • alpha: regularization strength (the penalty coefficient λ); common ranges are 0–1 and 1–10
  • solver: automatically selects an optimization method based on the data
    sag: chosen when both the dataset and the feature count are large (stochastic average gradient descent)
  • normalize: whether to standardize the data
    normalize=False: standardize the data yourself with preprocessing.StandardScaler before calling fit
  • Ridge.coef_: regression weights
  • Ridge.intercept_: regression bias

Ridge is equivalent to SGDRegressor(penalty='l2', loss="squared_loss"), except that SGDRegressor implements plain stochastic gradient descent; Ridge, which implements SAG, is recommended.

Effect of the Regularization Strength
(figure: weight coefficients plotted against the regularization strength)

  • The greater the regularization strength, the smaller the weight coefficients become.
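This shrinkage can be verified with the ridge closed-form solution w = (XᵀX + λI)⁻¹Xᵀy on made-up data (a NumPy sketch, not scikit-learn's actual solver):

```python
import numpy as np

# Made-up regression data: 30 samples, 3 features, known true weights plus noise
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=30)

def ridge_weights(lam):
    # Closed form with the L2 penalty folded in: (X^T X + lam*I)^(-1) X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# The L2 norm of the weight vector shrinks as the penalty lam grows
norms = [float(np.linalg.norm(ridge_weights(lam))) for lam in (0.0, 1.0, 10.0, 100.0)]
print(norms)
```

With lam = 0 this reduces to the ordinary normal equation; each larger lam pulls the weights further toward zero.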
from sklearn.datasets import load_boston  # note: load_boston was removed in scikit-learn 1.2
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression, SGDRegressor, Ridge
from sklearn.metrics import mean_squared_error


def linner1():
    """
    Predict Boston house prices with the normal-equation optimizer.
    :return:
    """
    # 1) Load the data
    boston = load_boston()

    # 2) Split the dataset
    x_train, x_test, y_train, y_test = train_test_split(boston.data, boston.target, random_state=22)

    # 3) Standardize
    transfer = StandardScaler()
    x_train = transfer.fit_transform(x_train)
    x_test = transfer.transform(x_test)

    # 4) Estimator
    estimator = LinearRegression()
    estimator.fit(x_train, y_train)

    # 5) Inspect the model
    print("Normal equation weight coefficients:\n", estimator.coef_)
    print("Normal equation bias:\n", estimator.intercept_)

    # 6) Evaluate the model
    y_predict = estimator.predict(x_test)
    print("Predicted house prices:\n", y_predict)
    error = mean_squared_error(y_test, y_predict)
    print("Normal equation mean squared error:\n", error)

    return None


def linner2():
    """
    Predict Boston house prices with the gradient-descent optimizer.
    :return:
    """
    # 1) Load the data
    boston = load_boston()
    print("Number of features:\n", boston.data.shape)  # one weight coefficient per feature

    # 2) Split the dataset
    x_train, x_test, y_train, y_test = train_test_split(boston.data, boston.target, random_state=22)

    # 3) Standardize
    transfer = StandardScaler()
    x_train = transfer.fit_transform(x_train)
    x_test = transfer.transform(x_test)

    # 4) Estimator
    estimator = SGDRegressor(learning_rate="constant", eta0=0.001, max_iter=10000)
    estimator.fit(x_train, y_train)

    # 5) Inspect the model
    print("Gradient descent weight coefficients:\n", estimator.coef_)
    print("Gradient descent bias:\n", estimator.intercept_)

    # 6) Evaluate the model
    y_predict = estimator.predict(x_test)
    print("Predicted house prices:\n", y_predict)
    error = mean_squared_error(y_test, y_predict)
    print("Gradient descent mean squared error:\n", error)

    return None


def linner3():
    """
    Predict Boston house prices with ridge regression.
    :return:
    """
    # 1) Load the data
    boston = load_boston()
    print("Number of features:\n", boston.data.shape)  # one weight coefficient per feature

    # 2) Split the dataset
    x_train, x_test, y_train, y_test = train_test_split(boston.data, boston.target, random_state=22)

    # 3) Standardize
    transfer = StandardScaler()
    x_train = transfer.fit_transform(x_train)
    x_test = transfer.transform(x_test)

    # 4) Estimator
    estimator = Ridge(alpha=0.5, max_iter=10000)  # tunable hyperparameters
    estimator.fit(x_train, y_train)

    # 5) Inspect the model
    print("Ridge regression weight coefficients:\n", estimator.coef_)
    print("Ridge regression bias:\n", estimator.intercept_)

    # 6) Evaluate the model
    y_predict = estimator.predict(x_test)
    print("Predicted house prices:\n", y_predict)
    error = mean_squared_error(y_test, y_predict)
    print("Ridge regression mean squared error:\n", error)

    return None


if __name__ == '__main__':
    linner1()
    linner2()
    linner3()

Output

Normal equation weight coefficients:
 [-0.64817766  1.14673408 -0.05949444  0.74216553 -1.95515269  2.70902585
 -0.07737374 -3.29889391  2.50267196 -1.85679269 -1.75044624  0.87341624
 -3.91336869]
Normal equation bias:
 22.62137203166228
Predicted house prices:
 [28.22944896 31.5122308  21.11612841 32.6663189  20.0023467  19.07315705
 21.09772798 19.61400153 19.61907059 32.87611987 20.97911561 27.52898011
 15.54701758 19.78630176 36.88641203 18.81202132  9.35912225 18.49452615
 30.66499315 24.30184448 19.08220837 34.11391208 29.81386585 17.51775647
 34.91026707 26.54967053 34.71035391 27.4268996  19.09095832 14.92742976
 30.86877936 15.88271775 37.17548808  7.72101675 16.24074861 17.19211608
  7.42140081 20.0098852  40.58481466 28.93190595 25.25404307 17.74970308
 38.76446932  6.87996052 21.80450956 25.29110265 20.427491   20.4698034
 17.25330064 26.12442519  8.48268143 27.50871869 30.58284841 16.56039764
  9.38919181 35.54434377 32.29801978 21.81298945 17.60263689 22.0804256
 23.49262401 24.10617033 20.1346492  38.5268066  24.58319594 19.78072415
 13.93429891  6.75507808 42.03759064 21.9215625  16.91352899 22.58327744
 40.76440704 21.3998946  36.89912238 27.19273661 20.97945544 20.37925063
 25.3536439  22.18729123 31.13342301 20.39451125 23.99224334 31.54729547
 26.74581308 20.90199941 29.08225233 21.98331503 26.29101202 20.17329401
 25.49225305 24.09171045 19.90739221 16.35154974 15.25184758 18.40766132
 24.83797801 16.61703662 20.89470344 26.70854061 20.7591883  17.88403312
 24.28656105 23.37651493 21.64202047 36.81476219 15.86570054 21.42338732
 32.81366203 33.74086414 20.61688336 26.88191023 22.65739323 17.35731771
 21.67699248 21.65034728 27.66728556 25.04691687 23.73976625 14.6649641
 15.17700342  3.81620663 29.18194848 20.68544417 22.32934783 28.01568563
 28.58237108]
Normal equation mean squared error:
 20.627513763095408
Number of features:
 (506, 13)
Gradient descent weight coefficients:
 [-0.4624682   0.85310147 -0.41793615  0.79016317 -1.58819125  2.82331698
 -0.12780695 -3.07548708  1.58334804 -0.80580055 -1.67621129  0.83455691
 -3.84880374]
Gradient descent bias:
 [22.62028956]
Predicted house prices:
 [28.27104476 31.44159001 21.4728419  32.53183536 20.26506978 19.39525836
 21.38172112 19.3477787  19.6073809  32.77869064 21.37633147 27.3912143
 15.79389338 20.13760669 36.97897227 18.61016551 10.01166692 18.67154939
 30.59097488 24.10885559 19.37355226 33.91391289 29.39117624 17.71619742
 34.57977019 26.32813033 33.86000149 27.06748704 19.5009177  15.736943
 30.67530056 14.49837266 37.02110191  9.73859933 16.46229518 17.08997848
  8.38477865 20.00369312 40.12371516 28.97839004 25.14132847 18.20574431
 39.51342779  7.16892105 21.7196911  24.89682853 21.10669613 20.83018292
 16.99262614 26.66990016 10.0825798  26.87716939 30.49497382 17.15987238
 10.17874737 35.28181405 31.02624753 22.87829988 17.59848779 21.65279837
 23.53432266 23.81182395 20.34512093 37.81098585 25.62053014 19.98302905
 14.65848484  7.22478243 42.39993685 21.68205793 17.0716208  22.47016643
 40.71311823 21.56150521 36.70513056 27.03728313 21.67778414 20.85852597
 25.09587788 23.77993142 31.32222824 20.03233066 23.88327297 30.99165591
 27.23204474 21.13838804 28.91204778 21.87776484 26.70676265 18.75707483
 24.7860842  23.88800727 20.26980412 18.6957649  15.97571016 18.52487435
 24.37941164 17.2020443  20.90239453 26.75648401 21.03796392 18.37572114
 24.15367079 23.19510172 20.23324701 36.16214809 16.06795588 22.38996372
 32.32751854 33.83260717 20.51852017 25.74490967 23.5616358  17.77430401
 21.36066117 21.39874147 27.10309735 24.87124652 23.58895189 14.79619662
 16.29260012  4.20777874 29.05812205 20.95408333 22.13772381 27.85120444
 28.34863595]
Gradient descent mean squared error:
 21.760303208876774
Number of features:
 (506, 13)
Ridge regression weight coefficients:
 [-0.64193209  1.13369189 -0.07675643  0.74427624 -1.93681163  2.71424838
 -0.08171268 -3.27871121  2.45697934 -1.81200596 -1.74659067  0.87272606
 -3.90544403]
Ridge regression bias:
 22.62137203166228
Predicted house prices:
 [28.22536271 31.50554479 21.13191715 32.65799504 20.02127243 19.07245621
 21.10832868 19.61646071 19.63294981 32.85629282 20.99521805 27.5039205
 15.55295503 19.79534148 36.87534254 18.80312973  9.39151837 18.50769876
 30.66823994 24.3042416  19.08011554 34.10075629 29.79356171 17.51074566
 34.89376386 26.53739131 34.68266415 27.42811508 19.08866098 14.98888119
 30.85920064 15.82430706 37.18223651  7.77072879 16.25978968 17.17327251
  7.44393003 19.99708381 40.57013125 28.94670553 25.25487557 17.75476957
 38.77349313  6.87948646 21.78603146 25.27475292 20.4507104  20.47911411
 17.25121804 26.12109499  8.54773286 27.48936704 30.58050833 16.56570322
  9.40627771 35.52573005 32.2505845  21.8734037  17.61137983 22.08222631
 23.49713296 24.09419259 20.15174912 38.49803353 24.63926151 19.77214318
 13.95001219  6.7578343  42.03931243 21.92262496 16.89673286 22.59476215
 40.75560357 21.42352637 36.88420001 27.18201696 21.03801678 20.39349944
 25.35646095 22.27374662 31.142768   20.39361408 23.99587493 31.54490413
 26.76213545 20.8977756  29.0705695  21.99584672 26.30581808 20.10938421
 25.47834262 24.08620166 19.90788343 16.41215513 15.26575844 18.40106165
 24.82285704 16.61995784 20.87907604 26.70640134 20.75218143 17.88976552
 24.27287641 23.36686439 21.57861455 36.78815164 15.88447635 21.47747831
 32.80013402 33.71367379 20.61690009 26.83175792 22.69265611 17.38149366
 21.67395385 21.67101719 27.6669245  25.06785897 23.73251233 14.65355067
 15.19441045  3.81755887 29.1743764  20.68219692 22.33163756 28.01411044
 28.55668351]
Ridge regression mean squared error:
 20.641771606180903