This column contains notes I took while self-studying videos recorded by generous uploaders on Bilibili. The material largely comes from those Bilibili video creators and from explanations by various experts on CSDN; my role is only that of a reposter. Because the sources are too scattered, I am unable to attribute them individually.
1. Improving Linear Regression: Ridge Regression
Ridge regression is, at heart, still linear regression. The difference is that when the algorithm builds the regression equation, it adds a regularization constraint, which mitigates overfitting.
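In symbols, ridge regression minimizes the ordinary least-squares loss plus an L2 penalty on the weights, with alpha (λ) controlling the penalty strength; this is the objective sklearn's Ridge solves:

\min_w \; \lVert Xw - y \rVert_2^2 + \alpha \lVert w \rVert_2^2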
1.1 岭回归算法API
sklearn.linear_model.Ridge(alpha=1.0, fit_intercept=True, solver="auto", normalize=False)
- Linear regression with L2 regularization
- alpha: regularization strength, also called λ (see the shrinkage sketch after the solver note below)
  - typical values of λ: 0~1 or 1~10
- solver: chooses an optimization method automatically based on the data
  - sag: if both the dataset and the number of features are large, choose this stochastic average gradient (SAG) optimizer
- normalize: whether to standardize the data
  - normalize=False: you can call preprocessing.StandardScaler to standardize the data before fit instead
- Ridge.coef_: regression weights
- Ridge.intercept_: regression bias
All of the last four solvers support both dense and sparse data. However, only 'sag' supports sparse input when 'fit_intercept' is True.
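To make the effect of alpha concrete, here is a minimal sketch on toy data (the data and true coefficients are invented purely for illustration): as alpha grows, the learned weights are shrunk toward zero.

import numpy as np
from sklearn.linear_model import Ridge

# Toy regression data, invented purely for illustration
rng = np.random.RandomState(0)
X = rng.randn(50, 3)
y = X @ np.array([3.0, -2.0, 0.5]) + rng.randn(50) * 0.1

for alpha in (0.1, 1.0, 10.0):
    model = Ridge(alpha=alpha).fit(X, y)
    # Larger alpha -> stronger L2 penalty -> smaller coefficients
    print(alpha, model.coef_)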
The Ridge estimator is roughly equivalent to:
SGDRegressor(penalty='l2', loss="squared_loss") (the loss was renamed to "squared_error" in newer scikit-learn versions); the difference is that SGDRegressor implements plain stochastic gradient descent, so Ridge (which implements SAG) is recommended.
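A rough sketch of that equivalence on toy data. Note it is only approximate: SGD's result depends on its learning-rate schedule and iteration count, and alpha is not on exactly the same scale in the two APIs.

import numpy as np
from sklearn.linear_model import Ridge, SGDRegressor

rng = np.random.RandomState(42)
X = rng.randn(100, 2)
y = 4.0 * X[:, 0] - 1.0 * X[:, 1] + rng.randn(100) * 0.1

# Ridge with the SAG solver (the recommended route)
ridge = Ridge(alpha=0.01, solver="sag").fit(X, y)

# Plain SGD with an L2 penalty: conceptually the same objective
sgd = SGDRegressor(penalty="l2", alpha=0.01, max_iter=5000).fit(X, y)

print(ridge.coef_)
print(sgd.coef_)  # close to ridge.coef_, but not identical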
- sklearn.linear_model.RidgeCV(_BaseRidgeCV, RegressorMixin)
  - Linear regression with L2 regularization and built-in cross-validation
  - coef_: regression coefficients
class _BaseRidgeCV(LinearModel):
    def __init__(self, alphas=(0.1, 1.0, 10.0), fit_intercept=True, normalize=False,
                 scoring=None, cv=None, gcv_mode=None, store_cv_values=False):
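A minimal RidgeCV sketch (toy data invented for illustration): it cross-validates over the candidate alphas and exposes the winner as alpha_ after fitting.

import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.RandomState(0)
X = rng.randn(80, 4)
y = X @ np.array([1.0, 0.0, -2.0, 0.5]) + rng.randn(80) * 0.1

# Tries each candidate alpha with 5-fold cross-validation
model = RidgeCV(alphas=(0.1, 1.0, 10.0), cv=5).fit(X, y)
print(model.alpha_)  # the alpha selected by cross-validation
print(model.coef_)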
Using ridge regression to predict Boston housing prices:
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression,SGDRegressor,Ridge
from sklearn.metrics import mean_squared_error
def Ridge_regression():
    """
    Predict Boston housing prices with ridge regression.
    :return: None
    """
    # 1) Load the data (note: load_boston was removed in scikit-learn 1.2,
    #    so this example requires an older version)
    boston = load_boston()
    # 2) Split into training and test sets
    x_train, x_test, y_train, y_test = train_test_split(boston.data, boston.target, random_state=5)
    # 3) Standardize the features (fit the scaler on the training set only)
    transfer = StandardScaler()
    x_train = transfer.fit_transform(x_train)
    x_test = transfer.transform(x_test)
    # 4) Build and fit the estimator
    estimator = Ridge()
    estimator.fit(x_train, y_train)
    # 5) Inspect the model
    print("Ridge regression coefficients:\n", estimator.coef_)
    print("Ridge regression intercept:\n", estimator.intercept_)
    # 6) Evaluate the model on the test set
    prediction = estimator.predict(x_test)
    print("Predicted house prices:\n", prediction)
    error = mean_squared_error(y_test, prediction)
    print("Ridge regression mean squared error:\n", error)


if __name__ == '__main__':
    # Ridge regression
    Ridge_regression()